Test Report: Hyper-V_Windows 18429

ce47e36c27c610c668eed9e63157fcf5091ee2ba:2024-03-18:33630

Tests failed (14/210)

TestAddons/parallel/Registry (71.26s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:330: registry stabilized in 28.6894ms
addons_test.go:332: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-59rlx" [b15348ce-750d-45d2-a9c4-bb54040d40dc] Running
addons_test.go:332: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.0230356s
addons_test.go:335: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-rzf2x" [39d3e807-26d5-411d-b2a3-11f2d029f106] Running
addons_test.go:335: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.0261056s
addons_test.go:340: (dbg) Run:  kubectl --context addons-209500 delete po -l run=registry-test --now
addons_test.go:345: (dbg) Run:  kubectl --context addons-209500 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:345: (dbg) Done: kubectl --context addons-209500 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.8798777s)
addons_test.go:359: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-209500 ip
addons_test.go:359: (dbg) Done: out/minikube-windows-amd64.exe -p addons-209500 ip: (2.9331826s)
addons_test.go:364: expected stderr to be -empty- but got: *"W0318 11:11:28.364154    3452 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube3\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\n"* .  args "out/minikube-windows-amd64.exe -p addons-209500 ip"
2024/03/18 11:11:31 [DEBUG] GET http://172.30.141.150:5000
addons_test.go:388: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-209500 addons disable registry --alsologtostderr -v=1
addons_test.go:388: (dbg) Done: out/minikube-windows-amd64.exe -p addons-209500 addons disable registry --alsologtostderr -v=1: (15.5362539s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p addons-209500 -n addons-209500
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p addons-209500 -n addons-209500: (12.2955622s)
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-209500 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p addons-209500 logs -n 25: (9.1691986s)
helpers_test.go:252: TestAddons/parallel/Registry logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| Command |                 Args                 |       Profile        |       User        | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| start   | -o=json --download-only              | download-only-366800 | minikube3\jenkins | v1.32.0 | 18 Mar 24 11:03 UTC |                     |
	|         | -p download-only-366800              |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr            |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.20.0         |                      |                   |         |                     |                     |
	|         | --container-runtime=docker           |                      |                   |         |                     |                     |
	|         | --driver=hyperv                      |                      |                   |         |                     |                     |
	| delete  | --all                                | minikube             | minikube3\jenkins | v1.32.0 | 18 Mar 24 11:03 UTC | 18 Mar 24 11:03 UTC |
	| delete  | -p download-only-366800              | download-only-366800 | minikube3\jenkins | v1.32.0 | 18 Mar 24 11:03 UTC | 18 Mar 24 11:03 UTC |
	| start   | -o=json --download-only              | download-only-878600 | minikube3\jenkins | v1.32.0 | 18 Mar 24 11:03 UTC |                     |
	|         | -p download-only-878600              |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr            |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.28.4         |                      |                   |         |                     |                     |
	|         | --container-runtime=docker           |                      |                   |         |                     |                     |
	|         | --driver=hyperv                      |                      |                   |         |                     |                     |
	| delete  | --all                                | minikube             | minikube3\jenkins | v1.32.0 | 18 Mar 24 11:04 UTC | 18 Mar 24 11:04 UTC |
	| delete  | -p download-only-878600              | download-only-878600 | minikube3\jenkins | v1.32.0 | 18 Mar 24 11:04 UTC | 18 Mar 24 11:04 UTC |
	| start   | -o=json --download-only              | download-only-330700 | minikube3\jenkins | v1.32.0 | 18 Mar 24 11:04 UTC |                     |
	|         | -p download-only-330700              |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr            |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2    |                      |                   |         |                     |                     |
	|         | --container-runtime=docker           |                      |                   |         |                     |                     |
	|         | --driver=hyperv                      |                      |                   |         |                     |                     |
	| delete  | --all                                | minikube             | minikube3\jenkins | v1.32.0 | 18 Mar 24 11:04 UTC | 18 Mar 24 11:04 UTC |
	| delete  | -p download-only-330700              | download-only-330700 | minikube3\jenkins | v1.32.0 | 18 Mar 24 11:04 UTC | 18 Mar 24 11:04 UTC |
	| delete  | -p download-only-366800              | download-only-366800 | minikube3\jenkins | v1.32.0 | 18 Mar 24 11:04 UTC | 18 Mar 24 11:04 UTC |
	| delete  | -p download-only-878600              | download-only-878600 | minikube3\jenkins | v1.32.0 | 18 Mar 24 11:04 UTC | 18 Mar 24 11:04 UTC |
	| delete  | -p download-only-330700              | download-only-330700 | minikube3\jenkins | v1.32.0 | 18 Mar 24 11:04 UTC | 18 Mar 24 11:04 UTC |
	| start   | --download-only -p                   | binary-mirror-991800 | minikube3\jenkins | v1.32.0 | 18 Mar 24 11:04 UTC |                     |
	|         | binary-mirror-991800                 |                      |                   |         |                     |                     |
	|         | --alsologtostderr                    |                      |                   |         |                     |                     |
	|         | --binary-mirror                      |                      |                   |         |                     |                     |
	|         | http://127.0.0.1:50175               |                      |                   |         |                     |                     |
	|         | --driver=hyperv                      |                      |                   |         |                     |                     |
	| delete  | -p binary-mirror-991800              | binary-mirror-991800 | minikube3\jenkins | v1.32.0 | 18 Mar 24 11:04 UTC | 18 Mar 24 11:04 UTC |
	| addons  | disable dashboard -p                 | addons-209500        | minikube3\jenkins | v1.32.0 | 18 Mar 24 11:04 UTC |                     |
	|         | addons-209500                        |                      |                   |         |                     |                     |
	| addons  | enable dashboard -p                  | addons-209500        | minikube3\jenkins | v1.32.0 | 18 Mar 24 11:04 UTC |                     |
	|         | addons-209500                        |                      |                   |         |                     |                     |
	| start   | -p addons-209500 --wait=true         | addons-209500        | minikube3\jenkins | v1.32.0 | 18 Mar 24 11:04 UTC | 18 Mar 24 11:11 UTC |
	|         | --memory=4000 --alsologtostderr      |                      |                   |         |                     |                     |
	|         | --addons=registry                    |                      |                   |         |                     |                     |
	|         | --addons=metrics-server              |                      |                   |         |                     |                     |
	|         | --addons=volumesnapshots             |                      |                   |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |                      |                   |         |                     |                     |
	|         | --addons=gcp-auth                    |                      |                   |         |                     |                     |
	|         | --addons=cloud-spanner               |                      |                   |         |                     |                     |
	|         | --addons=inspektor-gadget            |                      |                   |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |                      |                   |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |                      |                   |         |                     |                     |
	|         | --addons=yakd --driver=hyperv        |                      |                   |         |                     |                     |
	|         | --addons=ingress                     |                      |                   |         |                     |                     |
	|         | --addons=ingress-dns                 |                      |                   |         |                     |                     |
	|         | --addons=helm-tiller                 |                      |                   |         |                     |                     |
	| addons  | enable headlamp                      | addons-209500        | minikube3\jenkins | v1.32.0 | 18 Mar 24 11:11 UTC | 18 Mar 24 11:11 UTC |
	|         | -p addons-209500                     |                      |                   |         |                     |                     |
	|         | --alsologtostderr -v=1               |                      |                   |         |                     |                     |
	| addons  | disable inspektor-gadget -p          | addons-209500        | minikube3\jenkins | v1.32.0 | 18 Mar 24 11:11 UTC | 18 Mar 24 11:11 UTC |
	|         | addons-209500                        |                      |                   |         |                     |                     |
	| addons  | addons-209500 addons disable         | addons-209500        | minikube3\jenkins | v1.32.0 | 18 Mar 24 11:11 UTC | 18 Mar 24 11:11 UTC |
	|         | helm-tiller --alsologtostderr        |                      |                   |         |                     |                     |
	|         | -v=1                                 |                      |                   |         |                     |                     |
	| ip      | addons-209500 ip                     | addons-209500        | minikube3\jenkins | v1.32.0 | 18 Mar 24 11:11 UTC | 18 Mar 24 11:11 UTC |
	| addons  | addons-209500 addons disable         | addons-209500        | minikube3\jenkins | v1.32.0 | 18 Mar 24 11:11 UTC | 18 Mar 24 11:11 UTC |
	|         | registry --alsologtostderr           |                      |                   |         |                     |                     |
	|         | -v=1                                 |                      |                   |         |                     |                     |
	|---------|--------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/18 11:04:44
	Running on machine: minikube3
	Binary: Built with gc go1.22.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0318 11:04:44.141595    5404 out.go:291] Setting OutFile to fd 808 ...
	I0318 11:04:44.142640    5404 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 11:04:44.142703    5404 out.go:304] Setting ErrFile to fd 512...
	I0318 11:04:44.142703    5404 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 11:04:44.164889    5404 out.go:298] Setting JSON to false
	I0318 11:04:44.167107    5404 start.go:129] hostinfo: {"hostname":"minikube3","uptime":308461,"bootTime":1710451422,"procs":189,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4046 Build 19045.4046","kernelVersion":"10.0.19045.4046 Build 19045.4046","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"a0f355d5-8b6e-4346-9071-73232725d096"}
	W0318 11:04:44.167107    5404 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0318 11:04:44.172163    5404 out.go:177] * [addons-209500] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4046 Build 19045.4046
	I0318 11:04:44.176675    5404 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	I0318 11:04:44.176108    5404 notify.go:220] Checking for updates...
	I0318 11:04:44.178930    5404 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 11:04:44.181080    5404 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube3\minikube-integration\.minikube
	I0318 11:04:44.184160    5404 out.go:177]   - MINIKUBE_LOCATION=18429
	I0318 11:04:44.186856    5404 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 11:04:44.190998    5404 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 11:04:49.571984    5404 out.go:177] * Using the hyperv driver based on user configuration
	I0318 11:04:49.575905    5404 start.go:297] selected driver: hyperv
	I0318 11:04:49.575986    5404 start.go:901] validating driver "hyperv" against <nil>
	I0318 11:04:49.575986    5404 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 11:04:49.623809    5404 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0318 11:04:49.624920    5404 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 11:04:49.624920    5404 cni.go:84] Creating CNI manager for ""
	I0318 11:04:49.624920    5404 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0318 11:04:49.626030    5404 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0318 11:04:49.626292    5404 start.go:340] cluster config:
	{Name:addons-209500 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-209500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:
SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 11:04:49.626292    5404 iso.go:125] acquiring lock: {Name:mkefa7a887b9de67c22e0418da52288a5b488924 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 11:04:49.631939    5404 out.go:177] * Starting "addons-209500" primary control-plane node in "addons-209500" cluster
	I0318 11:04:49.634451    5404 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0318 11:04:49.634569    5404 preload.go:147] Found local preload: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I0318 11:04:49.634569    5404 cache.go:56] Caching tarball of preloaded images
	I0318 11:04:49.634569    5404 preload.go:173] Found C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0318 11:04:49.635249    5404 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0318 11:04:49.635287    5404 profile.go:142] Saving config to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-209500\config.json ...
	I0318 11:04:49.636043    5404 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-209500\config.json: {Name:mk012c657c4bb4bfea4fde33d785df0da2e20bd6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 11:04:49.637266    5404 start.go:360] acquireMachinesLock for addons-209500: {Name:mk88ace50ad3bf72786f3a589a5328076247f3a1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 11:04:49.637486    5404 start.go:364] duration metric: took 184.9µs to acquireMachinesLock for "addons-209500"
	I0318 11:04:49.637640    5404 start.go:93] Provisioning new machine with config: &{Name:addons-209500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.28.4 ClusterName:addons-209500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptio
ns:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 11:04:49.637640    5404 start.go:125] createHost starting for "" (driver="hyperv")
	I0318 11:04:49.641428    5404 out.go:204] * Creating hyperv VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0318 11:04:49.642024    5404 start.go:159] libmachine.API.Create for "addons-209500" (driver="hyperv")
	I0318 11:04:49.642128    5404 client.go:168] LocalClient.Create starting
	I0318 11:04:49.643209    5404 main.go:141] libmachine: Creating CA: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem
	I0318 11:04:50.000988    5404 main.go:141] libmachine: Creating client certificate: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem
	I0318 11:04:50.368077    5404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0318 11:04:52.495719    5404 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0318 11:04:52.495719    5404 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:04:52.495881    5404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0318 11:04:54.213964    5404 main.go:141] libmachine: [stdout =====>] : False
	
	I0318 11:04:54.213964    5404 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:04:54.214688    5404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0318 11:04:55.723313    5404 main.go:141] libmachine: [stdout =====>] : True
	
	I0318 11:04:55.724381    5404 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:04:55.724449    5404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0318 11:04:59.544717    5404 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0318 11:04:59.545334    5404 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:04:59.547640    5404 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube3/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.32.1-1710520390-17991-amd64.iso...
	I0318 11:04:59.942949    5404 main.go:141] libmachine: Creating SSH key...
	I0318 11:05:00.324691    5404 main.go:141] libmachine: Creating VM...
	I0318 11:05:00.324691    5404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0318 11:05:03.129794    5404 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0318 11:05:03.129917    5404 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:05:03.130028    5404 main.go:141] libmachine: Using switch "Default Switch"
	I0318 11:05:03.130111    5404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0318 11:05:04.905503    5404 main.go:141] libmachine: [stdout =====>] : True
	
	I0318 11:05:04.905503    5404 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:05:04.905834    5404 main.go:141] libmachine: Creating VHD
	I0318 11:05:04.905834    5404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\addons-209500\fixed.vhd' -SizeBytes 10MB -Fixed
	I0318 11:05:08.617760    5404 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube3
	Path                    : C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\addons-209500\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 701D490E-33DA-48AE-9AF2-7994A9C72369
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0318 11:05:08.618541    5404 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:05:08.618541    5404 main.go:141] libmachine: Writing magic tar header
	I0318 11:05:08.618622    5404 main.go:141] libmachine: Writing SSH key tar header
	I0318 11:05:08.627875    5404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\addons-209500\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\addons-209500\disk.vhd' -VHDType Dynamic -DeleteSource
	I0318 11:05:11.789630    5404 main.go:141] libmachine: [stdout =====>] : 
	I0318 11:05:11.790450    5404 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:05:11.790505    5404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\addons-209500\disk.vhd' -SizeBytes 20000MB
	I0318 11:05:14.280390    5404 main.go:141] libmachine: [stdout =====>] : 
	I0318 11:05:14.281247    5404 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:05:14.281324    5404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM addons-209500 -Path 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\addons-209500' -SwitchName 'Default Switch' -MemoryStartupBytes 4000MB
	I0318 11:05:17.949444    5404 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%!)(MISSING) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	addons-209500 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0318 11:05:17.949444    5404 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:05:17.949789    5404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName addons-209500 -DynamicMemoryEnabled $false
	I0318 11:05:20.197514    5404 main.go:141] libmachine: [stdout =====>] : 
	I0318 11:05:20.198253    5404 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:05:20.198322    5404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor addons-209500 -Count 2
	I0318 11:05:22.398809    5404 main.go:141] libmachine: [stdout =====>] : 
	I0318 11:05:22.399707    5404 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:05:22.399707    5404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName addons-209500 -Path 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\addons-209500\boot2docker.iso'
	I0318 11:05:24.966224    5404 main.go:141] libmachine: [stdout =====>] : 
	I0318 11:05:24.966224    5404 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:05:24.967300    5404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName addons-209500 -Path 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\addons-209500\disk.vhd'
	I0318 11:05:27.611955    5404 main.go:141] libmachine: [stdout =====>] : 
	I0318 11:05:27.612587    5404 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:05:27.612587    5404 main.go:141] libmachine: Starting VM...
	I0318 11:05:27.612713    5404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM addons-209500
	I0318 11:05:30.683561    5404 main.go:141] libmachine: [stdout =====>] : 
	I0318 11:05:30.684224    5404 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:05:30.684224    5404 main.go:141] libmachine: Waiting for host to start...
	I0318 11:05:30.684224    5404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-209500 ).state
	I0318 11:05:32.935931    5404 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:05:32.936078    5404 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:05:32.936078    5404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-209500 ).networkadapters[0]).ipaddresses[0]
	I0318 11:05:35.441222    5404 main.go:141] libmachine: [stdout =====>] : 
	I0318 11:05:35.441222    5404 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:05:36.451446    5404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-209500 ).state
	I0318 11:05:38.633644    5404 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:05:38.634147    5404 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:05:38.634203    5404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-209500 ).networkadapters[0]).ipaddresses[0]
	I0318 11:05:41.154940    5404 main.go:141] libmachine: [stdout =====>] : 
	I0318 11:05:41.155457    5404 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:05:42.170694    5404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-209500 ).state
	I0318 11:05:44.338933    5404 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:05:44.338933    5404 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:05:44.338933    5404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-209500 ).networkadapters[0]).ipaddresses[0]
	I0318 11:05:46.874396    5404 main.go:141] libmachine: [stdout =====>] : 
	I0318 11:05:46.875397    5404 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:05:47.880613    5404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-209500 ).state
	I0318 11:05:50.049685    5404 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:05:50.050529    5404 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:05:50.050589    5404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-209500 ).networkadapters[0]).ipaddresses[0]
	I0318 11:05:52.550249    5404 main.go:141] libmachine: [stdout =====>] : 
	I0318 11:05:52.550341    5404 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:05:53.557587    5404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-209500 ).state
	I0318 11:05:55.768081    5404 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:05:55.768081    5404 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:05:55.769106    5404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-209500 ).networkadapters[0]).ipaddresses[0]
	I0318 11:05:58.280693    5404 main.go:141] libmachine: [stdout =====>] : 172.30.141.150
	
	I0318 11:05:58.281340    5404 main.go:141] libmachine: [stderr =====>] : 
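	The "Waiting for host to start" loop above polls the VM state and the first NIC's first address roughly once per second until Hyper-V reports an IP. A minimal sketch of that retry pattern in Python (the helper name and timeout are hypothetical, not minikube's actual code):

```python
import time

def wait_for_ip(probe, interval=1.0, timeout=120.0):
    """Poll probe() until it returns a non-empty IP string.

    probe is expected to run something like
      (( Hyper-V\\Get-VM <name> ).networkadapters[0]).ipaddresses[0]
    and return '' while the guest has no address yet.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        ip = probe()
        if ip:
            return ip
        time.sleep(interval)
    raise TimeoutError("VM never reported an IP address")

# Simulated probe: empty twice, then an address (mirrors the log above).
answers = iter(["", "", "172.30.141.150"])
print(wait_for_ip(lambda: next(answers), interval=0.01))
```

	In the log the probe comes back empty for several rounds before `172.30.141.150` finally appears.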
	I0318 11:05:58.281421    5404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-209500 ).state
	I0318 11:06:00.392093    5404 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:06:00.392093    5404 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:06:00.392093    5404 machine.go:94] provisionDockerMachine start ...
	I0318 11:06:00.392093    5404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-209500 ).state
	I0318 11:06:02.523605    5404 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:06:02.523975    5404 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:06:02.523975    5404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-209500 ).networkadapters[0]).ipaddresses[0]
	I0318 11:06:05.038821    5404 main.go:141] libmachine: [stdout =====>] : 172.30.141.150
	
	I0318 11:06:05.039068    5404 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:06:05.044712    5404 main.go:141] libmachine: Using SSH client type: native
	I0318 11:06:05.054688    5404 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa89f80] 0xa8cb60 <nil>  [] 0s} 172.30.141.150 22 <nil> <nil>}
	I0318 11:06:05.054688    5404 main.go:141] libmachine: About to run SSH command:
	hostname
	I0318 11:06:05.188581    5404 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0318 11:06:05.188581    5404 buildroot.go:166] provisioning hostname "addons-209500"
	I0318 11:06:05.188752    5404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-209500 ).state
	I0318 11:06:07.319157    5404 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:06:07.320023    5404 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:06:07.320284    5404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-209500 ).networkadapters[0]).ipaddresses[0]
	I0318 11:06:09.826005    5404 main.go:141] libmachine: [stdout =====>] : 172.30.141.150
	
	I0318 11:06:09.826292    5404 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:06:09.833310    5404 main.go:141] libmachine: Using SSH client type: native
	I0318 11:06:09.833914    5404 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa89f80] 0xa8cb60 <nil>  [] 0s} 172.30.141.150 22 <nil> <nil>}
	I0318 11:06:09.833914    5404 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-209500 && echo "addons-209500" | sudo tee /etc/hostname
	I0318 11:06:09.988272    5404 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-209500
	
	I0318 11:06:09.988272    5404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-209500 ).state
	I0318 11:06:12.152113    5404 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:06:12.152113    5404 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:06:12.152202    5404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-209500 ).networkadapters[0]).ipaddresses[0]
	I0318 11:06:14.700842    5404 main.go:141] libmachine: [stdout =====>] : 172.30.141.150
	
	I0318 11:06:14.700842    5404 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:06:14.707785    5404 main.go:141] libmachine: Using SSH client type: native
	I0318 11:06:14.707785    5404 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa89f80] 0xa8cb60 <nil>  [] 0s} 172.30.141.150 22 <nil> <nil>}
	I0318 11:06:14.708348    5404 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-209500' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-209500/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-209500' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0318 11:06:14.843466    5404 main.go:141] libmachine: SSH cmd err, output: <nil>: 
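	The shell snippet just above makes the new hostname resolve locally: skip if any `/etc/hosts` line already ends with the name, otherwise rewrite an existing `127.0.1.1` line or append one. The same logic, sketched as a pure function over the hosts-file text (the helper is hypothetical):

```python
import re

def ensure_host_alias(hosts_text: str, name: str) -> str:
    """Mirror the shell snippet above: make sure <name> resolves locally.

    - If any line already ends with the hostname, leave the file alone.
    - Else rewrite an existing '127.0.1.1 ...' line, or append one.
    """
    if re.search(rf"^.*\s{re.escape(name)}$", hosts_text, re.M):
        return hosts_text
    if re.search(r"^127\.0\.1\.1\s.*$", hosts_text, re.M):
        return re.sub(r"^127\.0\.1\.1\s.*$", f"127.0.1.1 {name}",
                      hosts_text, count=1, flags=re.M)
    return hosts_text.rstrip("\n") + f"\n127.0.1.1 {name}\n"

print(ensure_host_alias("127.0.0.1 localhost\n", "addons-209500"))
```

	Like the shell version, the function is idempotent: running it twice leaves the file unchanged.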
	I0318 11:06:14.843466    5404 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube3\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube3\minikube-integration\.minikube}
	I0318 11:06:14.843466    5404 buildroot.go:174] setting up certificates
	I0318 11:06:14.843466    5404 provision.go:84] configureAuth start
	I0318 11:06:14.843466    5404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-209500 ).state
	I0318 11:06:16.987865    5404 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:06:16.987865    5404 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:06:16.987865    5404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-209500 ).networkadapters[0]).ipaddresses[0]
	I0318 11:06:19.536484    5404 main.go:141] libmachine: [stdout =====>] : 172.30.141.150
	
	I0318 11:06:19.537537    5404 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:06:19.537537    5404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-209500 ).state
	I0318 11:06:21.740731    5404 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:06:21.740985    5404 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:06:21.741118    5404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-209500 ).networkadapters[0]).ipaddresses[0]
	I0318 11:06:24.284093    5404 main.go:141] libmachine: [stdout =====>] : 172.30.141.150
	
	I0318 11:06:24.284093    5404 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:06:24.284754    5404 provision.go:143] copyHostCerts
	I0318 11:06:24.284884    5404 exec_runner.go:151] cp: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube3\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0318 11:06:24.286644    5404 exec_runner.go:151] cp: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube3\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0318 11:06:24.288561    5404 exec_runner.go:151] cp: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube3\minikube-integration\.minikube/key.pem (1679 bytes)
	I0318 11:06:24.289550    5404 provision.go:117] generating server cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.addons-209500 san=[127.0.0.1 172.30.141.150 addons-209500 localhost minikube]
	I0318 11:06:24.561223    5404 provision.go:177] copyRemoteCerts
	I0318 11:06:24.574077    5404 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0318 11:06:24.574239    5404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-209500 ).state
	I0318 11:06:26.779923    5404 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:06:26.779923    5404 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:06:26.780001    5404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-209500 ).networkadapters[0]).ipaddresses[0]
	I0318 11:06:29.374451    5404 main.go:141] libmachine: [stdout =====>] : 172.30.141.150
	
	I0318 11:06:29.375267    5404 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:06:29.375420    5404 sshutil.go:53] new ssh client: &{IP:172.30.141.150 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\addons-209500\id_rsa Username:docker}
	I0318 11:06:29.470175    5404 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.8959925s)
	I0318 11:06:29.470175    5404 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0318 11:06:29.514370    5404 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0318 11:06:29.559278    5404 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0318 11:06:29.604313    5404 provision.go:87] duration metric: took 14.7607361s to configureAuth
	I0318 11:06:29.604313    5404 buildroot.go:189] setting minikube options for container-runtime
	I0318 11:06:29.604905    5404 config.go:182] Loaded profile config "addons-209500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 11:06:29.604967    5404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-209500 ).state
	I0318 11:06:31.715694    5404 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:06:31.715694    5404 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:06:31.716476    5404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-209500 ).networkadapters[0]).ipaddresses[0]
	I0318 11:06:34.216853    5404 main.go:141] libmachine: [stdout =====>] : 172.30.141.150
	
	I0318 11:06:34.216853    5404 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:06:34.223246    5404 main.go:141] libmachine: Using SSH client type: native
	I0318 11:06:34.223246    5404 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa89f80] 0xa8cb60 <nil>  [] 0s} 172.30.141.150 22 <nil> <nil>}
	I0318 11:06:34.223246    5404 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0318 11:06:34.350074    5404 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0318 11:06:34.350271    5404 buildroot.go:70] root file system type: tmpfs
	I0318 11:06:34.350425    5404 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0318 11:06:34.350425    5404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-209500 ).state
	I0318 11:06:36.428014    5404 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:06:36.428417    5404 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:06:36.428543    5404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-209500 ).networkadapters[0]).ipaddresses[0]
	I0318 11:06:38.952070    5404 main.go:141] libmachine: [stdout =====>] : 172.30.141.150
	
	I0318 11:06:38.952070    5404 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:06:38.959406    5404 main.go:141] libmachine: Using SSH client type: native
	I0318 11:06:38.960298    5404 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa89f80] 0xa8cb60 <nil>  [] 0s} 172.30.141.150 22 <nil> <nil>}
	I0318 11:06:38.960298    5404 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0318 11:06:39.111970    5404 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0318 11:06:39.112593    5404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-209500 ).state
	I0318 11:06:41.218676    5404 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:06:41.218676    5404 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:06:41.219210    5404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-209500 ).networkadapters[0]).ipaddresses[0]
	I0318 11:06:43.702413    5404 main.go:141] libmachine: [stdout =====>] : 172.30.141.150
	
	I0318 11:06:43.703334    5404 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:06:43.708272    5404 main.go:141] libmachine: Using SSH client type: native
	I0318 11:06:43.708442    5404 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa89f80] 0xa8cb60 <nil>  [] 0s} 172.30.141.150 22 <nil> <nil>}
	I0318 11:06:43.708442    5404 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0318 11:06:45.814067    5404 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
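	The unit file is written to `docker.service.new` and only swapped into place when it differs from the installed unit (here `diff` fails because no unit existed yet, so the new file is moved in and the service enabled). A sketch of that compare-and-swap step under those assumptions (the helper itself is hypothetical):

```python
import filecmp
import os
import shutil
import tempfile

def install_if_changed(new_path: str, live_path: str) -> bool:
    """Mirror the 'diff || mv + daemon-reload' step from the log:
    replace live_path with new_path only when the contents differ
    (or live_path does not exist yet). Returns True when swapped."""
    if os.path.exists(live_path) and filecmp.cmp(new_path, live_path,
                                                 shallow=False):
        os.remove(new_path)  # identical: discard the staged copy
        return False
    shutil.move(new_path, live_path)
    # The real flow then runs:
    #   systemctl -f daemon-reload && systemctl -f enable docker \
    #     && systemctl -f restart docker
    return True

demo = tempfile.mkdtemp()
src = os.path.join(demo, "docker.service.new")
dst = os.path.join(demo, "docker.service")
with open(src, "w") as f:
    f.write("[Unit]\nDescription=Docker Application Container Engine\n")
print(install_if_changed(src, dst))  # prints True: no live unit existed
```

	The empty `ExecStart=` line in the unit body matters here: systemd treats a second `ExecStart=` as an error for `Type=notify` services unless an earlier blank directive clears the inherited command.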
	
	I0318 11:06:45.814067    5404 machine.go:97] duration metric: took 45.4216329s to provisionDockerMachine
	I0318 11:06:45.814067    5404 client.go:171] duration metric: took 1m56.1710681s to LocalClient.Create
	I0318 11:06:45.814067    5404 start.go:167] duration metric: took 1m56.171172s to libmachine.API.Create "addons-209500"
	I0318 11:06:45.814067    5404 start.go:293] postStartSetup for "addons-209500" (driver="hyperv")
	I0318 11:06:45.814067    5404 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0318 11:06:45.825615    5404 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0318 11:06:45.825615    5404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-209500 ).state
	I0318 11:06:47.919818    5404 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:06:47.920380    5404 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:06:47.920380    5404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-209500 ).networkadapters[0]).ipaddresses[0]
	I0318 11:06:50.456349    5404 main.go:141] libmachine: [stdout =====>] : 172.30.141.150
	
	I0318 11:06:50.456349    5404 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:06:50.456685    5404 sshutil.go:53] new ssh client: &{IP:172.30.141.150 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\addons-209500\id_rsa Username:docker}
	I0318 11:06:50.558207    5404 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.7324891s)
	I0318 11:06:50.570514    5404 ssh_runner.go:195] Run: cat /etc/os-release
	I0318 11:06:50.576962    5404 info.go:137] Remote host: Buildroot 2023.02.9
	I0318 11:06:50.577061    5404 filesync.go:126] Scanning C:\Users\jenkins.minikube3\minikube-integration\.minikube\addons for local assets ...
	I0318 11:06:50.577477    5404 filesync.go:126] Scanning C:\Users\jenkins.minikube3\minikube-integration\.minikube\files for local assets ...
	I0318 11:06:50.577776    5404 start.go:296] duration metric: took 4.7636729s for postStartSetup
	I0318 11:06:50.580530    5404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-209500 ).state
	I0318 11:06:52.674717    5404 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:06:52.674717    5404 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:06:52.674717    5404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-209500 ).networkadapters[0]).ipaddresses[0]
	I0318 11:06:55.195619    5404 main.go:141] libmachine: [stdout =====>] : 172.30.141.150
	
	I0318 11:06:55.195986    5404 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:06:55.196196    5404 profile.go:142] Saving config to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-209500\config.json ...
	I0318 11:06:55.199553    5404 start.go:128] duration metric: took 2m5.5609714s to createHost
	I0318 11:06:55.199553    5404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-209500 ).state
	I0318 11:06:57.277640    5404 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:06:57.278560    5404 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:06:57.278699    5404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-209500 ).networkadapters[0]).ipaddresses[0]
	I0318 11:06:59.816960    5404 main.go:141] libmachine: [stdout =====>] : 172.30.141.150
	
	I0318 11:06:59.817466    5404 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:06:59.823316    5404 main.go:141] libmachine: Using SSH client type: native
	I0318 11:06:59.824026    5404 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa89f80] 0xa8cb60 <nil>  [] 0s} 172.30.141.150 22 <nil> <nil>}
	I0318 11:06:59.824026    5404 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0318 11:06:59.949465    5404 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710760019.942102899
	
	I0318 11:06:59.949465    5404 fix.go:216] guest clock: 1710760019.942102899
	I0318 11:06:59.949465    5404 fix.go:229] Guest: 2024-03-18 11:06:59.942102899 +0000 UTC Remote: 2024-03-18 11:06:55.1995535 +0000 UTC m=+131.230128001 (delta=4.742549399s)
	I0318 11:06:59.949465    5404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-209500 ).state
	I0318 11:07:02.055200    5404 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:07:02.055200    5404 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:07:02.056023    5404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-209500 ).networkadapters[0]).ipaddresses[0]
	I0318 11:07:04.584505    5404 main.go:141] libmachine: [stdout =====>] : 172.30.141.150
	
	I0318 11:07:04.584898    5404 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:07:04.590412    5404 main.go:141] libmachine: Using SSH client type: native
	I0318 11:07:04.590828    5404 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa89f80] 0xa8cb60 <nil>  [] 0s} 172.30.141.150 22 <nil> <nil>}
	I0318 11:07:04.590828    5404 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1710760019
	I0318 11:07:04.723590    5404 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Mar 18 11:06:59 UTC 2024
	
	I0318 11:07:04.723679    5404 fix.go:236] clock set: Mon Mar 18 11:06:59 UTC 2024
	 (err=<nil>)
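	The clock-fix step above compares the guest clock against a reference timestamp (delta here ~4.7s) and, when the drift is large enough, resets the guest with `sudo date -s @<epoch>`. A hedged sketch of the shape of that check (helper name, threshold, and choice of reference are hypothetical; minikube's fix.go picks its own reference):

```python
def clock_fix_command(reference_epoch: float, current_epoch: float,
                      max_drift: float = 1.0):
    """Return a 'date -s' command when the guest clock has drifted more
    than max_drift seconds from the reference, else None."""
    if abs(current_epoch - reference_epoch) <= max_drift:
        return None
    return f"sudo date -s @{int(reference_epoch)}"

# Epochs roughly as in the log: guest ahead of the reference by ~4.7s.
print(clock_fix_command(1710760015.1995535, 1710760019.942102899))
```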
	I0318 11:07:04.723679    5404 start.go:83] releasing machines lock for "addons-209500", held for 2m15.0851798s
	I0318 11:07:04.723984    5404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-209500 ).state
	I0318 11:07:06.870036    5404 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:07:06.870036    5404 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:07:06.870627    5404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-209500 ).networkadapters[0]).ipaddresses[0]
	I0318 11:07:09.508734    5404 main.go:141] libmachine: [stdout =====>] : 172.30.141.150
	
	I0318 11:07:09.509060    5404 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:07:09.513493    5404 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0318 11:07:09.513598    5404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-209500 ).state
	I0318 11:07:09.524028    5404 ssh_runner.go:195] Run: cat /version.json
	I0318 11:07:09.525033    5404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-209500 ).state
	I0318 11:07:11.754726    5404 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:07:11.754726    5404 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:07:11.754726    5404 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:07:11.754726    5404 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:07:11.754726    5404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-209500 ).networkadapters[0]).ipaddresses[0]
	I0318 11:07:11.754726    5404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-209500 ).networkadapters[0]).ipaddresses[0]
	I0318 11:07:14.429987    5404 main.go:141] libmachine: [stdout =====>] : 172.30.141.150
	
	I0318 11:07:14.430885    5404 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:07:14.431123    5404 sshutil.go:53] new ssh client: &{IP:172.30.141.150 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\addons-209500\id_rsa Username:docker}
	I0318 11:07:14.448627    5404 main.go:141] libmachine: [stdout =====>] : 172.30.141.150
	
	I0318 11:07:14.448627    5404 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:07:14.448627    5404 sshutil.go:53] new ssh client: &{IP:172.30.141.150 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\addons-209500\id_rsa Username:docker}
	I0318 11:07:14.519586    5404 ssh_runner.go:235] Completed: cat /version.json: (4.9955209s)
	I0318 11:07:14.531952    5404 ssh_runner.go:195] Run: systemctl --version
	I0318 11:07:14.659780    5404 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.146248s)
	I0318 11:07:14.674604    5404 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0318 11:07:14.684012    5404 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0318 11:07:14.695719    5404 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0318 11:07:14.725471    5404 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0318 11:07:14.725471    5404 start.go:494] detecting cgroup driver to use...
	I0318 11:07:14.726018    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
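The crictl.yaml generation above is a one-liner that pipes a printf through `sudo tee`. A minimal sketch of the same pattern, writing into a temp directory instead of `/etc` so it runs unprivileged (the temp path is illustrative, not what minikube uses):

```shell
# Generate a crictl config the same way the log does: printf piped to tee.
# tee's stdout is discarded; the file is what matters.
dir=$(mktemp -d)
printf 'runtime-endpoint: unix:///run/containerd/containerd.sock\n' \
  | tee "$dir/crictl.yaml" > /dev/null
cat "$dir/crictl.yaml"
```

The real run repeats this later with `unix:///var/run/cri-dockerd.sock` once it switches the runtime from containerd to cri-dockerd.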
	I0318 11:07:14.768675    5404 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0318 11:07:14.798993    5404 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0318 11:07:14.816851    5404 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0318 11:07:14.828098    5404 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0318 11:07:14.856539    5404 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0318 11:07:14.886834    5404 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0318 11:07:14.915994    5404 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0318 11:07:14.947357    5404 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0318 11:07:14.979724    5404 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
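The run of `sed` commands above rewrites `/etc/containerd/config.toml` in place to pin the pause image and force the cgroupfs driver. A sketch of the two key edits against a throwaway config file (no sudo; GNU `sed` assumed for `-i -r`, and the sample TOML lines are illustrative):

```shell
# Apply the same in-place edits the test run performs, on a temp config.toml.
cfg=$(mktemp)
printf '    sandbox_image = "registry.k8s.io/pause:3.8"\n    SystemdCgroup = true\n' > "$cfg"
# Pin the sandbox (pause) image version.
sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' "$cfg"
# Disable the systemd cgroup driver, i.e. fall back to cgroupfs.
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$cfg"
cat "$cfg"
```

The `\1` backreference preserves whatever indentation the original TOML used, which is why the patterns capture leading spaces instead of anchoring on column zero.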
	I0318 11:07:15.010382    5404 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0318 11:07:15.039532    5404 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0318 11:07:15.069613    5404 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 11:07:15.267411    5404 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0318 11:07:15.303543    5404 start.go:494] detecting cgroup driver to use...
	I0318 11:07:15.315951    5404 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0318 11:07:15.351682    5404 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0318 11:07:15.386322    5404 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0318 11:07:15.429767    5404 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0318 11:07:15.462914    5404 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0318 11:07:15.496936    5404 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0318 11:07:15.561338    5404 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0318 11:07:15.583827    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0318 11:07:15.627348    5404 ssh_runner.go:195] Run: which cri-dockerd
	I0318 11:07:15.645221    5404 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0318 11:07:15.661900    5404 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0318 11:07:15.703735    5404 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0318 11:07:15.892482    5404 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0318 11:07:16.070262    5404 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0318 11:07:16.070484    5404 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0318 11:07:16.119629    5404 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 11:07:16.312260    5404 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0318 11:07:18.809967    5404 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.4975415s)
	I0318 11:07:18.822647    5404 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0318 11:07:18.857766    5404 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0318 11:07:18.892635    5404 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0318 11:07:19.093942    5404 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0318 11:07:19.283492    5404 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 11:07:19.487294    5404 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0318 11:07:19.527292    5404 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0318 11:07:19.564023    5404 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 11:07:19.767519    5404 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0318 11:07:19.867381    5404 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0318 11:07:19.878402    5404 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0318 11:07:19.885958    5404 start.go:562] Will wait 60s for crictl version
	I0318 11:07:19.895963    5404 ssh_runner.go:195] Run: which crictl
	I0318 11:07:19.914037    5404 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0318 11:07:19.987081    5404 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  25.0.4
	RuntimeApiVersion:  v1
	I0318 11:07:19.997075    5404 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0318 11:07:20.041880    5404 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0318 11:07:20.085313    5404 out.go:204] * Preparing Kubernetes v1.28.4 on Docker 25.0.4 ...
	I0318 11:07:20.085313    5404 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0318 11:07:20.090231    5404 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0318 11:07:20.090231    5404 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0318 11:07:20.090231    5404 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0318 11:07:20.090231    5404 ip.go:207] Found interface: {Index:9 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:93:df:b5 Flags:up|broadcast|multicast|running}
	I0318 11:07:20.093979    5404 ip.go:210] interface addr: fe80::572b:6b1d:9130:e88b/64
	I0318 11:07:20.093979    5404 ip.go:210] interface addr: 172.30.128.1/20
	I0318 11:07:20.105759    5404 ssh_runner.go:195] Run: grep 172.30.128.1	host.minikube.internal$ /etc/hosts
	I0318 11:07:20.112368    5404 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.30.128.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
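The `/etc/hosts` rewrite above uses a grep-v/append/copy pipeline to replace any stale `host.minikube.internal` entry atomically-ish via a temp file. Here is the same pipeline run against a scratch copy (paths, the stale IP, and the final `grep` check are illustrative; the real run targets `/etc/hosts` with sudo):

```shell
# Replace the host.minikube.internal entry in a hosts file:
# drop any existing line for that name, then append the current gateway IP.
hosts=$(mktemp)
printf '127.0.0.1\tlocalhost\n172.30.128.9\thost.minikube.internal\n' > "$hosts"
{ grep -v $'\thost.minikube.internal$' "$hosts"; \
  printf '172.30.128.1\thost.minikube.internal\n'; } > "$hosts.new"
cp "$hosts.new" "$hosts"
grep host.minikube.internal "$hosts"
```

The `$'\t…$'` pattern anchors on a literal tab and end-of-line, so only the exact hosts entry is dropped and names that merely contain `host.minikube.internal` as a prefix are left alone.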
	I0318 11:07:20.134861    5404 kubeadm.go:877] updating cluster {Name:addons-209500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.2
8.4 ClusterName:addons-209500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.30.141.150 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[]
MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0318 11:07:20.134976    5404 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0318 11:07:20.144802    5404 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0318 11:07:20.169494    5404 docker.go:685] Got preloaded images: 
	I0318 11:07:20.169564    5404 docker.go:691] registry.k8s.io/kube-apiserver:v1.28.4 wasn't preloaded
	I0318 11:07:20.181564    5404 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0318 11:07:20.212956    5404 ssh_runner.go:195] Run: which lz4
	I0318 11:07:20.231865    5404 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0318 11:07:20.238122    5404 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0318 11:07:20.238122    5404 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (423165415 bytes)
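The existence check above probes the target with `stat`'s size and modification-time format specifiers before deciding whether to scp the preload tarball; a status-1 exit means "not there, copy it". A sketch of that query on a temp file (GNU `stat` assumed, file name illustrative):

```shell
# Query size (%s) and mtime (%y) the way the existence check does.
f=$(mktemp)
printf 'hello' > "$f"
stat -c "%s %y" "$f"   # "<size in bytes> <mtime>"
# A missing file makes stat exit non-zero, which is the signal to transfer.
stat -c "%s %y" "$f.does-not-exist" 2>/dev/null || echo "needs copy"
```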
	I0318 11:07:22.148325    5404 docker.go:649] duration metric: took 1.9288227s to copy over tarball
	I0318 11:07:22.159324    5404 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0318 11:07:29.380224    5404 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (7.2208462s)
	I0318 11:07:29.380731    5404 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0318 11:07:29.461900    5404 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0318 11:07:29.480817    5404 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2629 bytes)
	I0318 11:07:29.525928    5404 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 11:07:29.742645    5404 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0318 11:07:35.248497    5404 ssh_runner.go:235] Completed: sudo systemctl restart docker: (5.5058102s)
	I0318 11:07:35.259486    5404 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0318 11:07:35.288046    5404 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.4
	registry.k8s.io/kube-controller-manager:v1.28.4
	registry.k8s.io/kube-scheduler:v1.28.4
	registry.k8s.io/kube-proxy:v1.28.4
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0318 11:07:35.288145    5404 cache_images.go:84] Images are preloaded, skipping loading
	I0318 11:07:35.288256    5404 kubeadm.go:928] updating node { 172.30.141.150 8443 v1.28.4 docker true true} ...
	I0318 11:07:35.288529    5404 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-209500 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.30.141.150
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:addons-209500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0318 11:07:35.298495    5404 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0318 11:07:35.331802    5404 cni.go:84] Creating CNI manager for ""
	I0318 11:07:35.331802    5404 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0318 11:07:35.331802    5404 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0318 11:07:35.331802    5404 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.30.141.150 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-209500 NodeName:addons-209500 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.30.141.150"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.30.141.150 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc
/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0318 11:07:35.331802    5404 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.30.141.150
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-209500"
	  kubeletExtraArgs:
	    node-ip: 172.30.141.150
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.30.141.150"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0318 11:07:35.345469    5404 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0318 11:07:35.363571    5404 binaries.go:44] Found k8s binaries, skipping transfer
	I0318 11:07:35.375095    5404 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0318 11:07:35.391000    5404 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (314 bytes)
	I0318 11:07:35.419932    5404 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0318 11:07:35.449288    5404 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I0318 11:07:35.490676    5404 ssh_runner.go:195] Run: grep 172.30.141.150	control-plane.minikube.internal$ /etc/hosts
	I0318 11:07:35.496758    5404 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.30.141.150	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 11:07:35.528104    5404 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 11:07:35.718632    5404 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 11:07:35.750522    5404 certs.go:68] Setting up C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-209500 for IP: 172.30.141.150
	I0318 11:07:35.750645    5404 certs.go:194] generating shared ca certs ...
	I0318 11:07:35.750696    5404 certs.go:226] acquiring lock for ca certs: {Name:mk09ff4ada22228900e1815c250154c7d8d76854 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 11:07:35.751227    5404 certs.go:240] generating "minikubeCA" ca cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.key
	I0318 11:07:36.129467    5404 crypto.go:156] Writing cert to C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.crt ...
	I0318 11:07:36.129467    5404 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.crt: {Name:mk1d1f25727e6fcaf35d7d74de783ad2d2c6be81 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 11:07:36.131515    5404 crypto.go:164] Writing key to C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.key ...
	I0318 11:07:36.131515    5404 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.key: {Name:mkffeaed7182692572a4aaea1f77b60f45c78854 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 11:07:36.132386    5404 certs.go:240] generating "proxyClientCA" ca cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.key
	I0318 11:07:36.460759    5404 crypto.go:156] Writing cert to C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.crt ...
	I0318 11:07:36.460759    5404 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.crt: {Name:mkc09bedb222360a1dcc92648b423932b0197d96 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 11:07:36.462848    5404 crypto.go:164] Writing key to C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.key ...
	I0318 11:07:36.462848    5404 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.key: {Name:mk23d29d7cc073007c63c291d9cf6fa322998d26 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 11:07:36.464772    5404 certs.go:256] generating profile certs ...
	I0318 11:07:36.465732    5404 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-209500\client.key
	I0318 11:07:36.465732    5404 crypto.go:68] Generating cert C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-209500\client.crt with IP's: []
	I0318 11:07:36.681406    5404 crypto.go:156] Writing cert to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-209500\client.crt ...
	I0318 11:07:36.681406    5404 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-209500\client.crt: {Name:mk73d608ca66c43317aafd1e4981fd3fcef0f130 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 11:07:36.682693    5404 crypto.go:164] Writing key to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-209500\client.key ...
	I0318 11:07:36.682693    5404 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-209500\client.key: {Name:mk675568a9ab5e77a38ec07ac28a402f83ed0973 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 11:07:36.683870    5404 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-209500\apiserver.key.bf2d225e
	I0318 11:07:36.684625    5404 crypto.go:68] Generating cert C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-209500\apiserver.crt.bf2d225e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.30.141.150]
	I0318 11:07:36.806484    5404 crypto.go:156] Writing cert to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-209500\apiserver.crt.bf2d225e ...
	I0318 11:07:36.806484    5404 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-209500\apiserver.crt.bf2d225e: {Name:mka05464b56152d52e6fb9d81b36759d787e182f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 11:07:36.807880    5404 crypto.go:164] Writing key to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-209500\apiserver.key.bf2d225e ...
	I0318 11:07:36.807880    5404 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-209500\apiserver.key.bf2d225e: {Name:mk73e33280058095b74576c19bb9592736030d85 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 11:07:36.808376    5404 certs.go:381] copying C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-209500\apiserver.crt.bf2d225e -> C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-209500\apiserver.crt
	I0318 11:07:36.818655    5404 certs.go:385] copying C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-209500\apiserver.key.bf2d225e -> C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-209500\apiserver.key
	I0318 11:07:36.819483    5404 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-209500\proxy-client.key
	I0318 11:07:36.819483    5404 crypto.go:68] Generating cert C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-209500\proxy-client.crt with IP's: []
	I0318 11:07:36.910349    5404 crypto.go:156] Writing cert to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-209500\proxy-client.crt ...
	I0318 11:07:36.910349    5404 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-209500\proxy-client.crt: {Name:mk052999c0f6e0fffedb46647cfe0332185031a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 11:07:36.910899    5404 crypto.go:164] Writing key to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-209500\proxy-client.key ...
	I0318 11:07:36.911994    5404 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-209500\proxy-client.key: {Name:mk2f8f04d6b765d07b3f3b14ca2a7265f480731d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 11:07:36.921347    5404 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0318 11:07:36.922025    5404 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0318 11:07:36.922203    5404 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0318 11:07:36.922203    5404 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0318 11:07:36.923779    5404 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0318 11:07:36.967649    5404 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0318 11:07:37.008889    5404 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0318 11:07:37.050906    5404 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0318 11:07:37.091186    5404 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-209500\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0318 11:07:37.135743    5404 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-209500\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0318 11:07:37.181849    5404 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-209500\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0318 11:07:37.224411    5404 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-209500\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0318 11:07:37.267499    5404 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0318 11:07:37.308530    5404 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0318 11:07:37.353134    5404 ssh_runner.go:195] Run: openssl version
	I0318 11:07:37.370778    5404 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0318 11:07:37.402171    5404 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0318 11:07:37.408091    5404 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 18 11:07 /usr/share/ca-certificates/minikubeCA.pem
	I0318 11:07:37.420768    5404 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0318 11:07:37.440313    5404 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
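The two steps above compute the CA certificate's OpenSSL subject hash and symlink `<hash>.0` to it so the cert is found by hash lookup in `/etc/ssl/certs`. A self-contained sketch using a throwaway self-signed CA in a temp dir (subject and paths illustrative; with a modern OpenSSL the `CN=minikubeCA` subject should reproduce the `b5213941` hash seen in the log, since the hash depends only on the subject):

```shell
# Generate a throwaway CA cert, hash its subject, and create the <hash>.0 link.
dir=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=minikubeCA" \
  -keyout "$dir/ca.key" -out "$dir/minikubeCA.pem" -days 1 2>/dev/null
hash=$(openssl x509 -hash -noout -in "$dir/minikubeCA.pem")
ln -fs "$dir/minikubeCA.pem" "$dir/$hash.0"
ls -l "$dir/$hash.0"
```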
	I0318 11:07:37.468333    5404 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0318 11:07:37.473866    5404 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0318 11:07:37.473866    5404 kubeadm.go:391] StartCluster: {Name:addons-209500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4
ClusterName:addons-209500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.30.141.150 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mo
untPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 11:07:37.483522    5404 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0318 11:07:37.516506    5404 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0318 11:07:37.548369    5404 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0318 11:07:37.576666    5404 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 11:07:37.594096    5404 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 11:07:37.594238    5404 kubeadm.go:156] found existing configuration files:
	
	I0318 11:07:37.606566    5404 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0318 11:07:37.621173    5404 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 11:07:37.633137    5404 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 11:07:37.661174    5404 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0318 11:07:37.677862    5404 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 11:07:37.691066    5404 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 11:07:37.717514    5404 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0318 11:07:37.735775    5404 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 11:07:37.746031    5404 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 11:07:37.780763    5404 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0318 11:07:37.797534    5404 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 11:07:37.809385    5404 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0318 11:07:37.825717    5404 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0318 11:07:38.084132    5404 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0318 11:07:53.251641    5404 kubeadm.go:309] [init] Using Kubernetes version: v1.28.4
	I0318 11:07:53.251855    5404 kubeadm.go:309] [preflight] Running pre-flight checks
	I0318 11:07:53.252092    5404 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0318 11:07:53.252445    5404 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0318 11:07:53.252445    5404 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0318 11:07:53.252445    5404 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0318 11:07:53.256088    5404 out.go:204]   - Generating certificates and keys ...
	I0318 11:07:53.256373    5404 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0318 11:07:53.256598    5404 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0318 11:07:53.256598    5404 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0318 11:07:53.256598    5404 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0318 11:07:53.256598    5404 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0318 11:07:53.256598    5404 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0318 11:07:53.257203    5404 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0318 11:07:53.257383    5404 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [addons-209500 localhost] and IPs [172.30.141.150 127.0.0.1 ::1]
	I0318 11:07:53.257383    5404 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0318 11:07:53.257945    5404 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [addons-209500 localhost] and IPs [172.30.141.150 127.0.0.1 ::1]
	I0318 11:07:53.257992    5404 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0318 11:07:53.258234    5404 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0318 11:07:53.258350    5404 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0318 11:07:53.258350    5404 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0318 11:07:53.258350    5404 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0318 11:07:53.258350    5404 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0318 11:07:53.258900    5404 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0318 11:07:53.259072    5404 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0318 11:07:53.259072    5404 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0318 11:07:53.259072    5404 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0318 11:07:53.261952    5404 out.go:204]   - Booting up control plane ...
	I0318 11:07:53.261952    5404 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0318 11:07:53.261952    5404 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0318 11:07:53.261952    5404 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0318 11:07:53.262954    5404 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0318 11:07:53.262954    5404 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0318 11:07:53.262954    5404 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0318 11:07:53.262954    5404 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0318 11:07:53.263953    5404 kubeadm.go:309] [apiclient] All control plane components are healthy after 9.003055 seconds
	I0318 11:07:53.263953    5404 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0318 11:07:53.263953    5404 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0318 11:07:53.263953    5404 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0318 11:07:53.264976    5404 kubeadm.go:309] [mark-control-plane] Marking the node addons-209500 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0318 11:07:53.264976    5404 kubeadm.go:309] [bootstrap-token] Using token: nq63r7.aj04k0g2pptxrsac
	I0318 11:07:53.266959    5404 out.go:204]   - Configuring RBAC rules ...
	I0318 11:07:53.267976    5404 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0318 11:07:53.267976    5404 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0318 11:07:53.267976    5404 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0318 11:07:53.267976    5404 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0318 11:07:53.268962    5404 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0318 11:07:53.268962    5404 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0318 11:07:53.268962    5404 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0318 11:07:53.268962    5404 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0318 11:07:53.268962    5404 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0318 11:07:53.268962    5404 kubeadm.go:309] 
	I0318 11:07:53.268962    5404 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0318 11:07:53.268962    5404 kubeadm.go:309] 
	I0318 11:07:53.268962    5404 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0318 11:07:53.268962    5404 kubeadm.go:309] 
	I0318 11:07:53.269974    5404 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0318 11:07:53.269974    5404 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0318 11:07:53.269974    5404 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0318 11:07:53.269974    5404 kubeadm.go:309] 
	I0318 11:07:53.269974    5404 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0318 11:07:53.269974    5404 kubeadm.go:309] 
	I0318 11:07:53.269974    5404 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0318 11:07:53.269974    5404 kubeadm.go:309] 
	I0318 11:07:53.269974    5404 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0318 11:07:53.270944    5404 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0318 11:07:53.270944    5404 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0318 11:07:53.270944    5404 kubeadm.go:309] 
	I0318 11:07:53.270944    5404 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0318 11:07:53.270944    5404 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0318 11:07:53.270944    5404 kubeadm.go:309] 
	I0318 11:07:53.270944    5404 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token nq63r7.aj04k0g2pptxrsac \
	I0318 11:07:53.270944    5404 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:0e85bffcabbe1ecc87b47edfa4897ded335ea6d7a4ea5ae94c3523fb207c8760 \
	I0318 11:07:53.270944    5404 kubeadm.go:309] 	--control-plane 
	I0318 11:07:53.271941    5404 kubeadm.go:309] 
	I0318 11:07:53.271941    5404 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0318 11:07:53.271941    5404 kubeadm.go:309] 
	I0318 11:07:53.271941    5404 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token nq63r7.aj04k0g2pptxrsac \
	I0318 11:07:53.271941    5404 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:0e85bffcabbe1ecc87b47edfa4897ded335ea6d7a4ea5ae94c3523fb207c8760 
	I0318 11:07:53.271941    5404 cni.go:84] Creating CNI manager for ""
	I0318 11:07:53.271941    5404 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0318 11:07:53.275955    5404 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0318 11:07:53.292982    5404 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0318 11:07:53.325192    5404 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0318 11:07:53.377550    5404 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0318 11:07:53.392600    5404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-209500 minikube.k8s.io/updated_at=2024_03_18T11_07_53_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=16bdcbec856cf730004e5bed78d1b7625f13388a minikube.k8s.io/name=addons-209500 minikube.k8s.io/primary=true
	I0318 11:07:53.393622    5404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 11:07:53.409972    5404 ops.go:34] apiserver oom_adj: -16
	I0318 11:07:53.566599    5404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 11:07:54.068350    5404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 11:07:54.574233    5404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 11:07:57.849420    5404 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig: (3.275162s)
	I0318 11:07:57.866977    5404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 11:07:58.109822    5404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 11:07:58.568603    5404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 11:07:59.069788    5404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 11:07:59.576791    5404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 11:08:00.074802    5404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 11:08:00.575675    5404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 11:08:01.079851    5404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 11:08:01.569741    5404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 11:08:02.075529    5404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 11:08:02.577142    5404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 11:08:03.089882    5404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 11:08:03.571603    5404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 11:08:04.076420    5404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 11:08:04.575717    5404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 11:08:05.068695    5404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 11:08:05.578805    5404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 11:08:05.730543    5404 kubeadm.go:1107] duration metric: took 12.3527784s to wait for elevateKubeSystemPrivileges
	W0318 11:08:05.730690    5404 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0318 11:08:05.730782    5404 kubeadm.go:393] duration metric: took 28.2567041s to StartCluster
	I0318 11:08:05.730871    5404 settings.go:142] acquiring lock: {Name:mke99fb8c09012609ce6804e7dfd4d68f5541df7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 11:08:05.731159    5404 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	I0318 11:08:05.732177    5404 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\kubeconfig: {Name:mk966a7640504e03827322930a51a762b5508893 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 11:08:05.733840    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0318 11:08:05.734064    5404 start.go:234] Will wait 6m0s for node &{Name: IP:172.30.141.150 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 11:08:05.734130    5404 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true yakd:true]
	I0318 11:08:05.734160    5404 addons.go:69] Setting yakd=true in profile "addons-209500"
	I0318 11:08:05.734160    5404 addons.go:234] Setting addon yakd=true in "addons-209500"
	I0318 11:08:05.734160    5404 addons.go:69] Setting cloud-spanner=true in profile "addons-209500"
	I0318 11:08:05.734160    5404 addons.go:234] Setting addon cloud-spanner=true in "addons-209500"
	I0318 11:08:05.734160    5404 host.go:66] Checking if "addons-209500" exists ...
	I0318 11:08:05.734160    5404 host.go:66] Checking if "addons-209500" exists ...
	I0318 11:08:05.734160    5404 config.go:182] Loaded profile config "addons-209500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 11:08:05.734768    5404 addons.go:69] Setting default-storageclass=true in profile "addons-209500"
	I0318 11:08:05.734871    5404 addons.go:69] Setting metrics-server=true in profile "addons-209500"
	I0318 11:08:05.734871    5404 addons.go:69] Setting storage-provisioner=true in profile "addons-209500"
	I0318 11:08:05.734871    5404 addons.go:234] Setting addon metrics-server=true in "addons-209500"
	I0318 11:08:05.734871    5404 addons.go:234] Setting addon storage-provisioner=true in "addons-209500"
	I0318 11:08:05.734871    5404 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-209500"
	I0318 11:08:05.734871    5404 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-209500"
	I0318 11:08:05.734871    5404 host.go:66] Checking if "addons-209500" exists ...
	I0318 11:08:05.735059    5404 host.go:66] Checking if "addons-209500" exists ...
	I0318 11:08:05.735134    5404 host.go:66] Checking if "addons-209500" exists ...
	I0318 11:08:05.738869    5404 out.go:177] * Verifying Kubernetes components...
	I0318 11:08:05.734871    5404 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-209500"
	I0318 11:08:05.734768    5404 addons.go:69] Setting gcp-auth=true in profile "addons-209500"
	I0318 11:08:05.734160    5404 addons.go:69] Setting ingress-dns=true in profile "addons-209500"
	I0318 11:08:05.734871    5404 addons.go:69] Setting helm-tiller=true in profile "addons-209500"
	I0318 11:08:05.734871    5404 addons.go:69] Setting ingress=true in profile "addons-209500"
	I0318 11:08:05.734871    5404 addons.go:69] Setting registry=true in profile "addons-209500"
	I0318 11:08:05.734871    5404 addons.go:69] Setting inspektor-gadget=true in profile "addons-209500"
	I0318 11:08:05.734871    5404 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-209500"
	I0318 11:08:05.735373    5404 addons.go:69] Setting volumesnapshots=true in profile "addons-209500"
	I0318 11:08:05.734768    5404 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-209500"
	I0318 11:08:05.738276    5404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-209500 ).state
	I0318 11:08:05.738276    5404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-209500 ).state
	I0318 11:08:05.738276    5404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-209500 ).state
	I0318 11:08:05.738276    5404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-209500 ).state
	I0318 11:08:05.738798    5404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-209500 ).state
	I0318 11:08:05.741861    5404 addons.go:234] Setting addon registry=true in "addons-209500"
	I0318 11:08:05.742025    5404 mustload.go:65] Loading cluster: addons-209500
	I0318 11:08:05.742109    5404 addons.go:234] Setting addon ingress=true in "addons-209500"
	I0318 11:08:05.742109    5404 addons.go:234] Setting addon ingress-dns=true in "addons-209500"
	I0318 11:08:05.742191    5404 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-209500"
	I0318 11:08:05.742240    5404 host.go:66] Checking if "addons-209500" exists ...
	I0318 11:08:05.742447    5404 host.go:66] Checking if "addons-209500" exists ...
	I0318 11:08:05.742844    5404 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-209500"
	I0318 11:08:05.742025    5404 host.go:66] Checking if "addons-209500" exists ...
	I0318 11:08:05.743120    5404 addons.go:234] Setting addon volumesnapshots=true in "addons-209500"
	I0318 11:08:05.743120    5404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-209500 ).state
	I0318 11:08:05.743120    5404 host.go:66] Checking if "addons-209500" exists ...
	I0318 11:08:05.743565    5404 config.go:182] Loaded profile config "addons-209500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 11:08:05.743565    5404 addons.go:234] Setting addon inspektor-gadget=true in "addons-209500"
	I0318 11:08:05.743565    5404 host.go:66] Checking if "addons-209500" exists ...
	I0318 11:08:05.742240    5404 host.go:66] Checking if "addons-209500" exists ...
	I0318 11:08:05.743565    5404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-209500 ).state
	I0318 11:08:05.743565    5404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-209500 ).state
	I0318 11:08:05.742025    5404 addons.go:234] Setting addon helm-tiller=true in "addons-209500"
	I0318 11:08:05.743565    5404 host.go:66] Checking if "addons-209500" exists ...
	I0318 11:08:05.744553    5404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-209500 ).state
	I0318 11:08:05.744553    5404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-209500 ).state
	I0318 11:08:05.745574    5404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-209500 ).state
	I0318 11:08:05.745574    5404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-209500 ).state
	I0318 11:08:05.746563    5404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-209500 ).state
	I0318 11:08:05.746563    5404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-209500 ).state
	I0318 11:08:05.746563    5404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-209500 ).state
	I0318 11:08:05.769565    5404 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 11:08:06.671808    5404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.30.128.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0318 11:08:07.013808    5404 ssh_runner.go:235] Completed: sudo systemctl daemon-reload: (1.2442342s)
	I0318 11:08:07.035813    5404 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 11:08:12.227259    5404 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:08:12.227259    5404 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:08:12.232284    5404 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0318 11:08:12.235790    5404 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0318 11:08:12.235790    5404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0318 11:08:12.237378    5404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-209500 ).state
	I0318 11:08:12.265372    5404 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:08:12.265372    5404 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:08:12.278370    5404 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I0318 11:08:12.266371    5404 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:08:12.278370    5404 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:08:12.292372    5404 out.go:177]   - Using image docker.io/registry:2.8.3
	I0318 11:08:12.299677    5404 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0318 11:08:12.299254    5404 addons.go:426] installing /etc/kubernetes/addons/registry-rc.yaml
	I0318 11:08:12.305794    5404 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.30.128.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (5.6339433s)
	I0318 11:08:12.305794    5404 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (5.2699411s)
	I0318 11:08:12.312115    5404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0318 11:08:12.312794    5404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-209500 ).state
	I0318 11:08:12.314795    5404 start.go:948] {"host.minikube.internal": 172.30.128.1} host record injected into CoreDNS's ConfigMap
	I0318 11:08:12.319796    5404 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.10.0
	I0318 11:08:12.323826    5404 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0318 11:08:12.328821    5404 addons.go:426] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0318 11:08:12.328821    5404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0318 11:08:12.328821    5404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-209500 ).state
	I0318 11:08:12.331812    5404 node_ready.go:35] waiting up to 6m0s for node "addons-209500" to be "Ready" ...
	I0318 11:08:12.383815    5404 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:08:12.383815    5404 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:08:12.389797    5404 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.4
	I0318 11:08:12.399786    5404 addons.go:426] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0318 11:08:12.399786    5404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0318 11:08:12.399786    5404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-209500 ).state
	I0318 11:08:12.403790    5404 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:08:12.403790    5404 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:08:12.412796    5404 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0318 11:08:12.407380    5404 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:08:12.424379    5404 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:08:12.425074    5404 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:08:12.425074    5404 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:08:12.429960    5404 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.14
	I0318 11:08:12.427068    5404 addons.go:234] Setting addon default-storageclass=true in "addons-209500"
	I0318 11:08:12.427068    5404 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:08:12.439584    5404 addons.go:426] installing /etc/kubernetes/addons/deployment.yaml
	I0318 11:08:12.439584    5404 host.go:66] Checking if "addons-209500" exists ...
	I0318 11:08:12.445587    5404 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:08:12.450780    5404 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0318 11:08:12.446585    5404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0318 11:08:12.445587    5404 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0318 11:08:12.447581    5404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-209500 ).state
	I0318 11:08:12.451578    5404 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:08:12.454583    5404 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:08:12.460031    5404 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.0
	I0318 11:08:12.470584    5404 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0318 11:08:12.470584    5404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0318 11:08:12.470584    5404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-209500 ).state
	I0318 11:08:12.493577    5404 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0318 11:08:12.460623    5404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-209500 ).state
	I0318 11:08:12.460623    5404 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0318 11:08:12.488587    5404 node_ready.go:49] node "addons-209500" has status "Ready":"True"
	I0318 11:08:12.501585    5404 node_ready.go:38] duration metric: took 169.7715ms for node "addons-209500" to be "Ready" ...
	I0318 11:08:12.501585    5404 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 11:08:12.507600    5404 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0318 11:08:12.513587    5404 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0318 11:08:12.517578    5404 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0318 11:08:12.515581    5404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0318 11:08:12.525583    5404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-209500 ).state
	I0318 11:08:12.530585    5404 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0318 11:08:12.538593    5404 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0318 11:08:12.543577    5404 addons.go:426] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0318 11:08:12.543577    5404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0318 11:08:12.543577    5404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-209500 ).state
	I0318 11:08:12.561970    5404 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:08:12.561970    5404 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:08:12.561970    5404 host.go:66] Checking if "addons-209500" exists ...
	I0318 11:08:12.654644    5404 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-5dvkx" in "kube-system" namespace to be "Ready" ...
	I0318 11:08:12.693863    5404 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:08:12.694863    5404 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:08:12.696864    5404 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-209500"
	I0318 11:08:12.697865    5404 host.go:66] Checking if "addons-209500" exists ...
	I0318 11:08:12.698862    5404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-209500 ).state
	I0318 11:08:12.783376    5404 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:08:12.783376    5404 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:08:12.789397    5404 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.14.5
	I0318 11:08:12.795184    5404 addons.go:426] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0318 11:08:12.795184    5404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0318 11:08:12.795389    5404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-209500 ).state
	I0318 11:08:12.970682    5404 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:08:12.970682    5404 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:08:12.970682    5404 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:08:12.970682    5404 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:08:12.975170    5404 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 11:08:12.978172    5404 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0318 11:08:12.981208    5404 addons.go:426] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0318 11:08:12.981208    5404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0318 11:08:12.981208    5404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-209500 ).state
	I0318 11:08:12.980184    5404 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0318 11:08:12.992155    5404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0318 11:08:12.992155    5404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-209500 ).state
	I0318 11:08:13.083824    5404 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:08:13.083929    5404 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:08:13.086843    5404 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.26.0
	I0318 11:08:13.092260    5404 addons.go:426] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0318 11:08:13.092319    5404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0318 11:08:13.092440    5404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-209500 ).state
	I0318 11:08:13.357859    5404 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-209500" context rescaled to 1 replicas
	I0318 11:08:15.582437    5404 pod_ready.go:102] pod "coredns-5dd5756b68-5dvkx" in "kube-system" namespace has status "Ready":"False"
	I0318 11:08:17.745989    5404 pod_ready.go:102] pod "coredns-5dd5756b68-5dvkx" in "kube-system" namespace has status "Ready":"False"
	I0318 11:08:18.214043    5404 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:08:18.214043    5404 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:08:18.214043    5404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-209500 ).networkadapters[0]).ipaddresses[0]
	I0318 11:08:18.220049    5404 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:08:18.220049    5404 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:08:18.220049    5404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-209500 ).networkadapters[0]).ipaddresses[0]
	I0318 11:08:18.248063    5404 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:08:18.248063    5404 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:08:18.248063    5404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-209500 ).networkadapters[0]).ipaddresses[0]
	I0318 11:08:18.456568    5404 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:08:18.456568    5404 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:08:18.456568    5404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-209500 ).networkadapters[0]).ipaddresses[0]
	I0318 11:08:18.641722    5404 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:08:18.642713    5404 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:08:18.642713    5404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-209500 ).networkadapters[0]).ipaddresses[0]
	I0318 11:08:18.792728    5404 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:08:18.792728    5404 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:08:18.792728    5404 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0318 11:08:18.793730    5404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0318 11:08:18.793730    5404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-209500 ).state
	I0318 11:08:18.873944    5404 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:08:18.873944    5404 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:08:18.873944    5404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-209500 ).networkadapters[0]).ipaddresses[0]
	I0318 11:08:18.938388    5404 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:08:18.939441    5404 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:08:18.954361    5404 out.go:177]   - Using image docker.io/busybox:stable
	I0318 11:08:18.965835    5404 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0318 11:08:18.964811    5404 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:08:18.980169    5404 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:08:18.980169    5404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-209500 ).networkadapters[0]).ipaddresses[0]
	I0318 11:08:18.980169    5404 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0318 11:08:18.980169    5404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0318 11:08:18.980169    5404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-209500 ).state
	I0318 11:08:19.552405    5404 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:08:19.552405    5404 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:08:19.552405    5404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-209500 ).networkadapters[0]).ipaddresses[0]
	I0318 11:08:19.734861    5404 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:08:19.734861    5404 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:08:19.735863    5404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-209500 ).networkadapters[0]).ipaddresses[0]
	I0318 11:08:19.765067    5404 pod_ready.go:102] pod "coredns-5dd5756b68-5dvkx" in "kube-system" namespace has status "Ready":"False"
	I0318 11:08:19.902083    5404 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:08:19.902083    5404 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:08:19.902083    5404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-209500 ).networkadapters[0]).ipaddresses[0]
	I0318 11:08:20.123576    5404 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:08:20.123576    5404 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:08:20.123576    5404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-209500 ).networkadapters[0]).ipaddresses[0]
	I0318 11:08:20.163584    5404 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:08:20.163584    5404 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:08:20.163584    5404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-209500 ).networkadapters[0]).ipaddresses[0]
	I0318 11:08:22.033628    5404 pod_ready.go:102] pod "coredns-5dd5756b68-5dvkx" in "kube-system" namespace has status "Ready":"False"
	I0318 11:08:22.461887    5404 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0318 11:08:22.461887    5404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-209500 ).state
	I0318 11:08:22.508406    5404 pod_ready.go:92] pod "coredns-5dd5756b68-5dvkx" in "kube-system" namespace has status "Ready":"True"
	I0318 11:08:22.508406    5404 pod_ready.go:81] duration metric: took 9.853688s for pod "coredns-5dd5756b68-5dvkx" in "kube-system" namespace to be "Ready" ...
	I0318 11:08:22.509000    5404 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-rcmp8" in "kube-system" namespace to be "Ready" ...
	I0318 11:08:22.893249    5404 pod_ready.go:92] pod "coredns-5dd5756b68-rcmp8" in "kube-system" namespace has status "Ready":"True"
	I0318 11:08:22.893249    5404 pod_ready.go:81] duration metric: took 384.2462ms for pod "coredns-5dd5756b68-rcmp8" in "kube-system" namespace to be "Ready" ...
	I0318 11:08:22.893249    5404 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-209500" in "kube-system" namespace to be "Ready" ...
	I0318 11:08:22.963290    5404 pod_ready.go:92] pod "etcd-addons-209500" in "kube-system" namespace has status "Ready":"True"
	I0318 11:08:22.963290    5404 pod_ready.go:81] duration metric: took 70.0399ms for pod "etcd-addons-209500" in "kube-system" namespace to be "Ready" ...
	I0318 11:08:22.963480    5404 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-209500" in "kube-system" namespace to be "Ready" ...
	I0318 11:08:23.071898    5404 pod_ready.go:92] pod "kube-apiserver-addons-209500" in "kube-system" namespace has status "Ready":"True"
	I0318 11:08:23.071898    5404 pod_ready.go:81] duration metric: took 108.4167ms for pod "kube-apiserver-addons-209500" in "kube-system" namespace to be "Ready" ...
	I0318 11:08:23.071898    5404 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-209500" in "kube-system" namespace to be "Ready" ...
	I0318 11:08:23.123902    5404 pod_ready.go:92] pod "kube-controller-manager-addons-209500" in "kube-system" namespace has status "Ready":"True"
	I0318 11:08:23.123902    5404 pod_ready.go:81] duration metric: took 52.0033ms for pod "kube-controller-manager-addons-209500" in "kube-system" namespace to be "Ready" ...
	I0318 11:08:23.123902    5404 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-ztmnx" in "kube-system" namespace to be "Ready" ...
	I0318 11:08:23.151889    5404 pod_ready.go:92] pod "kube-proxy-ztmnx" in "kube-system" namespace has status "Ready":"True"
	I0318 11:08:23.151889    5404 pod_ready.go:81] duration metric: took 27.9869ms for pod "kube-proxy-ztmnx" in "kube-system" namespace to be "Ready" ...
	I0318 11:08:23.151889    5404 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-209500" in "kube-system" namespace to be "Ready" ...
	I0318 11:08:23.182523    5404 pod_ready.go:92] pod "kube-scheduler-addons-209500" in "kube-system" namespace has status "Ready":"True"
	I0318 11:08:23.182523    5404 pod_ready.go:81] duration metric: took 30.6342ms for pod "kube-scheduler-addons-209500" in "kube-system" namespace to be "Ready" ...
	I0318 11:08:23.182523    5404 pod_ready.go:38] duration metric: took 10.6808578s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 11:08:23.182523    5404 api_server.go:52] waiting for apiserver process to appear ...
	I0318 11:08:23.208622    5404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 11:08:23.366619    5404 api_server.go:72] duration metric: took 17.6323263s to wait for apiserver process to appear ...
	I0318 11:08:23.366619    5404 api_server.go:88] waiting for apiserver healthz status ...
	I0318 11:08:23.366619    5404 api_server.go:253] Checking apiserver healthz at https://172.30.141.150:8443/healthz ...
	I0318 11:08:23.449725    5404 api_server.go:279] https://172.30.141.150:8443/healthz returned 200:
	ok
	I0318 11:08:23.469177    5404 api_server.go:141] control plane version: v1.28.4
	I0318 11:08:23.469177    5404 api_server.go:131] duration metric: took 102.5575ms to wait for apiserver health ...
	I0318 11:08:23.469261    5404 system_pods.go:43] waiting for kube-system pods to appear ...
	I0318 11:08:23.511039    5404 system_pods.go:59] 7 kube-system pods found
	I0318 11:08:23.511039    5404 system_pods.go:61] "coredns-5dd5756b68-5dvkx" [4159b45e-23f8-40b5-bbff-154eb0f999fb] Running
	I0318 11:08:23.511039    5404 system_pods.go:61] "coredns-5dd5756b68-rcmp8" [f78226f9-13e9-4a03-a0b0-60cbc11130d1] Running
	I0318 11:08:23.511039    5404 system_pods.go:61] "etcd-addons-209500" [4c5abb77-af37-46d5-b6a8-5e96e1708a17] Running
	I0318 11:08:23.511039    5404 system_pods.go:61] "kube-apiserver-addons-209500" [0ea652af-df51-44f9-b70c-c7b36177a1fe] Running
	I0318 11:08:23.511039    5404 system_pods.go:61] "kube-controller-manager-addons-209500" [79ad5928-bfd2-453c-80b1-31afd45f2de5] Running
	I0318 11:08:23.511039    5404 system_pods.go:61] "kube-proxy-ztmnx" [31e3524f-db6f-49fe-bf45-03f05d37a84f] Running
	I0318 11:08:23.511039    5404 system_pods.go:61] "kube-scheduler-addons-209500" [83db452d-697f-4810-828c-7aa03d62be78] Running
	I0318 11:08:23.511039    5404 system_pods.go:74] duration metric: took 41.7769ms to wait for pod list to return data ...
	I0318 11:08:23.511039    5404 default_sa.go:34] waiting for default service account to be created ...
	I0318 11:08:23.548117    5404 default_sa.go:45] found service account: "default"
	I0318 11:08:23.548117    5404 default_sa.go:55] duration metric: took 37.0784ms for default service account to be created ...
	I0318 11:08:23.548117    5404 system_pods.go:116] waiting for k8s-apps to be running ...
	I0318 11:08:23.679673    5404 system_pods.go:86] 7 kube-system pods found
	I0318 11:08:23.679673    5404 system_pods.go:89] "coredns-5dd5756b68-5dvkx" [4159b45e-23f8-40b5-bbff-154eb0f999fb] Running
	I0318 11:08:23.679673    5404 system_pods.go:89] "coredns-5dd5756b68-rcmp8" [f78226f9-13e9-4a03-a0b0-60cbc11130d1] Running
	I0318 11:08:23.679673    5404 system_pods.go:89] "etcd-addons-209500" [4c5abb77-af37-46d5-b6a8-5e96e1708a17] Running
	I0318 11:08:23.679673    5404 system_pods.go:89] "kube-apiserver-addons-209500" [0ea652af-df51-44f9-b70c-c7b36177a1fe] Running
	I0318 11:08:23.679673    5404 system_pods.go:89] "kube-controller-manager-addons-209500" [79ad5928-bfd2-453c-80b1-31afd45f2de5] Running
	I0318 11:08:23.679673    5404 system_pods.go:89] "kube-proxy-ztmnx" [31e3524f-db6f-49fe-bf45-03f05d37a84f] Running
	I0318 11:08:23.679673    5404 system_pods.go:89] "kube-scheduler-addons-209500" [83db452d-697f-4810-828c-7aa03d62be78] Running
	I0318 11:08:23.679673    5404 system_pods.go:126] duration metric: took 131.5546ms to wait for k8s-apps to be running ...
	I0318 11:08:23.679673    5404 system_svc.go:44] waiting for kubelet service to be running ....
	I0318 11:08:23.703689    5404 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 11:08:23.836673    5404 system_svc.go:56] duration metric: took 156.9988ms WaitForService to wait for kubelet
	I0318 11:08:23.836673    5404 kubeadm.go:576] duration metric: took 18.1023768s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 11:08:23.836673    5404 node_conditions.go:102] verifying NodePressure condition ...
	I0318 11:08:23.852674    5404 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0318 11:08:23.852674    5404 node_conditions.go:123] node cpu capacity is 2
	I0318 11:08:23.852674    5404 node_conditions.go:105] duration metric: took 16.0013ms to run NodePressure ...
	I0318 11:08:23.852674    5404 start.go:240] waiting for startup goroutines ...
	I0318 11:08:25.086837    5404 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:08:25.087033    5404 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:08:25.087033    5404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-209500 ).networkadapters[0]).ipaddresses[0]
	I0318 11:08:25.243451    5404 main.go:141] libmachine: [stdout =====>] : 172.30.141.150
	
	I0318 11:08:25.243451    5404 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:08:25.243451    5404 sshutil.go:53] new ssh client: &{IP:172.30.141.150 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\addons-209500\id_rsa Username:docker}
	I0318 11:08:25.319453    5404 main.go:141] libmachine: [stdout =====>] : 172.30.141.150
	
	I0318 11:08:25.319453    5404 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:08:25.319621    5404 sshutil.go:53] new ssh client: &{IP:172.30.141.150 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\addons-209500\id_rsa Username:docker}
	I0318 11:08:25.396574    5404 main.go:141] libmachine: [stdout =====>] : 172.30.141.150
	
	I0318 11:08:25.397076    5404 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:08:25.398850    5404 sshutil.go:53] new ssh client: &{IP:172.30.141.150 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\addons-209500\id_rsa Username:docker}
	I0318 11:08:25.534121    5404 main.go:141] libmachine: [stdout =====>] : 172.30.141.150
	
	I0318 11:08:25.534180    5404 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:08:25.534180    5404 sshutil.go:53] new ssh client: &{IP:172.30.141.150 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\addons-209500\id_rsa Username:docker}
	I0318 11:08:25.556928    5404 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:08:25.556928    5404 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:08:25.556928    5404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-209500 ).networkadapters[0]).ipaddresses[0]
	I0318 11:08:25.620406    5404 addons.go:426] installing /etc/kubernetes/addons/registry-svc.yaml
	I0318 11:08:25.620954    5404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0318 11:08:25.664553    5404 main.go:141] libmachine: [stdout =====>] : 172.30.141.150
	
	I0318 11:08:25.664553    5404 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:08:25.664553    5404 sshutil.go:53] new ssh client: &{IP:172.30.141.150 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\addons-209500\id_rsa Username:docker}
	I0318 11:08:25.725777    5404 addons.go:426] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0318 11:08:25.725777    5404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0318 11:08:25.827366    5404 main.go:141] libmachine: [stdout =====>] : 172.30.141.150
	
	I0318 11:08:25.827366    5404 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:08:25.827715    5404 sshutil.go:53] new ssh client: &{IP:172.30.141.150 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\addons-209500\id_rsa Username:docker}
	I0318 11:08:25.860260    5404 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0318 11:08:25.893232    5404 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0318 11:08:25.893232    5404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0318 11:08:25.912596    5404 main.go:141] libmachine: [stdout =====>] : 172.30.141.150
	
	I0318 11:08:25.912748    5404 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:08:25.913008    5404 sshutil.go:53] new ssh client: &{IP:172.30.141.150 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\addons-209500\id_rsa Username:docker}
	I0318 11:08:25.922902    5404 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0318 11:08:26.016110    5404 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0318 11:08:26.016110    5404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0318 11:08:26.132695    5404 main.go:141] libmachine: [stdout =====>] : 172.30.141.150
	
	I0318 11:08:26.132695    5404 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:08:26.133428    5404 sshutil.go:53] new ssh client: &{IP:172.30.141.150 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\addons-209500\id_rsa Username:docker}
	I0318 11:08:26.226709    5404 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0318 11:08:26.226791    5404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0318 11:08:26.287021    5404 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0318 11:08:26.287091    5404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0318 11:08:26.351152    5404 addons.go:426] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0318 11:08:26.353355    5404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0318 11:08:26.407966    5404 main.go:141] libmachine: [stdout =====>] : 172.30.141.150
	
	I0318 11:08:26.408459    5404 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:08:26.408704    5404 sshutil.go:53] new ssh client: &{IP:172.30.141.150 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\addons-209500\id_rsa Username:docker}
	I0318 11:08:26.501424    5404 main.go:141] libmachine: [stdout =====>] : 172.30.141.150
	
	I0318 11:08:26.501658    5404 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:08:26.501953    5404 sshutil.go:53] new ssh client: &{IP:172.30.141.150 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\addons-209500\id_rsa Username:docker}
	I0318 11:08:26.572463    5404 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0318 11:08:26.572558    5404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0318 11:08:26.615294    5404 main.go:141] libmachine: [stdout =====>] : 172.30.141.150
	
	I0318 11:08:26.615727    5404 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:08:26.615943    5404 sshutil.go:53] new ssh client: &{IP:172.30.141.150 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\addons-209500\id_rsa Username:docker}
	I0318 11:08:26.618494    5404 addons.go:426] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0318 11:08:26.618494    5404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0318 11:08:26.639461    5404 addons.go:426] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0318 11:08:26.639461    5404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0318 11:08:26.647581    5404 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0318 11:08:26.647581    5404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0318 11:08:26.810215    5404 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0318 11:08:26.810215    5404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0318 11:08:26.832200    5404 addons.go:426] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0318 11:08:26.832200    5404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0318 11:08:26.901080    5404 addons.go:426] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0318 11:08:26.901080    5404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0318 11:08:26.902029    5404 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:08:26.902993    5404 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:08:26.902993    5404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-209500 ).networkadapters[0]).ipaddresses[0]
	I0318 11:08:26.905222    5404 addons.go:426] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0318 11:08:26.906230    5404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0318 11:08:26.916255    5404 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0318 11:08:26.944227    5404 addons.go:426] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0318 11:08:26.944227    5404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0318 11:08:26.955232    5404 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0318 11:08:27.007978    5404 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0318 11:08:27.007978    5404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0318 11:08:27.051744    5404 addons.go:426] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0318 11:08:27.051744    5404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0318 11:08:27.073488    5404 addons.go:426] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0318 11:08:27.073488    5404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0318 11:08:27.096384    5404 addons.go:426] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0318 11:08:27.096384    5404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0318 11:08:27.172749    5404 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0318 11:08:27.193148    5404 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0318 11:08:27.266098    5404 addons.go:426] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0318 11:08:27.266098    5404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0318 11:08:27.273094    5404 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0318 11:08:27.305108    5404 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0318 11:08:27.307091    5404 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0318 11:08:27.369162    5404 addons.go:426] installing /etc/kubernetes/addons/ig-role.yaml
	I0318 11:08:27.369225    5404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0318 11:08:27.455452    5404 addons.go:426] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0318 11:08:27.455546    5404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0318 11:08:27.597086    5404 addons.go:426] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0318 11:08:27.597086    5404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0318 11:08:27.686983    5404 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0318 11:08:27.686983    5404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0318 11:08:27.787989    5404 main.go:141] libmachine: [stdout =====>] : 172.30.141.150
	
	I0318 11:08:27.788073    5404 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:08:27.788381    5404 sshutil.go:53] new ssh client: &{IP:172.30.141.150 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\addons-209500\id_rsa Username:docker}
	I0318 11:08:27.827410    5404 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0318 11:08:27.827469    5404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0318 11:08:27.971678    5404 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0318 11:08:27.971753    5404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0318 11:08:28.097928    5404 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0318 11:08:28.098051    5404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0318 11:08:28.238248    5404 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0318 11:08:28.238374    5404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0318 11:08:28.402144    5404 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0318 11:08:28.422200    5404 addons.go:426] installing /etc/kubernetes/addons/ig-crd.yaml
	I0318 11:08:28.422291    5404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0318 11:08:28.545255    5404 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0318 11:08:28.545255    5404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0318 11:08:28.582968    5404 main.go:141] libmachine: [stdout =====>] : 172.30.141.150
	
	I0318 11:08:28.582968    5404 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:08:28.583168    5404 sshutil.go:53] new ssh client: &{IP:172.30.141.150 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\addons-209500\id_rsa Username:docker}
	I0318 11:08:28.691605    5404 main.go:141] libmachine: [stdout =====>] : 172.30.141.150
	
	I0318 11:08:28.692544    5404 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:08:28.692918    5404 sshutil.go:53] new ssh client: &{IP:172.30.141.150 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\addons-209500\id_rsa Username:docker}
	I0318 11:08:28.719004    5404 addons.go:426] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0318 11:08:28.719105    5404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0318 11:08:28.877232    5404 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0318 11:08:28.877290    5404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0318 11:08:29.011831    5404 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0318 11:08:29.109175    5404 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0318 11:08:29.110523    5404 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0318 11:08:29.334474    5404 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0318 11:08:29.778419    5404 main.go:141] libmachine: [stdout =====>] : 172.30.141.150
	
	I0318 11:08:29.778460    5404 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:08:29.778712    5404 sshutil.go:53] new ssh client: &{IP:172.30.141.150 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\addons-209500\id_rsa Username:docker}
	I0318 11:08:30.687727    5404 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0318 11:08:31.143526    5404 addons.go:234] Setting addon gcp-auth=true in "addons-209500"
	I0318 11:08:31.143526    5404 host.go:66] Checking if "addons-209500" exists ...
	I0318 11:08:31.145412    5404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-209500 ).state
	I0318 11:08:31.565186    5404 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (5.6414448s)
	I0318 11:08:31.565384    5404 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (5.7049925s)
	I0318 11:08:31.565384    5404 addons.go:470] Verifying addon registry=true in "addons-209500"
	I0318 11:08:31.569050    5404 out.go:177] * Verifying registry addon...
	I0318 11:08:31.574004    5404 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0318 11:08:31.608205    5404 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0318 11:08:31.608338    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 11:08:32.088153    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 11:08:32.586787    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 11:08:33.275330    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 11:08:33.627592    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 11:08:33.738393    5404 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:08:33.738393    5404 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:08:33.751393    5404 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0318 11:08:33.751393    5404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-209500 ).state
	I0318 11:08:34.147557    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 11:08:34.964871    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 11:08:35.153801    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 11:08:35.670796    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 11:08:36.187028    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 11:08:36.310170    5404 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:08:36.310170    5404 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:08:36.310435    5404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-209500 ).networkadapters[0]).ipaddresses[0]
	I0318 11:08:37.054238    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 11:08:37.317702    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 11:08:37.714304    5404 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (10.7589906s)
	I0318 11:08:37.714304    5404 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (10.7979677s)
	I0318 11:08:37.714392    5404 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (10.5415637s)
	I0318 11:08:37.714392    5404 addons.go:470] Verifying addon metrics-server=true in "addons-209500"
	I0318 11:08:37.714562    5404 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (10.4093752s)
	I0318 11:08:37.714392    5404 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (10.5211656s)
	I0318 11:08:37.714562    5404 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (10.4413273s)
	W0318 11:08:37.714562    5404 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0318 11:08:37.741549    5404 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-209500 service yakd-dashboard -n yakd-dashboard
	
	I0318 11:08:37.714774    5404 retry.go:31] will retry after 217.510321ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
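Editor's note: the failure above is the classic CRD-establishment race — a single `kubectl apply` submits the `VolumeSnapshotClass` custom resource in the same batch as the CRDs that define it, and the apiserver has not yet established the new kinds when the CR arrives. minikube recovers by retrying (and later by re-applying with `--force`, as seen further down). A minimal sketch of the ordering fix, assuming the same addon manifest paths as in the log, would apply the CRDs first and wait for them to be established before applying resources that use them:

```shell
# Sketch only: split the batch so CRDs are established before any CR that uses them.
# Paths match the addon manifests from the log above; the wait targets are the
# three snapshot.storage.k8s.io CRDs those manifests create.
kubectl apply \
  -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml \
  -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml \
  -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml

# Block until the apiserver reports each CRD as Established (or time out).
kubectl wait --for condition=established --timeout=60s \
  crd/volumesnapshotclasses.snapshot.storage.k8s.io \
  crd/volumesnapshotcontents.snapshot.storage.k8s.io \
  crd/volumesnapshots.snapshot.storage.k8s.io

# Now the VolumeSnapshotClass and the controller manifests can be applied safely.
kubectl apply \
  -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml \
  -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml \
  -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
```

In the log, the blind retry after 217ms happens to succeed only because the CRDs created by the first attempt become established in the meantime; the explicit `kubectl wait` removes that timing dependency.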
	I0318 11:08:37.719635    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 11:08:37.983381    5404 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0318 11:08:38.137765    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 11:08:38.600963    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 11:08:39.091713    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 11:08:39.094508    5404 main.go:141] libmachine: [stdout =====>] : 172.30.141.150
	
	I0318 11:08:39.094564    5404 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:08:39.094774    5404 sshutil.go:53] new ssh client: &{IP:172.30.141.150 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\addons-209500\id_rsa Username:docker}
	I0318 11:08:39.751713    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 11:08:40.046789    5404 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (12.7393902s)
	I0318 11:08:40.046789    5404 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (11.6445581s)
	I0318 11:08:40.046789    5404 addons.go:470] Verifying addon ingress=true in "addons-209500"
	I0318 11:08:40.046979    5404 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (11.0349445s)
	I0318 11:08:40.051386    5404 out.go:177] * Verifying ingress addon...
	I0318 11:08:40.057161    5404 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0318 11:08:40.116793    5404 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0318 11:08:40.116793    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:08:40.138815    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 11:08:40.633834    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:08:40.644411    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 11:08:41.079745    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:08:41.166762    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 11:08:41.578367    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:08:41.599157    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 11:08:42.092489    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:08:42.145777    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 11:08:42.172490    5404 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (13.0632166s)
	I0318 11:08:42.172490    5404 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (12.8379198s)
	I0318 11:08:42.172490    5404 addons.go:470] Verifying addon csi-hostpath-driver=true in "addons-209500"
	I0318 11:08:42.176517    5404 out.go:177] * Verifying csi-hostpath-driver addon...
	I0318 11:08:42.172490    5404 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (13.061869s)
	I0318 11:08:42.172490    5404 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.1890772s)
	I0318 11:08:42.172490    5404 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (8.4210333s)
	I0318 11:08:42.182493    5404 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0318 11:08:42.181500    5404 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0318 11:08:42.186500    5404 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0318 11:08:42.188496    5404 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0318 11:08:42.189500    5404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	W0318 11:08:42.224404    5404 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class csi-hostpath-sc as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "csi-hostpath-sc": the object has been modified; please apply your changes to the latest version and try again]
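Editor's note: the warning above is an optimistic-concurrency failure — the storageclass object was modified (here, by the concurrently running csi-hostpath-driver apply) between minikube's read and its update, so the apiserver rejected the stale write with a 409 conflict. The standard cure is to retry the mutation, since each new attempt operates on the latest `resourceVersion`. A hypothetical sketch using the storage class named in the log:

```shell
# Sketch only: retry clearing the default-class annotation on csi-hostpath-sc
# when the apiserver returns a conflict. `kubectl patch` re-reads the object on
# each attempt, so a simple bounded loop is enough.
for attempt in 1 2 3 4 5; do
  if kubectl patch storageclass csi-hostpath-sc --type=merge \
       -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'; then
    break   # write landed against the latest resourceVersion
  fi
  sleep 1   # back off briefly before re-reading and retrying
done
```

The annotation key `storageclass.kubernetes.io/is-default-class` is the upstream marker for the default storage class, which is what the `default-storageclass` addon callback is toggling when it hits this conflict.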
	I0318 11:08:42.238580    5404 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0318 11:08:42.238643    5404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0318 11:08:42.240857    5404 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0318 11:08:42.240857    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 11:08:42.281667    5404 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0318 11:08:42.281749    5404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0318 11:08:42.344928    5404 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0318 11:08:42.569449    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:08:42.621855    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 11:08:42.705149    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 11:08:43.075775    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:08:43.083785    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 11:08:43.209291    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 11:08:43.570436    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:08:43.592328    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 11:08:43.721980    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 11:08:44.108231    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 11:08:44.132108    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:08:44.147338    5404 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.8023413s)
	I0318 11:08:44.157337    5404 addons.go:470] Verifying addon gcp-auth=true in "addons-209500"
	I0318 11:08:44.161663    5404 out.go:177] * Verifying gcp-auth addon...
	I0318 11:08:44.166087    5404 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0318 11:08:44.214975    5404 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0318 11:08:44.214975    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:08:44.251001    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 11:08:44.565216    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:08:44.580414    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 11:08:44.675777    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:08:44.707956    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	[... kapi.go:96 poll loop repeated at ~500ms intervals from 11:08:45 through 11:09:17; pods matching "app.kubernetes.io/name=ingress-nginx", "kubernetes.io/minikube-addons=registry", "kubernetes.io/minikube-addons=gcp-auth", and "kubernetes.io/minikube-addons=csi-hostpath-driver" all remained in state Pending: [<nil>] throughout ...]
	I0318 11:09:17.706786    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 11:09:18.082798    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:09:18.088203    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 11:09:18.172003    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:09:18.208996    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 11:09:18.569271    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:09:18.588314    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 11:09:18.679663    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:09:18.697948    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 11:09:19.078764    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:09:19.083496    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 11:09:19.399545    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:09:19.400958    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 11:09:19.568154    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:09:19.583396    5404 kapi.go:107] duration metric: took 48.0090322s to wait for kubernetes.io/minikube-addons=registry ...
	I0318 11:09:19.677179    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:09:19.693922    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 11:09:20.075924    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:09:20.184764    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:09:20.203117    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 11:09:20.567382    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:09:20.677189    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:09:20.695243    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 11:09:21.076071    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:09:21.184448    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:09:21.200151    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 11:09:21.571387    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:09:21.676928    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:09:21.694815    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 11:09:22.076379    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:09:22.184514    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:09:22.203707    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 11:09:22.578311    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:09:22.684476    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:09:22.707038    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 11:09:23.079351    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:09:23.185957    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:09:23.203779    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 11:09:23.588876    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:09:23.676882    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:09:23.737225    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 11:09:24.077358    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:09:24.196048    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:09:24.204828    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 11:09:24.612715    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:09:24.681976    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:09:24.711810    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 11:09:25.073502    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:09:25.182169    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:09:25.200279    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 11:09:25.565687    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:09:25.670316    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:09:25.706757    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 11:09:26.070475    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:09:26.257106    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:09:26.259114    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 11:09:26.576691    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:09:26.671555    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:09:26.704758    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 11:09:27.068566    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:09:27.187552    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:09:27.194335    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 11:09:27.573297    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:09:27.682668    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:09:27.702764    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 11:09:28.067818    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:09:28.176097    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:09:28.195393    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 11:09:28.577081    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:09:28.684866    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:09:28.703260    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 11:09:29.067967    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:09:29.178812    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:09:29.196559    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 11:09:29.577267    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:09:29.684812    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:09:29.700898    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 11:09:30.066078    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:09:30.175160    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:09:30.207556    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 11:09:30.572822    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:09:30.683450    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:09:30.700321    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 11:09:31.065934    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:09:31.175883    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:09:31.210657    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 11:09:31.573802    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:09:31.673790    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:09:31.706345    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 11:09:32.073598    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:09:32.183209    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:09:32.200605    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 11:09:32.566377    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:09:32.676799    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:09:32.709681    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 11:09:33.075214    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:09:33.183037    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:09:33.200652    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 11:09:33.579197    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:09:33.673391    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:09:33.707615    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 11:09:34.070791    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:09:34.180132    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:09:34.197167    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 11:09:34.566836    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:09:34.676004    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:09:34.708715    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 11:09:35.072953    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:09:35.182773    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:09:35.201119    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 11:09:35.580609    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:09:35.674014    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:09:35.707097    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 11:09:36.073453    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:09:36.184851    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:09:36.201140    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 11:09:36.565405    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:09:36.674452    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:09:36.710267    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 11:09:37.076712    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:09:37.186461    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:09:37.204158    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 11:09:37.567804    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:09:37.677767    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:09:37.696505    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 11:09:38.077396    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:09:38.185986    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:09:38.203239    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 11:09:38.567273    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:09:38.694421    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:09:38.705546    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 11:09:39.075204    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:09:39.182490    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:09:39.201738    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 11:09:39.566296    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:09:39.675260    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:09:39.708798    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 11:09:40.074159    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:09:40.182721    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:09:40.201059    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 11:09:40.567125    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:09:40.675535    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:09:40.712482    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 11:09:41.075012    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:09:41.182199    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:09:41.207017    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 11:09:41.904112    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:09:41.904602    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:09:41.906580    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 11:09:42.613313    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:09:42.613512    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 11:09:42.620255    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:09:42.625163    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:09:42.957699    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:09:42.960466    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 11:09:43.150874    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:09:43.188645    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:09:43.200328    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 11:09:43.571224    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:09:43.685860    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:09:43.695146    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 11:09:44.073935    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:09:44.186073    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:09:44.200920    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 11:09:44.577341    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:09:44.685390    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:09:44.703779    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 11:09:45.065664    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:09:45.176334    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:09:45.194436    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 11:09:45.574903    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:09:45.683831    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:09:45.702343    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 11:09:46.065720    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:09:46.175659    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:09:46.210738    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 11:09:46.573527    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:09:46.683308    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:09:46.700139    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 11:09:47.563729    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:09:47.567230    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 11:09:47.567625    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:09:47.574044    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:09:47.758560    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:09:47.759980    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 11:09:48.069527    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:09:48.179634    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:09:48.205003    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 11:09:48.583173    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:09:48.684643    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:09:48.703920    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 11:09:49.073709    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:09:49.175031    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:09:49.213240    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 11:09:49.576147    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:09:49.686648    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:09:49.709746    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 11:09:50.064243    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:09:50.187251    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:09:50.194782    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 11:09:50.569772    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:09:50.678341    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:09:50.696352    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 11:09:51.078208    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:09:51.186135    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:09:51.203857    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 11:09:51.571346    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:09:51.680119    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:09:51.698948    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 11:09:52.074226    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:09:52.184244    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:09:52.201207    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 11:09:52.569683    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:09:52.673716    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:09:52.706997    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 11:09:53.074071    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:09:53.183366    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:09:53.199565    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 11:09:53.570981    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:09:53.681600    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:09:53.698386    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 11:09:54.069117    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:09:54.183640    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:09:54.206583    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 11:09:54.577407    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:09:54.672685    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:09:54.695318    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 11:09:55.075756    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:09:55.171835    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:09:55.207074    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 11:09:55.581222    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:09:55.682614    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:09:55.714971    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 11:09:56.402613    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:09:56.405865    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:09:56.409406    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 11:09:56.751610    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:09:56.752743    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 11:09:56.752743    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:09:57.103602    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:09:57.509688    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:09:57.513337    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 11:09:57.576527    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:09:57.686263    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:09:57.708360    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 11:09:58.075485    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:09:58.180339    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:09:58.206084    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 11:09:58.586005    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:09:58.685006    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:09:58.707159    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 11:09:59.086473    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:09:59.186416    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:09:59.208100    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 11:09:59.630413    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:09:59.680839    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:09:59.709735    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 11:10:00.085999    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:10:00.180046    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:10:00.213097    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 11:10:00.625666    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:10:00.699978    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:10:00.736409    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 11:10:01.072018    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:10:01.175165    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:10:01.194680    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 11:10:01.569602    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:10:01.673381    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:10:01.697381    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 11:10:02.072678    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:10:02.172097    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:10:02.224067    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 11:10:02.577763    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:10:02.685535    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:10:02.709495    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 11:10:03.069036    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:10:03.186195    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:10:03.209159    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 11:10:03.565267    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:10:03.679381    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:10:03.701101    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 11:10:04.070040    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:10:04.180158    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:10:04.205250    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 11:10:04.572282    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:10:04.897619    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:10:04.901436    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 11:10:05.076217    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:10:05.189137    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:10:05.196802    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 11:10:05.581446    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:10:05.680751    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:10:05.711811    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 11:10:06.071243    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:10:06.173223    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:10:06.197060    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 11:10:06.569362    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:10:06.694170    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:10:06.698261    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 11:10:07.077412    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:10:07.189959    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:10:07.197547    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 11:10:07.579226    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:10:07.682528    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:10:07.706342    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 11:10:08.067670    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:10:08.353405    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:10:08.698084    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 11:10:08.973806    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:10:08.976846    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:10:08.978867    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 11:10:09.072328    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:10:09.950651    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:10:09.951423    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:10:09.953559    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 11:10:09.957894    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:10:09.961338    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 11:10:10.067057    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:10:10.182642    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:10:10.207547    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 11:10:10.565220    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:10:10.692849    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:10:10.699258    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 11:10:11.078130    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:10:11.176841    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:10:11.198016    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 11:10:11.595897    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:10:11.690261    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:10:11.700047    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 11:10:12.069078    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:10:12.189380    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:10:12.195216    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 11:10:12.580984    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:10:12.678770    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:10:12.701982    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 11:10:13.151864    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:10:13.188355    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:10:13.200099    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 11:10:13.573396    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:10:13.685993    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:10:13.711019    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 11:10:14.073588    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:10:14.187602    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:10:14.199895    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 11:10:14.654685    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:10:14.678329    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:10:14.715455    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 11:10:15.069028    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:10:15.177096    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:10:15.201733    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 11:10:15.571690    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:10:15.676547    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:10:15.699772    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 11:10:16.068934    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:10:16.182465    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:10:16.202686    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 11:10:16.572799    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:10:16.689776    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:10:16.695447    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 11:10:17.076753    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:10:17.182550    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:10:17.203801    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 11:10:17.608093    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:10:17.688594    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:10:17.695062    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 11:10:18.070060    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:10:18.174329    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:10:18.194931    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 11:10:18.568253    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:10:18.675960    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:10:18.695823    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 11:10:19.068871    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:10:19.191538    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:10:19.208064    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 11:10:19.566505    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:10:19.679685    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:10:19.703203    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 11:10:20.076494    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:10:20.169755    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:10:20.201853    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 11:10:20.565327    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:10:20.674380    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:10:20.713137    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 11:10:21.093501    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:10:21.171684    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:10:21.194294    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 11:10:21.573442    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:10:21.673513    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:10:21.697228    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 11:10:22.068545    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:10:22.173253    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:10:22.196708    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 11:10:22.576690    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:10:22.676785    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:10:22.700355    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 11:10:23.081657    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:10:23.174741    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:10:23.214504    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 11:10:23.574651    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:10:23.682300    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:10:23.706935    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 11:10:24.085971    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:10:24.175480    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:10:24.199378    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 11:10:24.569102    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:10:24.673016    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:10:24.696788    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 11:10:25.074292    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:10:25.316833    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 11:10:25.317138    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:10:25.611245    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:10:25.696065    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:10:25.706699    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 11:10:26.066844    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:10:26.193226    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:10:26.199259    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 11:10:26.571118    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:10:26.689108    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:10:26.695591    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 11:10:27.076405    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:10:27.182564    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:10:27.202947    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 11:10:27.572093    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:10:27.673382    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:10:27.708230    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 11:10:28.073804    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:10:28.184590    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:10:28.203868    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 11:10:28.575076    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:10:28.683167    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:10:28.711048    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 11:10:29.249569    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:10:29.250721    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:10:29.252310    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 11:10:29.567128    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:10:29.674534    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:10:29.707385    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 11:10:30.072470    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:10:30.179511    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:10:30.199336    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 11:10:30.566765    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:10:30.687664    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:10:30.696373    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 11:10:31.081870    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:10:31.189214    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:10:31.199344    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 11:10:31.570956    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:10:31.684130    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:10:31.707943    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 11:10:32.507820    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:10:32.509101    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:10:32.511154    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 11:10:32.571822    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:10:32.688198    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:10:32.698467    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 11:10:33.077169    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:10:33.178435    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:10:33.197088    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 11:10:33.576648    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:10:33.689581    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:10:33.697277    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 11:10:34.066989    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:10:34.176440    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:10:34.207555    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 11:10:34.572227    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:10:34.685183    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:10:34.708409    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 11:10:35.077707    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:10:35.184843    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:10:35.206236    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 11:10:35.580157    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:10:35.682072    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:10:35.698976    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 11:10:36.067807    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:10:36.189768    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:10:36.196143    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 11:10:36.571889    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:10:36.687475    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:10:36.700623    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 11:10:37.077748    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:10:37.181307    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:10:37.200670    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 11:10:37.592980    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:10:37.679822    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:10:37.699651    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 11:10:38.069050    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:10:38.177040    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:10:38.203583    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 11:10:38.577638    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:10:38.682800    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:10:38.700471    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 11:10:39.066596    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:10:39.191461    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:10:39.199372    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 11:10:39.797953    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:10:39.797953    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 11:10:39.797953    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:10:40.076602    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:10:40.180219    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:10:40.197901    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 11:10:40.564971    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:10:40.674622    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:10:40.697271    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 11:10:41.084503    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:10:41.173260    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:10:41.221771    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 11:10:41.572760    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:10:41.686896    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:10:41.718708    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 11:10:42.075511    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:10:42.183221    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:10:42.206859    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 11:10:42.569018    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:10:42.679545    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:10:42.700188    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 11:10:43.075463    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:10:43.178769    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:10:43.206425    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 11:10:43.571787    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:10:43.687108    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:10:43.711199    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 11:10:44.068294    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:10:44.193200    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:10:44.196095    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 11:10:44.586929    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:10:44.689230    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:10:44.698460    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 11:10:45.077688    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:10:45.193580    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:10:45.200666    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 11:10:45.575409    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:10:45.674965    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:10:45.696919    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 11:10:46.079220    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:10:46.504960    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:10:46.510553    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 11:10:46.649449    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:10:46.685759    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:10:46.744515    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 11:10:47.077758    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:10:47.190741    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:10:47.196845    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 11:10:47.574017    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:10:47.682798    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:10:47.705774    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 11:10:48.069797    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:10:48.185915    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:10:48.201540    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 11:10:48.568136    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:10:48.675376    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:10:48.696078    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 11:10:49.078189    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:10:49.168404    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:10:49.213179    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 11:10:49.582049    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:10:49.678289    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:10:49.708862    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 11:10:50.097949    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:10:50.176744    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:10:50.197907    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 11:10:50.576422    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:10:50.745088    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:10:50.747344    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 11:10:51.082161    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:10:51.180631    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:10:51.204581    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 11:10:51.912421    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:10:51.912822    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:10:51.914842    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 11:10:52.081541    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:10:52.193506    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:10:52.204249    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 11:10:52.575884    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:10:52.682494    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:10:52.703872    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 11:10:53.070988    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:10:53.172445    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:10:53.213307    5404 kapi.go:107] duration metric: took 2m11.0307919s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0318 11:10:53.578414    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:10:53.681561    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:10:54.076868    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:10:54.200189    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:10:54.569232    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:10:54.692948    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:10:55.081564    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:10:55.200613    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:10:55.573111    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:10:55.684479    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:10:56.071423    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:10:56.183405    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:10:56.574577    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:10:56.678048    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:10:57.071924    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:10:57.191338    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:10:57.581210    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:10:57.678017    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:10:58.070279    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:10:58.349573    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:10:58.703503    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:10:58.704672    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:10:59.206397    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:10:59.207581    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:10:59.580847    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:10:59.673377    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:11:00.066033    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:11:00.178777    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:11:00.579229    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:11:00.680237    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:11:01.075454    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:11:01.182876    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:11:01.567694    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:11:01.674770    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:11:02.069088    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:11:02.194159    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:11:02.860140    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:11:02.860943    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:11:03.412165    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:11:03.412543    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:11:03.605443    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:11:03.680088    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:11:04.071393    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:11:04.186398    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:11:04.569292    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:11:04.693346    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:11:05.078753    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:11:05.187855    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:11:05.584435    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:11:05.692006    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:11:06.095183    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:11:06.185368    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:11:06.569479    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:11:06.681844    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:11:07.070145    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:11:07.182876    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:11:07.856587    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:11:07.858998    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:11:08.352922    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:11:08.355629    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:11:08.578747    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:11:08.686841    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:11:09.078200    5404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 11:11:09.180377    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:11:09.582366    5404 kapi.go:107] duration metric: took 2m29.5241198s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0318 11:11:09.681843    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:11:10.199872    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:11:10.688895    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:11:11.173709    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:11:11.806465    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:11:12.184896    5404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 11:11:12.684291    5404 kapi.go:107] duration metric: took 2m28.5170926s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0318 11:11:12.688008    5404 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-209500 cluster.
	I0318 11:11:12.693308    5404 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0318 11:11:12.695036    5404 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0318 11:11:12.698245    5404 out.go:177] * Enabled addons: cloud-spanner, nvidia-device-plugin, metrics-server, ingress-dns, helm-tiller, yakd, storage-provisioner, inspektor-gadget, volumesnapshots, storage-provisioner-rancher, registry, csi-hostpath-driver, ingress, gcp-auth
	I0318 11:11:12.702871    5404 addons.go:505] duration metric: took 3m6.9674067s for enable addons: enabled=[cloud-spanner nvidia-device-plugin metrics-server ingress-dns helm-tiller yakd storage-provisioner inspektor-gadget volumesnapshots storage-provisioner-rancher registry csi-hostpath-driver ingress gcp-auth]
	I0318 11:11:12.702871    5404 start.go:245] waiting for cluster config update ...
	I0318 11:11:12.702871    5404 start.go:254] writing updated cluster config ...
	I0318 11:11:12.716872    5404 ssh_runner.go:195] Run: rm -f paused
	I0318 11:11:12.954289    5404 start.go:600] kubectl: 1.29.3, cluster: 1.28.4 (minor skew: 1)
	I0318 11:11:12.957445    5404 out.go:177] * Done! kubectl is now configured to use "addons-209500" cluster and "default" namespace by default
	
	
	==> Docker <==
	Mar 18 11:11:46 addons-209500 dockerd[1334]: time="2024-03-18T11:11:46.902962323Z" level=info msg="shim disconnected" id=fb857d77b827ab2007b4c05850b2ec8610daaca99235e9569a8dd8e7cc40b6ed namespace=moby
	Mar 18 11:11:46 addons-209500 dockerd[1334]: time="2024-03-18T11:11:46.903024025Z" level=warning msg="cleaning up after shim disconnected" id=fb857d77b827ab2007b4c05850b2ec8610daaca99235e9569a8dd8e7cc40b6ed namespace=moby
	Mar 18 11:11:46 addons-209500 dockerd[1334]: time="2024-03-18T11:11:46.903035025Z" level=info msg="cleaning up dead shim" namespace=moby
	Mar 18 11:11:46 addons-209500 dockerd[1334]: time="2024-03-18T11:11:46.958050876Z" level=warning msg="cleanup warnings time=\"2024-03-18T11:11:46Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Mar 18 11:11:47 addons-209500 cri-dockerd[1223]: time="2024-03-18T11:11:47Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"registry-proxy-rzf2x_kube-system\": unexpected command output nsenter: cannot open /proc/4690/ns/net: No such file or directory\n with error: exit status 1"
	Mar 18 11:11:47 addons-209500 dockerd[1334]: time="2024-03-18T11:11:47.109556279Z" level=info msg="shim disconnected" id=b7a157703941b476ca5febeda88b6209aad9928732902873ad31585c1690dbe0 namespace=moby
	Mar 18 11:11:47 addons-209500 dockerd[1334]: time="2024-03-18T11:11:47.109637380Z" level=warning msg="cleaning up after shim disconnected" id=b7a157703941b476ca5febeda88b6209aad9928732902873ad31585c1690dbe0 namespace=moby
	Mar 18 11:11:47 addons-209500 dockerd[1334]: time="2024-03-18T11:11:47.109652080Z" level=info msg="cleaning up dead shim" namespace=moby
	Mar 18 11:11:47 addons-209500 dockerd[1328]: time="2024-03-18T11:11:47.110976300Z" level=info msg="ignoring event" container=b7a157703941b476ca5febeda88b6209aad9928732902873ad31585c1690dbe0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 18 11:11:52 addons-209500 cri-dockerd[1223]: time="2024-03-18T11:11:52Z" level=info msg="Pulling image ghcr.io/headlamp-k8s/headlamp:v0.23.0@sha256:94e00732e1b43057a9135dafc7483781aea4a73a26cec449ed19f4d8794308d5: 423085796267: Extracting [==>                                                ]   2.13MB/41.72MB"
	Mar 18 11:11:56 addons-209500 cri-dockerd[1223]: time="2024-03-18T11:11:56Z" level=info msg="Stop pulling image ghcr.io/headlamp-k8s/headlamp:v0.23.0@sha256:94e00732e1b43057a9135dafc7483781aea4a73a26cec449ed19f4d8794308d5: Status: Downloaded newer image for ghcr.io/headlamp-k8s/headlamp@sha256:94e00732e1b43057a9135dafc7483781aea4a73a26cec449ed19f4d8794308d5"
	Mar 18 11:11:56 addons-209500 dockerd[1334]: time="2024-03-18T11:11:56.466015309Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 18 11:11:56 addons-209500 dockerd[1334]: time="2024-03-18T11:11:56.468245458Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 18 11:11:56 addons-209500 dockerd[1334]: time="2024-03-18T11:11:56.469149478Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 18 11:11:56 addons-209500 dockerd[1334]: time="2024-03-18T11:11:56.469543187Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 18 11:11:56 addons-209500 dockerd[1334]: time="2024-03-18T11:11:56.619498593Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 18 11:11:56 addons-209500 dockerd[1334]: time="2024-03-18T11:11:56.619650997Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 18 11:11:56 addons-209500 dockerd[1334]: time="2024-03-18T11:11:56.619668297Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 18 11:11:56 addons-209500 dockerd[1334]: time="2024-03-18T11:11:56.619809100Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 18 11:11:56 addons-209500 cri-dockerd[1223]: time="2024-03-18T11:11:56Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/f7add3a818fff04afe071b10b22006c9b8a9c28a094a03bea47884dc504d6c30/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Mar 18 11:12:01 addons-209500 cri-dockerd[1223]: time="2024-03-18T11:12:01Z" level=info msg="Stop pulling image docker.io/nginx:alpine: Status: Downloaded newer image for nginx:alpine"
	Mar 18 11:12:01 addons-209500 dockerd[1334]: time="2024-03-18T11:12:01.875717591Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 18 11:12:01 addons-209500 dockerd[1334]: time="2024-03-18T11:12:01.876485209Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 18 11:12:01 addons-209500 dockerd[1334]: time="2024-03-18T11:12:01.876542410Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 18 11:12:01 addons-209500 dockerd[1334]: time="2024-03-18T11:12:01.888122087Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD
	bff27fdac8761       nginx@sha256:02d8d94023878cedf3e3acc55372932a9ba1478b6e2f3357786d916c2af743ba                                                                6 seconds ago        Running             nginx                                    0                   c5c92e066c390       nginx
	6d735a5d36aca       ghcr.io/headlamp-k8s/headlamp@sha256:94e00732e1b43057a9135dafc7483781aea4a73a26cec449ed19f4d8794308d5                                        11 seconds ago       Running             headlamp                                 0                   790b0ad305317       headlamp-5485c556b-h2gvx
	f2692d249c973       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:e6c5b3bc32072ea370d34c27836efd11b3519d25bd444c2a8efc339cff0e20fb                                 56 seconds ago       Running             gcp-auth                                 0                   119a86df9b113       gcp-auth-7d69788767-sd7vq
	b069a716ae04a       registry.k8s.io/ingress-nginx/controller@sha256:42b3f0e5d0846876b1791cd3afeb5f1cbbe4259d6f35651dcc1b5c980925379c                             About a minute ago   Running             controller                               0                   ec766b15220d4       ingress-nginx-controller-76dc478dd8-4vnmq
	e4857915710ce       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          About a minute ago   Running             csi-snapshotter                          0                   3324c1cf344b0       csi-hostpathplugin-cvwkl
	ff124065ee604       registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8                          About a minute ago   Running             csi-provisioner                          0                   3324c1cf344b0       csi-hostpathplugin-cvwkl
	1e5d28a3e2c80       registry.k8s.io/sig-storage/livenessprobe@sha256:cacee2b5c36dd59d4c7e8469c05c9e4ef53ecb2df9025fa8c10cdaf61bce62f0                            About a minute ago   Running             liveness-probe                           0                   3324c1cf344b0       csi-hostpathplugin-cvwkl
	43d42cab5c754       registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5                           About a minute ago   Running             hostpath                                 0                   3324c1cf344b0       csi-hostpathplugin-cvwkl
	490defc753771       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:f1c25991bac2fbb7f5fcf91ed9438df31e30edee6bed5a780464238aa09ad24c                About a minute ago   Running             node-driver-registrar                    0                   3324c1cf344b0       csi-hostpathplugin-cvwkl
	a622841902594       registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7                              About a minute ago   Running             csi-resizer                              0                   6b5024098cb97       csi-hostpath-resizer-0
	89d36a9b3ad36       registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b                             About a minute ago   Running             csi-attacher                             0                   8ef757d48bcfa       csi-hostpath-attacher-0
	bfee89aa8a082       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:80b9ba94aa2afe24553d69bd165a6a51552d1582d68618ec00d3b804a7d9193c   About a minute ago   Running             csi-external-health-monitor-controller   0                   3324c1cf344b0       csi-hostpathplugin-cvwkl
	ae196e2a53547       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:44d1d0e9f19c63f58b380c5fddaca7cf22c7cee564adeff365225a5df5ef3334                   About a minute ago   Exited              patch                                    0                   3ddac6631d0f2       ingress-nginx-admission-patch-7zdkj
	e323d1b3efab4       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:44d1d0e9f19c63f58b380c5fddaca7cf22c7cee564adeff365225a5df5ef3334                   About a minute ago   Exited              create                                   0                   5159f5afb77bb       ingress-nginx-admission-create-25xxj
	808e07e9f99c9       rancher/local-path-provisioner@sha256:e34c88ae0affb1cdefbb874140d6339d4a27ec4ee420ae8199cd839997b05246                                       About a minute ago   Running             local-path-provisioner                   0                   1c8139d594524       local-path-provisioner-78b46b4d5c-jc825
	b429194936205       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280                      About a minute ago   Running             volume-snapshot-controller               0                   c223bed646595       snapshot-controller-58dbcc7b99-ktr8z
	4ba0f0a76f876       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280                      About a minute ago   Running             volume-snapshot-controller               0                   90369f68c041c       snapshot-controller-58dbcc7b99-b4tjl
	ccffb76460651       marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310                                                        2 minutes ago        Running             yakd                                     0                   9b5f3a5c4fca6       yakd-dashboard-9947fc6bf-stbs5
	a892ed015d79c       registry.k8s.io/metrics-server/metrics-server@sha256:1c0419326500f1704af580d12a579671b2c3a06a8aa918cd61d0a35fb2d6b3ce                        2 minutes ago        Running             metrics-server                           0                   7b47d7ae719f6       metrics-server-69cf46c98-6sxxw
	f50bc52214ac8       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4abe27f9fc03fedab1d655e2020e6b165faf3bf6de1088ce6cf215a75b78f05f                             2 minutes ago        Running             minikube-ingress-dns                     0                   77c32aad3e490       kube-ingress-dns-minikube
	f5d02aac374a9       gcr.io/cloud-spanner-emulator/emulator@sha256:41d5dccfcf13817a2348beba0ca7c650ffdd795f7fcbe975b7822c9eed262e15                               3 minutes ago        Running             cloud-spanner-emulator                   0                   e8888494e7544       cloud-spanner-emulator-6548d5df46-tm9bk
	64a4ed00e8cbe       nvcr.io/nvidia/k8s-device-plugin@sha256:50aa9517d771e3b0ffa7fded8f1e988dba680a7ff5efce162ce31d1b5ec043e2                                     3 minutes ago        Running             nvidia-device-plugin-ctr                 0                   2a09034b64f22       nvidia-device-plugin-daemonset-kb5lj
	ac57dd1f8419f       6e38f40d628db                                                                                                                                3 minutes ago        Running             storage-provisioner                      0                   07704b942368b       storage-provisioner
	6ca6ad22d3741       ead0a4a53df89                                                                                                                                3 minutes ago        Running             coredns                                  0                   aee93ef3c9677       coredns-5dd5756b68-5dvkx
	f128e7d298ac5       83f6cc407eed8                                                                                                                                3 minutes ago        Running             kube-proxy                               0                   1c97f159fecb9       kube-proxy-ztmnx
	94c506199db3c       73deb9a3f7025                                                                                                                                4 minutes ago        Running             etcd                                     0                   31890bef965f3       etcd-addons-209500
	d497388cccb1e       e3db313c6dbc0                                                                                                                                4 minutes ago        Running             kube-scheduler                           0                   8cd6f4041123d       kube-scheduler-addons-209500
	b3e71aec31bc4       7fe0e6f37db33                                                                                                                                4 minutes ago        Running             kube-apiserver                           0                   82a8bc73b70f6       kube-apiserver-addons-209500
	ecd6c9cd6be4b       d058aa5ab969c                                                                                                                                4 minutes ago        Running             kube-controller-manager                  0                   a2e10b3b1280e       kube-controller-manager-addons-209500
	
	
	==> controller_ingress [b069a716ae04] <==
	I0318 11:11:10.183462       8 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-76dc478dd8-4vnmq", UID:"71c4337a-ea85-4d9f-917a-9b0493959f03", APIVersion:"v1", ResourceVersion:"736", FieldPath:""}): type: 'Warning' reason: 'RELOAD' Error reloading NGINX: exit status 1
	2024/03/18 11:11:10 [notice] 26#26: signal process started
	2024/03/18 11:11:10 [error] 26#26: invalid PID number "" in "/tmp/nginx/nginx.pid"
	nginx: [error] invalid PID number "" in "/tmp/nginx/nginx.pid"
	I0318 11:11:13.428657       8 controller.go:190] "Configuration changes detected, backend reload required"
	I0318 11:11:13.470289       8 controller.go:210] "Backend successfully reloaded"
	I0318 11:11:13.470420       8 controller.go:221] "Initial sync, sleeping for 1 second"
	I0318 11:11:13.470444       8 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-76dc478dd8-4vnmq", UID:"71c4337a-ea85-4d9f-917a-9b0493959f03", APIVersion:"v1", ResourceVersion:"736", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
	W0318 11:11:41.401375       8 controller.go:1108] Error obtaining Endpoints for Service "default/nginx": no object matching key "default/nginx" in local store
	I0318 11:11:41.427989       8 admission.go:149] processed ingress via admission controller {testedIngressLength:1 testedIngressTime:0.026s renderingIngressLength:1 renderingIngressTime:0.001s admissionTime:17.8kBs testedConfigurationSize:0.027}
	I0318 11:11:41.428246       8 main.go:107] "successfully validated configuration, accepting" ingress="default/nginx-ingress"
	I0318 11:11:41.441909       8 store.go:440] "Found valid IngressClass" ingress="default/nginx-ingress" ingressclass="nginx"
	W0318 11:11:41.443498       8 controller.go:1108] Error obtaining Endpoints for Service "default/nginx": no object matching key "default/nginx" in local store
	I0318 11:11:41.443859       8 controller.go:190] "Configuration changes detected, backend reload required"
	I0318 11:11:41.447652       8 event.go:364] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"nginx-ingress", UID:"938b3401-1815-4edf-9b3b-7bd69cc5114d", APIVersion:"networking.k8s.io/v1", ResourceVersion:"1422", FieldPath:""}): type: 'Normal' reason: 'Sync' Scheduled for sync
	I0318 11:11:41.542714       8 controller.go:210] "Backend successfully reloaded"
	I0318 11:11:41.544133       8 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-76dc478dd8-4vnmq", UID:"71c4337a-ea85-4d9f-917a-9b0493959f03", APIVersion:"v1", ResourceVersion:"736", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
	W0318 11:11:44.777447       8 controller.go:1214] Service "default/nginx" does not have any active Endpoint.
	I0318 11:11:44.777595       8 controller.go:190] "Configuration changes detected, backend reload required"
	I0318 11:11:44.828902       8 controller.go:210] "Backend successfully reloaded"
	I0318 11:11:44.829897       8 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-76dc478dd8-4vnmq", UID:"71c4337a-ea85-4d9f-917a-9b0493959f03", APIVersion:"v1", ResourceVersion:"736", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
	W0318 11:11:48.110546       8 controller.go:1214] Service "default/nginx" does not have any active Endpoint.
	W0318 11:11:51.445581       8 controller.go:1214] Service "default/nginx" does not have any active Endpoint.
	W0318 11:11:57.927743       8 controller.go:1214] Service "default/nginx" does not have any active Endpoint.
	W0318 11:12:01.261541       8 controller.go:1214] Service "default/nginx" does not have any active Endpoint.
	
	
	==> coredns [6ca6ad22d374] <==
	[INFO] plugin/reload: Running configuration SHA512 = fa2b120b3007aba3ef70c07d73473d14986404bfc5683cceeb89e8950d5ace894afca7d43365324975f99d1b2da3d6473c722fe82795d39a4ee8b4b969b7aa71
	[INFO] Reloading complete
	[INFO] 127.0.0.1:55390 - 29694 "HINFO IN 9039449414626635115.6356572476812970189. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.048864076s
	[INFO] 10.244.0.7:53699 - 31693 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000357209s
	[INFO] 10.244.0.7:53699 - 2250 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000092702s
	[INFO] 10.244.0.7:53348 - 31562 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000246707s
	[INFO] 10.244.0.7:53348 - 12871 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00072112s
	[INFO] 10.244.0.7:57655 - 46200 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000200605s
	[INFO] 10.244.0.7:57655 - 30077 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00037851s
	[INFO] 10.244.0.7:34436 - 2703 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000190605s
	[INFO] 10.244.0.7:34436 - 53644 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000469513s
	[INFO] 10.244.0.7:51092 - 24903 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000140703s
	[INFO] 10.244.0.7:36816 - 31897 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000137804s
	[INFO] 10.244.0.7:33274 - 16360 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000066901s
	[INFO] 10.244.0.7:48008 - 58286 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.00035671s
	[INFO] 10.244.0.22:45889 - 60964 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.0002883s
	[INFO] 10.244.0.22:60161 - 21804 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000084099s
	[INFO] 10.244.0.22:37203 - 22156 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.0001481s
	[INFO] 10.244.0.22:56034 - 18101 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000079s
	[INFO] 10.244.0.22:41842 - 51873 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.0001049s
	[INFO] 10.244.0.22:50725 - 11156 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000388499s
	[INFO] 10.244.0.22:59318 - 5758 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd 240 0.001617997s
	[INFO] 10.244.0.22:43623 - 42937 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd 230 0.002043995s
	[INFO] 10.244.0.24:39566 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00044361s
	[INFO] 10.244.0.24:38958 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000217905s
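	The run of NXDOMAIN answers above is the pod's resolver walking the search list from its resolv.conf (ndots:5, search domains per the cri-dockerd line earlier in this log): each short name is tried with every search suffix before the absolute name succeeds. A minimal sketch of that expansion, assuming standard glibc-style resolver ordering (the function name is ours, for illustration only):

```python
# Sketch of resolv.conf search-list expansion with ndots:5.
# Search domains are taken from the cri-dockerd log line above; the
# ordering is standard resolver behavior, not something this report verifies.
def candidate_names(name, search_domains, ndots=5):
    """Return the fully qualified names a resolver tries, in order."""
    if name.endswith("."):
        return [name]                        # already absolute, no expansion
    names = []
    if name.count(".") >= ndots:
        names.append(name + ".")             # enough dots: try as-is first
    names += [f"{name}.{d}." for d in search_domains]
    if name.count(".") < ndots:
        names.append(name + ".")             # absolute name tried last
    return names

search = ["kube-system.svc.cluster.local", "svc.cluster.local", "cluster.local"]
for fqdn in candidate_names("registry.kube-system.svc.cluster.local", search):
    print(fqdn)
```

With four dots (fewer than ndots=5), the bare name is tried last, which matches the log: three NXDOMAIN answers for the search-suffixed names, then NOERROR for `registry.kube-system.svc.cluster.local.` itself.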
	
	
	==> describe nodes <==
	Name:               addons-209500
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-209500
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=16bdcbec856cf730004e5bed78d1b7625f13388a
	                    minikube.k8s.io/name=addons-209500
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_18T11_07_53_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-209500
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-209500"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 18 Mar 2024 11:07:47 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-209500
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 18 Mar 2024 11:11:59 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 18 Mar 2024 11:12:00 +0000   Mon, 18 Mar 2024 11:07:46 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 18 Mar 2024 11:12:00 +0000   Mon, 18 Mar 2024 11:07:46 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 18 Mar 2024 11:12:00 +0000   Mon, 18 Mar 2024 11:07:46 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 18 Mar 2024 11:12:00 +0000   Mon, 18 Mar 2024 11:07:54 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.30.141.150
	  Hostname:    addons-209500
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912876Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912876Ki
	  pods:               110
	System Info:
	  Machine ID:                 50b1bd530b494a98adeb88ff8b5c21fd
	  System UUID:                21499665-f6f5-a549-8aab-b579c335903b
	  Boot ID:                    d5ca5ff3-6cbb-43e2-a296-69b4b94e6ec0
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://25.0.4
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (23 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     cloud-spanner-emulator-6548d5df46-tm9bk      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m37s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  default                     task-pv-pod                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         14s
	  gcp-auth                    gcp-auth-7d69788767-sd7vq                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m24s
	  headlamp                    headlamp-5485c556b-h2gvx                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         37s
	  ingress-nginx               ingress-nginx-controller-76dc478dd8-4vnmq    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         3m28s
	  kube-system                 coredns-5dd5756b68-5dvkx                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     4m2s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m26s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m25s
	  kube-system                 csi-hostpathplugin-cvwkl                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m26s
	  kube-system                 etcd-addons-209500                           100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         4m14s
	  kube-system                 kube-apiserver-addons-209500                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m16s
	  kube-system                 kube-controller-manager-addons-209500        200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m34s
	  kube-system                 kube-proxy-ztmnx                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m2s
	  kube-system                 kube-scheduler-addons-209500                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m16s
	  kube-system                 metrics-server-69cf46c98-6sxxw               100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         3m32s
	  kube-system                 nvidia-device-plugin-daemonset-kb5lj         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m38s
	  kube-system                 snapshot-controller-58dbcc7b99-b4tjl         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m33s
	  kube-system                 snapshot-controller-58dbcc7b99-ktr8z         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m33s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m31s
	  local-path-storage          local-path-provisioner-78b46b4d5c-jc825      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m29s
	  yakd-dashboard              yakd-dashboard-9947fc6bf-stbs5               0 (0%)        0 (0%)      128Mi (3%)       256Mi (6%)     3m30s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             588Mi (15%)  426Mi (11%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m51s                  kube-proxy       
	  Normal  Starting                 4m25s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  4m25s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m24s (x8 over 4m25s)  kubelet          Node addons-209500 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m24s (x8 over 4m25s)  kubelet          Node addons-209500 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m24s (x7 over 4m25s)  kubelet          Node addons-209500 status is now: NodeHasSufficientPID
	  Normal  Starting                 4m14s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  4m14s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m14s                  kubelet          Node addons-209500 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m14s                  kubelet          Node addons-209500 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m14s                  kubelet          Node addons-209500 status is now: NodeHasSufficientPID
	  Normal  NodeReady                4m13s                  kubelet          Node addons-209500 status is now: NodeReady
	  Normal  RegisteredNode           4m3s                   node-controller  Node addons-209500 event: Registered Node addons-209500 in Controller
	
	
	==> dmesg <==
	[  +0.135721] kauditd_printk_skb: 62 callbacks suppressed
	[Mar18 11:08] systemd-fstab-generator[3368]: Ignoring "noauto" option for root device
	[  +0.647546] kauditd_printk_skb: 12 callbacks suppressed
	[  +6.088642] kauditd_printk_skb: 17 callbacks suppressed
	[  +5.528765] kauditd_printk_skb: 39 callbacks suppressed
	[  +0.251898] hrtimer: interrupt took 8268008 ns
	[  +7.337925] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.265227] kauditd_printk_skb: 17 callbacks suppressed
	[  +5.004964] kauditd_printk_skb: 36 callbacks suppressed
	[  +5.039918] kauditd_printk_skb: 60 callbacks suppressed
	[ +14.383468] kauditd_printk_skb: 88 callbacks suppressed
	[Mar18 11:09] kauditd_printk_skb: 4 callbacks suppressed
	[Mar18 11:10] kauditd_printk_skb: 26 callbacks suppressed
	[  +7.867169] kauditd_printk_skb: 5 callbacks suppressed
	[  +6.576688] kauditd_printk_skb: 17 callbacks suppressed
	[  +5.814691] kauditd_printk_skb: 46 callbacks suppressed
	[  +6.183755] kauditd_printk_skb: 10 callbacks suppressed
	[Mar18 11:11] kauditd_printk_skb: 47 callbacks suppressed
	[  +5.737848] kauditd_printk_skb: 5 callbacks suppressed
	[  +9.261301] kauditd_printk_skb: 1 callbacks suppressed
	[  +5.145283] kauditd_printk_skb: 14 callbacks suppressed
	[  +6.635907] kauditd_printk_skb: 46 callbacks suppressed
	[ +11.281433] kauditd_printk_skb: 5 callbacks suppressed
	[  +5.000899] kauditd_printk_skb: 25 callbacks suppressed
	[Mar18 11:12] kauditd_printk_skb: 9 callbacks suppressed
	
	
	==> etcd [94c506199db3] <==
	{"level":"info","ts":"2024-03-18T11:11:55.904288Z","caller":"traceutil/trace.go:171","msg":"trace[1455912549] transaction","detail":"{read_only:false; response_revision:1502; number_of_response:1; }","duration":"542.39736ms","start":"2024-03-18T11:11:55.361872Z","end":"2024-03-18T11:11:55.904269Z","steps":["trace[1455912549] 'process raft request'  (duration: 233.09254ms)","trace[1455912549] 'compare'  (duration: 308.4119ms)"],"step_count":2}
	{"level":"warn","ts":"2024-03-18T11:11:55.904673Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-18T11:11:55.361834Z","time spent":"542.514462ms","remote":"127.0.0.1:52054","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":484,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/ingress-nginx/ingress-nginx-leader\" mod_revision:1474 > success:<request_put:<key:\"/registry/leases/ingress-nginx/ingress-nginx-leader\" value_size:425 >> failure:<request_range:<key:\"/registry/leases/ingress-nginx/ingress-nginx-leader\" > >"}
	{"level":"warn","ts":"2024-03-18T11:11:55.905109Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"209.609722ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/persistentvolumes/\" range_end:\"/registry/persistentvolumes0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-03-18T11:11:55.905155Z","caller":"traceutil/trace.go:171","msg":"trace[1306327797] range","detail":"{range_begin:/registry/persistentvolumes/; range_end:/registry/persistentvolumes0; response_count:0; response_revision:1502; }","duration":"209.715524ms","start":"2024-03-18T11:11:55.695429Z","end":"2024-03-18T11:11:55.905144Z","steps":["trace[1306327797] 'agreement among raft nodes before linearized reading'  (duration: 209.586821ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-18T11:11:55.905553Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"106.844156ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/snapshot-controller-leader\" ","response":"range_response_count:1 size:498"}
	{"level":"info","ts":"2024-03-18T11:11:55.905673Z","caller":"traceutil/trace.go:171","msg":"trace[662203539] range","detail":"{range_begin:/registry/leases/kube-system/snapshot-controller-leader; range_end:; response_count:1; response_revision:1502; }","duration":"106.967759ms","start":"2024-03-18T11:11:55.798694Z","end":"2024-03-18T11:11:55.905662Z","steps":["trace[662203539] 'agreement among raft nodes before linearized reading'  (duration: 106.820155ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-18T11:11:56.202369Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"154.468607ms","expected-duration":"100ms","prefix":"","request":"header:<ID:13388795221818985161 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/leases/kube-system/snapshot-controller-leader\" mod_revision:1483 > success:<request_put:<key:\"/registry/leases/kube-system/snapshot-controller-leader\" value_size:420 >> failure:<request_range:<key:\"/registry/leases/kube-system/snapshot-controller-leader\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-03-18T11:11:56.202709Z","caller":"traceutil/trace.go:171","msg":"trace[425341472] linearizableReadLoop","detail":"{readStateIndex:1578; appliedIndex:1577; }","duration":"262.820395ms","start":"2024-03-18T11:11:55.939875Z","end":"2024-03-18T11:11:56.202696Z","steps":["trace[425341472] 'read index received'  (duration: 107.903279ms)","trace[425341472] 'applied index is now lower than readState.Index'  (duration: 154.914916ms)"],"step_count":2}
	{"level":"warn","ts":"2024-03-18T11:11:56.203208Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"191.334219ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/specs/\" range_end:\"/registry/services/specs0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-03-18T11:11:56.203265Z","caller":"traceutil/trace.go:171","msg":"trace[325295403] range","detail":"{range_begin:/registry/services/specs/; range_end:/registry/services/specs0; response_count:0; response_revision:1503; }","duration":"191.398421ms","start":"2024-03-18T11:11:56.011857Z","end":"2024-03-18T11:11:56.203256Z","steps":["trace[325295403] 'agreement among raft nodes before linearized reading'  (duration: 191.163016ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-18T11:11:56.203388Z","caller":"traceutil/trace.go:171","msg":"trace[104881930] transaction","detail":"{read_only:false; response_revision:1503; number_of_response:1; }","duration":"290.930115ms","start":"2024-03-18T11:11:55.912445Z","end":"2024-03-18T11:11:56.203375Z","steps":["trace[104881930] 'process raft request'  (duration: 135.391686ms)","trace[104881930] 'compare'  (duration: 154.335003ms)"],"step_count":2}
	{"level":"warn","ts":"2024-03-18T11:11:56.203661Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"263.793016ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" ","response":"range_response_count:3 size:8651"}
	{"level":"info","ts":"2024-03-18T11:11:56.20375Z","caller":"traceutil/trace.go:171","msg":"trace[776828928] range","detail":"{range_begin:/registry/pods/default/; range_end:/registry/pods/default0; response_count:3; response_revision:1503; }","duration":"263.883518ms","start":"2024-03-18T11:11:55.939857Z","end":"2024-03-18T11:11:56.20374Z","steps":["trace[776828928] 'agreement among raft nodes before linearized reading'  (duration: 263.722215ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-18T11:11:56.203906Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"145.931218ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/csinodes/\" range_end:\"/registry/csinodes0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-03-18T11:11:56.203955Z","caller":"traceutil/trace.go:171","msg":"trace[880658995] range","detail":"{range_begin:/registry/csinodes/; range_end:/registry/csinodes0; response_count:0; response_revision:1503; }","duration":"145.984519ms","start":"2024-03-18T11:11:56.057963Z","end":"2024-03-18T11:11:56.203947Z","steps":["trace[880658995] 'agreement among raft nodes before linearized reading'  (duration: 145.822515ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-18T11:11:56.204445Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"168.158208ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" ","response":"range_response_count:3 size:8651"}
	{"level":"info","ts":"2024-03-18T11:11:56.204494Z","caller":"traceutil/trace.go:171","msg":"trace[2087793358] range","detail":"{range_begin:/registry/pods/default/; range_end:/registry/pods/default0; response_count:3; response_revision:1503; }","duration":"168.208709ms","start":"2024-03-18T11:11:56.036277Z","end":"2024-03-18T11:11:56.204486Z","steps":["trace[2087793358] 'agreement among raft nodes before linearized reading'  (duration: 168.122207ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-18T11:12:01.615835Z","caller":"traceutil/trace.go:171","msg":"trace[548571832] linearizableReadLoop","detail":"{readStateIndex:1600; appliedIndex:1599; }","duration":"205.77212ms","start":"2024-03-18T11:12:01.410045Z","end":"2024-03-18T11:12:01.615818Z","steps":["trace[548571832] 'read index received'  (duration: 205.614216ms)","trace[548571832] 'applied index is now lower than readState.Index'  (duration: 157.204µs)"],"step_count":2}
	{"level":"info","ts":"2024-03-18T11:12:01.616524Z","caller":"traceutil/trace.go:171","msg":"trace[1728605625] transaction","detail":"{read_only:false; response_revision:1523; number_of_response:1; }","duration":"398.405827ms","start":"2024-03-18T11:12:01.218103Z","end":"2024-03-18T11:12:01.616509Z","steps":["trace[1728605625] 'process raft request'  (duration: 397.605907ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-18T11:12:01.617568Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-18T11:12:01.218046Z","time spent":"398.509229ms","remote":"127.0.0.1:52054","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":482,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/snapshot-controller-leader\" mod_revision:1503 > success:<request_put:<key:\"/registry/leases/kube-system/snapshot-controller-leader\" value_size:419 >> failure:<request_range:<key:\"/registry/leases/kube-system/snapshot-controller-leader\" > >"}
	{"level":"warn","ts":"2024-03-18T11:12:01.617981Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"208.090376ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1113"}
	{"level":"info","ts":"2024-03-18T11:12:01.618034Z","caller":"traceutil/trace.go:171","msg":"trace[462204984] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1523; }","duration":"208.143477ms","start":"2024-03-18T11:12:01.40988Z","end":"2024-03-18T11:12:01.618023Z","steps":["trace[462204984] 'agreement among raft nodes before linearized reading'  (duration: 208.055075ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-18T11:12:01.6185Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"158.107181ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/masterleases/172.30.141.150\" ","response":"range_response_count:1 size:135"}
	{"level":"info","ts":"2024-03-18T11:12:01.618533Z","caller":"traceutil/trace.go:171","msg":"trace[885247361] range","detail":"{range_begin:/registry/masterleases/172.30.141.150; range_end:; response_count:1; response_revision:1523; }","duration":"158.142882ms","start":"2024-03-18T11:12:01.460383Z","end":"2024-03-18T11:12:01.618526Z","steps":["trace[885247361] 'agreement among raft nodes before linearized reading'  (duration: 158.036979ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-18T11:12:01.76033Z","caller":"traceutil/trace.go:171","msg":"trace[315350032] transaction","detail":"{read_only:false; response_revision:1524; number_of_response:1; }","duration":"135.575042ms","start":"2024-03-18T11:12:01.624737Z","end":"2024-03-18T11:12:01.760312Z","steps":["trace[315350032] 'process raft request'  (duration: 135.46794ms)"],"step_count":1}
	
	
	==> gcp-auth [f2692d249c97] <==
	2024/03/18 11:11:12 GCP Auth Webhook started!
	2024/03/18 11:11:18 Ready to marshal response ...
	2024/03/18 11:11:18 Ready to write response ...
	2024/03/18 11:11:23 Ready to marshal response ...
	2024/03/18 11:11:23 Ready to write response ...
	2024/03/18 11:11:30 Ready to marshal response ...
	2024/03/18 11:11:30 Ready to write response ...
	2024/03/18 11:11:30 Ready to marshal response ...
	2024/03/18 11:11:30 Ready to write response ...
	2024/03/18 11:11:30 Ready to marshal response ...
	2024/03/18 11:11:30 Ready to write response ...
	2024/03/18 11:11:41 Ready to marshal response ...
	2024/03/18 11:11:41 Ready to write response ...
	2024/03/18 11:11:53 Ready to marshal response ...
	2024/03/18 11:11:53 Ready to write response ...
	
	
	==> kernel <==
	 11:12:07 up 6 min,  0 users,  load average: 2.91, 2.12, 0.96
	Linux addons-209500 5.10.207 #1 SMP Fri Mar 15 21:13:47 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [b3e71aec31bc] <==
	Trace[160015087]: ["List(recursive=true) etcd3" audit-id:50fc5edc-90ef-4cd2-a23d-4a0c77163186,key:/pods/kube-system,resourceVersion:,resourceVersionMatch:,limit:0,continue: 760ms (11:10:09.189)]
	Trace[160015087]: [761.023663ms] [761.023663ms] END
	I0318 11:10:47.635452       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0318 11:10:52.061775       1 trace.go:236] Trace[1011490971]: "GuaranteedUpdate etcd3" audit-id:,key:/masterleases/172.30.141.150,type:*v1.Endpoints,resource:apiServerIPInfo (18-Mar-2024 11:10:51.450) (total time: 611ms):
	Trace[1011490971]: ---"initial value restored" 458ms (11:10:51.909)
	Trace[1011490971]: ---"Transaction prepared" 140ms (11:10:52.049)
	Trace[1011490971]: [611.226517ms] [611.226517ms] END
	I0318 11:10:59.085487       1 trace.go:236] Trace[1513453068]: "Patch" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:b8736b86-6941-449c-ab40-af4812c831f4,client:172.30.141.150,protocol:HTTP/2.0,resource:events,scope:resource,url:/api/v1/namespaces/gadget/events/gadget-2h86p.17bdd75e7bd18899,user-agent:kubelet/v1.28.4 (linux/amd64) kubernetes/bae2c62,verb:PATCH (18-Mar-2024 11:10:58.288) (total time: 796ms):
	Trace[1513453068]: ["GuaranteedUpdate etcd3" audit-id:b8736b86-6941-449c-ab40-af4812c831f4,key:/events/gadget/gadget-2h86p.17bdd75e7bd18899,type:*core.Event,resource:events 796ms (11:10:58.288)
	Trace[1513453068]:  ---"initial value restored" 59ms (11:10:58.347)
	Trace[1513453068]:  ---"Transaction prepared" 346ms (11:10:58.698)
	Trace[1513453068]:  ---"Txn call completed" 386ms (11:10:59.085)]
	Trace[1513453068]: ---"Object stored in database" 733ms (11:10:59.085)
	Trace[1513453068]: [796.914265ms] [796.914265ms] END
	I0318 11:11:30.654456       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.105.170.43"}
	I0318 11:11:35.318851       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	I0318 11:11:35.343406       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0318 11:11:36.387732       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0318 11:11:41.429393       1 controller.go:624] quota admission added evaluator for: ingresses.networking.k8s.io
	I0318 11:11:41.915187       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.110.206.255"}
	I0318 11:11:47.635774       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0318 11:11:55.906967       1 trace.go:236] Trace[1387544598]: "Update" accept:application/json, */*,audit-id:1b2a689e-8539-4faf-9004-c2d35cd68731,client:10.244.0.21,protocol:HTTP/2.0,resource:leases,scope:resource,url:/apis/coordination.k8s.io/v1/namespaces/ingress-nginx/leases/ingress-nginx-leader,user-agent:nginx-ingress-controller/v1.10.0 (linux/amd64) ingress-nginx/71f78d49f0a496c31d4c19f095469f3f23900f8a,verb:PUT (18-Mar-2024 11:11:55.360) (total time: 546ms):
	Trace[1387544598]: ["GuaranteedUpdate etcd3" audit-id:1b2a689e-8539-4faf-9004-c2d35cd68731,key:/leases/ingress-nginx/ingress-nginx-leader,type:*coordination.Lease,resource:leases.coordination.k8s.io 546ms (11:11:55.360)
	Trace[1387544598]:  ---"Txn call completed" 545ms (11:11:55.906)]
	Trace[1387544598]: [546.592452ms] [546.592452ms] END
	
	
	==> kube-controller-manager [ecd6c9cd6be4] <==
	I0318 11:11:30.844751       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="headlamp/headlamp-5485c556b" duration="32.600006ms"
	I0318 11:11:30.845627       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="headlamp/headlamp-5485c556b" duration="76.202µs"
	I0318 11:11:30.852846       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="headlamp/headlamp-5485c556b" duration="37.901µs"
	I0318 11:11:35.753315       1 shared_informer.go:311] Waiting for caches to sync for garbage collector
	I0318 11:11:35.753662       1 shared_informer.go:318] Caches are synced for garbage collector
	E0318 11:11:36.390518       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
	W0318 11:11:37.386332       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0318 11:11:37.386467       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0318 11:11:39.347961       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0318 11:11:39.348224       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0318 11:11:42.502039       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/tiller-deploy-7b677967b9" duration="9.301µs"
	I0318 11:11:43.518755       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	W0318 11:11:44.392351       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0318 11:11:44.392387       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0318 11:11:45.605271       1 namespace_controller.go:182] "Namespace has been deleted" namespace="gadget"
	I0318 11:11:46.438663       1 replica_set.go:676] "Finished syncing" kind="ReplicationController" key="kube-system/registry" duration="10µs"
	I0318 11:11:49.871583       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	W0318 11:11:51.146922       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0318 11:11:51.146957       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0318 11:11:52.528355       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I0318 11:11:57.909908       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="headlamp/headlamp-5485c556b" duration="76.802µs"
	I0318 11:11:57.963582       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="headlamp/headlamp-5485c556b" duration="22.758945ms"
	I0318 11:11:57.964359       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="headlamp/headlamp-5485c556b" duration="33.8µs"
	I0318 11:12:05.300332       1 shared_informer.go:311] Waiting for caches to sync for resource quota
	I0318 11:12:05.300369       1 shared_informer.go:318] Caches are synced for resource quota
	
	
	==> kube-proxy [f128e7d298ac] <==
	I0318 11:08:14.544584       1 server_others.go:69] "Using iptables proxy"
	I0318 11:08:15.607130       1 node.go:141] Successfully retrieved node IP: 172.30.141.150
	I0318 11:08:16.015894       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0318 11:08:16.015933       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0318 11:08:16.025523       1 server_others.go:152] "Using iptables Proxier"
	I0318 11:08:16.026159       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0318 11:08:16.028408       1 server.go:846] "Version info" version="v1.28.4"
	I0318 11:08:16.028601       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 11:08:16.033188       1 config.go:188] "Starting service config controller"
	I0318 11:08:16.033448       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0318 11:08:16.033750       1 config.go:97] "Starting endpoint slice config controller"
	I0318 11:08:16.033932       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0318 11:08:16.039513       1 config.go:315] "Starting node config controller"
	I0318 11:08:16.039811       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0318 11:08:16.134365       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0318 11:08:16.134438       1 shared_informer.go:318] Caches are synced for service config
	I0318 11:08:16.140137       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [d497388cccb1] <==
	W0318 11:07:48.782052       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0318 11:07:48.782130       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0318 11:07:48.923744       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0318 11:07:48.923899       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0318 11:07:48.941020       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0318 11:07:48.941093       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0318 11:07:49.077779       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0318 11:07:49.078156       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0318 11:07:49.137424       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0318 11:07:49.138309       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0318 11:07:49.168614       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0318 11:07:49.168642       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0318 11:07:49.179310       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0318 11:07:49.179337       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0318 11:07:49.208119       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0318 11:07:49.208355       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0318 11:07:49.218961       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0318 11:07:49.219153       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0318 11:07:49.235455       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0318 11:07:49.235586       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0318 11:07:49.288187       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0318 11:07:49.288211       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0318 11:07:49.421407       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0318 11:07:49.421441       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0318 11:07:52.026591       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Mar 18 11:11:53 addons-209500 kubelet[2915]: I0318 11:11:53.032033    2915 topology_manager.go:215] "Topology Admit Handler" podUID="d7b2728f-eeb8-489c-81fa-8c369278d84f" podNamespace="default" podName="task-pv-pod"
	Mar 18 11:11:53 addons-209500 kubelet[2915]: E0318 11:11:53.038221    2915 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="39d3e807-26d5-411d-b2a3-11f2d029f106" containerName="registry-proxy"
	Mar 18 11:11:53 addons-209500 kubelet[2915]: E0318 11:11:53.038373    2915 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4b346270-6ab9-491b-9c70-fe7c824d70ee" containerName="gadget"
	Mar 18 11:11:53 addons-209500 kubelet[2915]: E0318 11:11:53.038716    2915 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b15348ce-750d-45d2-a9c4-bb54040d40dc" containerName="registry"
	Mar 18 11:11:53 addons-209500 kubelet[2915]: E0318 11:11:53.039329    2915 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="60cda2b1-a5f7-4ce8-b9ef-0ed5074d1788" containerName="tiller"
	Mar 18 11:11:53 addons-209500 kubelet[2915]: I0318 11:11:53.042928    2915 memory_manager.go:346] "RemoveStaleState removing state" podUID="60cda2b1-a5f7-4ce8-b9ef-0ed5074d1788" containerName="tiller"
	Mar 18 11:11:53 addons-209500 kubelet[2915]: I0318 11:11:53.042964    2915 memory_manager.go:346] "RemoveStaleState removing state" podUID="39d3e807-26d5-411d-b2a3-11f2d029f106" containerName="registry-proxy"
	Mar 18 11:11:53 addons-209500 kubelet[2915]: I0318 11:11:53.042978    2915 memory_manager.go:346] "RemoveStaleState removing state" podUID="4b346270-6ab9-491b-9c70-fe7c824d70ee" containerName="gadget"
	Mar 18 11:11:53 addons-209500 kubelet[2915]: I0318 11:11:53.042998    2915 memory_manager.go:346] "RemoveStaleState removing state" podUID="b15348ce-750d-45d2-a9c4-bb54040d40dc" containerName="registry"
	Mar 18 11:11:53 addons-209500 kubelet[2915]: E0318 11:11:53.223181    2915 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 18 11:11:53 addons-209500 kubelet[2915]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 18 11:11:53 addons-209500 kubelet[2915]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 18 11:11:53 addons-209500 kubelet[2915]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 18 11:11:53 addons-209500 kubelet[2915]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 18 11:11:53 addons-209500 kubelet[2915]: I0318 11:11:53.225617    2915 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/d7b2728f-eeb8-489c-81fa-8c369278d84f-gcp-creds\") pod \"task-pv-pod\" (UID: \"d7b2728f-eeb8-489c-81fa-8c369278d84f\") " pod="default/task-pv-pod"
	Mar 18 11:11:53 addons-209500 kubelet[2915]: I0318 11:11:53.225669    2915 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-dbd5616c-c2f7-4dea-b8ec-9865fec4a261\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^52c33497-e518-11ee-a1b0-62197980c6ab\") pod \"task-pv-pod\" (UID: \"d7b2728f-eeb8-489c-81fa-8c369278d84f\") " pod="default/task-pv-pod"
	Mar 18 11:11:53 addons-209500 kubelet[2915]: I0318 11:11:53.225700    2915 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dnnm9\" (UniqueName: \"kubernetes.io/projected/d7b2728f-eeb8-489c-81fa-8c369278d84f-kube-api-access-dnnm9\") pod \"task-pv-pod\" (UID: \"d7b2728f-eeb8-489c-81fa-8c369278d84f\") " pod="default/task-pv-pod"
	Mar 18 11:11:53 addons-209500 kubelet[2915]: I0318 11:11:53.358371    2915 operation_generator.go:665] "MountVolume.MountDevice succeeded for volume \"pvc-dbd5616c-c2f7-4dea-b8ec-9865fec4a261\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^52c33497-e518-11ee-a1b0-62197980c6ab\") pod \"task-pv-pod\" (UID: \"d7b2728f-eeb8-489c-81fa-8c369278d84f\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/hostpath.csi.k8s.io/a6a90976690de995abad29eb08d85b52b2a564bcbb593e8e0c497303f35247c6/globalmount\"" pod="default/task-pv-pod"
	Mar 18 11:11:53 addons-209500 kubelet[2915]: I0318 11:11:53.885685    2915 scope.go:117] "RemoveContainer" containerID="fe68b61dc47b92d65311b4500a3b946f45fb08eff0b16dbce5225b96b4aefe97"
	Mar 18 11:11:54 addons-209500 kubelet[2915]: I0318 11:11:54.861783    2915 scope.go:117] "RemoveContainer" containerID="9d152a442b814a2accedb6ed61c461fe53171fc4bbde5e7e3a5f3f8730536899"
	Mar 18 11:11:55 addons-209500 kubelet[2915]: I0318 11:11:55.706131    2915 scope.go:117] "RemoveContainer" containerID="3435715878f5aa7ba31aa82d4855f08c634e5648d8284758724e304c60859a63"
	Mar 18 11:11:56 addons-209500 kubelet[2915]: I0318 11:11:56.173910    2915 kubelet_pods.go:906] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-kb5lj" secret="" err="secret \"gcp-auth\" not found"
	Mar 18 11:11:56 addons-209500 kubelet[2915]: I0318 11:11:56.229250    2915 scope.go:117] "RemoveContainer" containerID="f45b048b93f405c1aea1fd08f90d2ac6ee0ad004017a715bc742c60269968354"
	Mar 18 11:11:56 addons-209500 kubelet[2915]: I0318 11:11:56.814219    2915 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f7add3a818fff04afe071b10b22006c9b8a9c28a094a03bea47884dc504d6c30"
	Mar 18 11:11:57 addons-209500 kubelet[2915]: I0318 11:11:57.935596    2915 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="headlamp/headlamp-5485c556b-h2gvx" podStartSLOduration=3.28642849 podCreationTimestamp="2024-03-18 11:11:30 +0000 UTC" firstStartedPulling="2024-03-18 11:11:31.689221022 +0000 UTC m=+218.826209406" lastFinishedPulling="2024-03-18 11:11:56.338346394 +0000 UTC m=+243.475334778" observedRunningTime="2024-03-18 11:11:57.907662294 +0000 UTC m=+245.044650678" watchObservedRunningTime="2024-03-18 11:11:57.935553862 +0000 UTC m=+245.072542346"
	
	
	==> storage-provisioner [ac57dd1f8419] <==
	I0318 11:08:41.240450       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0318 11:08:41.297484       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0318 11:08:41.297527       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0318 11:08:41.328055       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0318 11:08:41.328358       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-209500_dec5ed7e-ef6e-4420-aa1e-de5728f2ca4a!
	I0318 11:08:41.328402       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"56fc82f7-8a79-453b-8d1c-aba684050280", APIVersion:"v1", ResourceVersion:"778", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-209500_dec5ed7e-ef6e-4420-aa1e-de5728f2ca4a became leader
	I0318 11:08:41.429548       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-209500_dec5ed7e-ef6e-4420-aa1e-de5728f2ca4a!
	

-- /stdout --
** stderr ** 
	W0318 11:11:59.139836    2588 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p addons-209500 -n addons-209500
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p addons-209500 -n addons-209500: (13.9824058s)
helpers_test.go:261: (dbg) Run:  kubectl --context addons-209500 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: hello-world-app-5d77478584-jrj6f ingress-nginx-admission-create-25xxj ingress-nginx-admission-patch-7zdkj
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-209500 describe pod hello-world-app-5d77478584-jrj6f ingress-nginx-admission-create-25xxj ingress-nginx-admission-patch-7zdkj
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-209500 describe pod hello-world-app-5d77478584-jrj6f ingress-nginx-admission-create-25xxj ingress-nginx-admission-patch-7zdkj: exit status 1 (385.718ms)

-- stdout --
	Name:             hello-world-app-5d77478584-jrj6f
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-209500/172.30.141.150
	Start Time:       Mon, 18 Mar 2024 11:12:20 +0000
	Labels:           app=hello-world-app
	                  pod-template-hash=5d77478584
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/hello-world-app-5d77478584
	Containers:
	  hello-world-app:
	    Container ID:   
	    Image:          gcr.io/google-samples/hello-app:1.0
	    Image ID:       
	    Port:           8080/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-n4kjp (ro)
	Conditions:
	  Type              Status
	  Initialized       True 
	  Ready             False 
	  ContainersReady   False 
	  PodScheduled      True 
	Volumes:
	  kube-api-access-n4kjp:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  4s    default-scheduler  Successfully assigned default/hello-world-app-5d77478584-jrj6f to addons-209500
	  Normal  Pulling    2s    kubelet            Pulling image "gcr.io/google-samples/hello-app:1.0"

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-25xxj" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-7zdkj" not found

** /stderr **
helpers_test.go:279: kubectl --context addons-209500 describe pod hello-world-app-5d77478584-jrj6f ingress-nginx-admission-create-25xxj ingress-nginx-admission-patch-7zdkj: exit status 1
--- FAIL: TestAddons/parallel/Registry (71.26s)

TestErrorSpam/setup (208.4s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-windows-amd64.exe start -p nospam-852500 -n=1 --memory=2250 --wait=false --log_dir=C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-852500 --driver=hyperv
E0318 11:16:13.022080   13424 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-209500\client.crt: The system cannot find the path specified.
E0318 11:16:13.042911   13424 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-209500\client.crt: The system cannot find the path specified.
E0318 11:16:13.064279   13424 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-209500\client.crt: The system cannot find the path specified.
E0318 11:16:13.092170   13424 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-209500\client.crt: The system cannot find the path specified.
E0318 11:16:13.134365   13424 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-209500\client.crt: The system cannot find the path specified.
E0318 11:16:13.218403   13424 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-209500\client.crt: The system cannot find the path specified.
E0318 11:16:13.386832   13424 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-209500\client.crt: The system cannot find the path specified.
E0318 11:16:13.711219   13424 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-209500\client.crt: The system cannot find the path specified.
E0318 11:16:14.364090   13424 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-209500\client.crt: The system cannot find the path specified.
E0318 11:16:15.646353   13424 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-209500\client.crt: The system cannot find the path specified.
E0318 11:16:18.219502   13424 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-209500\client.crt: The system cannot find the path specified.
E0318 11:16:23.341647   13424 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-209500\client.crt: The system cannot find the path specified.
E0318 11:16:33.586067   13424 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-209500\client.crt: The system cannot find the path specified.
E0318 11:16:54.080328   13424 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-209500\client.crt: The system cannot find the path specified.
E0318 11:17:35.049932   13424 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-209500\client.crt: The system cannot find the path specified.
E0318 11:18:56.980666   13424 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-209500\client.crt: The system cannot find the path specified.
error_spam_test.go:81: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p nospam-852500 -n=1 --memory=2250 --wait=false --log_dir=C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-852500 --driver=hyperv: exit status 90 (3m28.3787625s)

-- stdout --
	* [nospam-852500] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4046 Build 19045.4046
	  - KUBECONFIG=C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube3\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=18429
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the hyperv driver based on user configuration
	* Starting "nospam-852500" primary control-plane node in "nospam-852500" cluster
	* Creating hyperv VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
	
	

-- /stdout --
** stderr ** 
	W0318 11:16:11.070989    5420 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Mar 18 11:18:06 nospam-852500 systemd[1]: Starting Docker Application Container Engine...
	Mar 18 11:18:06 nospam-852500 dockerd[658]: time="2024-03-18T11:18:06.923019847Z" level=info msg="Starting up"
	Mar 18 11:18:06 nospam-852500 dockerd[658]: time="2024-03-18T11:18:06.923991887Z" level=info msg="containerd not running, starting managed containerd"
	Mar 18 11:18:06 nospam-852500 dockerd[658]: time="2024-03-18T11:18:06.925689682Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=664
	Mar 18 11:18:06 nospam-852500 dockerd[664]: time="2024-03-18T11:18:06.955947314Z" level=info msg="starting containerd" revision=dcf2847247e18caba8dce86522029642f60fe96b version=v1.7.14
	Mar 18 11:18:06 nospam-852500 dockerd[664]: time="2024-03-18T11:18:06.987560862Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Mar 18 11:18:06 nospam-852500 dockerd[664]: time="2024-03-18T11:18:06.987639057Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Mar 18 11:18:06 nospam-852500 dockerd[664]: time="2024-03-18T11:18:06.987730652Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Mar 18 11:18:06 nospam-852500 dockerd[664]: time="2024-03-18T11:18:06.987850144Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Mar 18 11:18:06 nospam-852500 dockerd[664]: time="2024-03-18T11:18:06.987962737Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Mar 18 11:18:06 nospam-852500 dockerd[664]: time="2024-03-18T11:18:06.988067731Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Mar 18 11:18:06 nospam-852500 dockerd[664]: time="2024-03-18T11:18:06.988389611Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Mar 18 11:18:06 nospam-852500 dockerd[664]: time="2024-03-18T11:18:06.988488205Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Mar 18 11:18:06 nospam-852500 dockerd[664]: time="2024-03-18T11:18:06.988511903Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Mar 18 11:18:06 nospam-852500 dockerd[664]: time="2024-03-18T11:18:06.988523303Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Mar 18 11:18:06 nospam-852500 dockerd[664]: time="2024-03-18T11:18:06.988685793Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Mar 18 11:18:06 nospam-852500 dockerd[664]: time="2024-03-18T11:18:06.989037171Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Mar 18 11:18:06 nospam-852500 dockerd[664]: time="2024-03-18T11:18:06.991930392Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Mar 18 11:18:06 nospam-852500 dockerd[664]: time="2024-03-18T11:18:06.992113681Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Mar 18 11:18:06 nospam-852500 dockerd[664]: time="2024-03-18T11:18:06.992475559Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Mar 18 11:18:06 nospam-852500 dockerd[664]: time="2024-03-18T11:18:06.992574053Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Mar 18 11:18:06 nospam-852500 dockerd[664]: time="2024-03-18T11:18:06.992695345Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Mar 18 11:18:06 nospam-852500 dockerd[664]: time="2024-03-18T11:18:06.992839536Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Mar 18 11:18:06 nospam-852500 dockerd[664]: time="2024-03-18T11:18:06.992931731Z" level=info msg="metadata content store policy set" policy=shared
	Mar 18 11:18:07 nospam-852500 dockerd[664]: time="2024-03-18T11:18:07.017416582Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Mar 18 11:18:07 nospam-852500 dockerd[664]: time="2024-03-18T11:18:07.017570173Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Mar 18 11:18:07 nospam-852500 dockerd[664]: time="2024-03-18T11:18:07.017651068Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Mar 18 11:18:07 nospam-852500 dockerd[664]: time="2024-03-18T11:18:07.017749262Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Mar 18 11:18:07 nospam-852500 dockerd[664]: time="2024-03-18T11:18:07.017771061Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Mar 18 11:18:07 nospam-852500 dockerd[664]: time="2024-03-18T11:18:07.017888554Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Mar 18 11:18:07 nospam-852500 dockerd[664]: time="2024-03-18T11:18:07.018418424Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Mar 18 11:18:07 nospam-852500 dockerd[664]: time="2024-03-18T11:18:07.018568115Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Mar 18 11:18:07 nospam-852500 dockerd[664]: time="2024-03-18T11:18:07.018670009Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Mar 18 11:18:07 nospam-852500 dockerd[664]: time="2024-03-18T11:18:07.018689708Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Mar 18 11:18:07 nospam-852500 dockerd[664]: time="2024-03-18T11:18:07.018710707Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Mar 18 11:18:07 nospam-852500 dockerd[664]: time="2024-03-18T11:18:07.018727006Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Mar 18 11:18:07 nospam-852500 dockerd[664]: time="2024-03-18T11:18:07.018740105Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Mar 18 11:18:07 nospam-852500 dockerd[664]: time="2024-03-18T11:18:07.018755904Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Mar 18 11:18:07 nospam-852500 dockerd[664]: time="2024-03-18T11:18:07.018772303Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Mar 18 11:18:07 nospam-852500 dockerd[664]: time="2024-03-18T11:18:07.018785102Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Mar 18 11:18:07 nospam-852500 dockerd[664]: time="2024-03-18T11:18:07.018796602Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Mar 18 11:18:07 nospam-852500 dockerd[664]: time="2024-03-18T11:18:07.018809401Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Mar 18 11:18:07 nospam-852500 dockerd[664]: time="2024-03-18T11:18:07.018831600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Mar 18 11:18:07 nospam-852500 dockerd[664]: time="2024-03-18T11:18:07.018847299Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Mar 18 11:18:07 nospam-852500 dockerd[664]: time="2024-03-18T11:18:07.018859098Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Mar 18 11:18:07 nospam-852500 dockerd[664]: time="2024-03-18T11:18:07.018873597Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Mar 18 11:18:07 nospam-852500 dockerd[664]: time="2024-03-18T11:18:07.018885397Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Mar 18 11:18:07 nospam-852500 dockerd[664]: time="2024-03-18T11:18:07.018899996Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Mar 18 11:18:07 nospam-852500 dockerd[664]: time="2024-03-18T11:18:07.018911995Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Mar 18 11:18:07 nospam-852500 dockerd[664]: time="2024-03-18T11:18:07.018923994Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Mar 18 11:18:07 nospam-852500 dockerd[664]: time="2024-03-18T11:18:07.018936494Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Mar 18 11:18:07 nospam-852500 dockerd[664]: time="2024-03-18T11:18:07.018953593Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Mar 18 11:18:07 nospam-852500 dockerd[664]: time="2024-03-18T11:18:07.018965492Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Mar 18 11:18:07 nospam-852500 dockerd[664]: time="2024-03-18T11:18:07.018976491Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Mar 18 11:18:07 nospam-852500 dockerd[664]: time="2024-03-18T11:18:07.018989491Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Mar 18 11:18:07 nospam-852500 dockerd[664]: time="2024-03-18T11:18:07.019004790Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Mar 18 11:18:07 nospam-852500 dockerd[664]: time="2024-03-18T11:18:07.019025289Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Mar 18 11:18:07 nospam-852500 dockerd[664]: time="2024-03-18T11:18:07.019037688Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Mar 18 11:18:07 nospam-852500 dockerd[664]: time="2024-03-18T11:18:07.019048187Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Mar 18 11:18:07 nospam-852500 dockerd[664]: time="2024-03-18T11:18:07.019092085Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Mar 18 11:18:07 nospam-852500 dockerd[664]: time="2024-03-18T11:18:07.019111384Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Mar 18 11:18:07 nospam-852500 dockerd[664]: time="2024-03-18T11:18:07.019122283Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Mar 18 11:18:07 nospam-852500 dockerd[664]: time="2024-03-18T11:18:07.019135282Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Mar 18 11:18:07 nospam-852500 dockerd[664]: time="2024-03-18T11:18:07.019233376Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Mar 18 11:18:07 nospam-852500 dockerd[664]: time="2024-03-18T11:18:07.019301573Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Mar 18 11:18:07 nospam-852500 dockerd[664]: time="2024-03-18T11:18:07.019316972Z" level=info msg="NRI interface is disabled by configuration."
	Mar 18 11:18:07 nospam-852500 dockerd[664]: time="2024-03-18T11:18:07.019545758Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Mar 18 11:18:07 nospam-852500 dockerd[664]: time="2024-03-18T11:18:07.019595456Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Mar 18 11:18:07 nospam-852500 dockerd[664]: time="2024-03-18T11:18:07.019669551Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Mar 18 11:18:07 nospam-852500 dockerd[664]: time="2024-03-18T11:18:07.019708949Z" level=info msg="containerd successfully booted in 0.064783s"
	Mar 18 11:18:08 nospam-852500 dockerd[658]: time="2024-03-18T11:18:08.196576167Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Mar 18 11:18:08 nospam-852500 dockerd[658]: time="2024-03-18T11:18:08.227032814Z" level=info msg="Loading containers: start."
	Mar 18 11:18:08 nospam-852500 dockerd[658]: time="2024-03-18T11:18:08.491026499Z" level=info msg="Loading containers: done."
	Mar 18 11:18:08 nospam-852500 dockerd[658]: time="2024-03-18T11:18:08.513029287Z" level=info msg="Docker daemon" commit=061aa95 containerd-snapshotter=false storage-driver=overlay2 version=25.0.4
	Mar 18 11:18:08 nospam-852500 dockerd[658]: time="2024-03-18T11:18:08.513217786Z" level=info msg="Daemon has completed initialization"
	Mar 18 11:18:08 nospam-852500 dockerd[658]: time="2024-03-18T11:18:08.617084789Z" level=info msg="API listen on /var/run/docker.sock"
	Mar 18 11:18:08 nospam-852500 dockerd[658]: time="2024-03-18T11:18:08.617176888Z" level=info msg="API listen on [::]:2376"
	Mar 18 11:18:08 nospam-852500 systemd[1]: Started Docker Application Container Engine.
	Mar 18 11:18:38 nospam-852500 systemd[1]: Stopping Docker Application Container Engine...
	Mar 18 11:18:38 nospam-852500 dockerd[658]: time="2024-03-18T11:18:38.125378834Z" level=info msg="Processing signal 'terminated'"
	Mar 18 11:18:38 nospam-852500 dockerd[658]: time="2024-03-18T11:18:38.127307420Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Mar 18 11:18:38 nospam-852500 dockerd[658]: time="2024-03-18T11:18:38.127516430Z" level=info msg="Daemon shutdown complete"
	Mar 18 11:18:38 nospam-852500 dockerd[658]: time="2024-03-18T11:18:38.127685037Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Mar 18 11:18:38 nospam-852500 dockerd[658]: time="2024-03-18T11:18:38.127844544Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Mar 18 11:18:39 nospam-852500 systemd[1]: docker.service: Deactivated successfully.
	Mar 18 11:18:39 nospam-852500 systemd[1]: Stopped Docker Application Container Engine.
	Mar 18 11:18:39 nospam-852500 systemd[1]: Starting Docker Application Container Engine...
	Mar 18 11:18:39 nospam-852500 dockerd[1006]: time="2024-03-18T11:18:39.208990122Z" level=info msg="Starting up"
	Mar 18 11:19:39 nospam-852500 dockerd[1006]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Mar 18 11:19:39 nospam-852500 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Mar 18 11:19:39 nospam-852500 systemd[1]: docker.service: Failed with result 'exit-code'.
	Mar 18 11:19:39 nospam-852500 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:83: "out/minikube-windows-amd64.exe start -p nospam-852500 -n=1 --memory=2250 --wait=false --log_dir=C:\\Users\\jenkins.minikube3\\AppData\\Local\\Temp\\nospam-852500 --driver=hyperv" failed: exit status 90
error_spam_test.go:96: unexpected stderr: "W0318 11:16:11.070989    5420 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube3\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified."
error_spam_test.go:96: unexpected stderr: "X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1"
error_spam_test.go:96: unexpected stderr: "stdout:"
error_spam_test.go:96: unexpected stderr: "stderr:"
error_spam_test.go:96: unexpected stderr: "Job for docker.service failed because the control process exited with error code."
error_spam_test.go:96: unexpected stderr: "See \"systemctl status docker.service\" and \"journalctl -xeu docker.service\" for details."
error_spam_test.go:96: unexpected stderr: "sudo journalctl --no-pager -u docker:"
error_spam_test.go:96: unexpected stderr: "-- stdout --"
error_spam_test.go:96: unexpected stderr: "Mar 18 11:18:06 nospam-852500 systemd[1]: Starting Docker Application Container Engine..."
error_spam_test.go:96: unexpected stderr: "Mar 18 11:18:06 nospam-852500 dockerd[658]: time=\"2024-03-18T11:18:06.923019847Z\" level=info msg=\"Starting up\""
error_spam_test.go:96: unexpected stderr: "Mar 18 11:18:06 nospam-852500 dockerd[658]: time=\"2024-03-18T11:18:06.923991887Z\" level=info msg=\"containerd not running, starting managed containerd\""
error_spam_test.go:96: unexpected stderr: "Mar 18 11:18:06 nospam-852500 dockerd[658]: time=\"2024-03-18T11:18:06.925689682Z\" level=info msg=\"started new containerd process\" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=664"
error_spam_test.go:96: unexpected stderr: "Mar 18 11:18:06 nospam-852500 dockerd[664]: time=\"2024-03-18T11:18:06.955947314Z\" level=info msg=\"starting containerd\" revision=dcf2847247e18caba8dce86522029642f60fe96b version=v1.7.14"
error_spam_test.go:96: unexpected stderr: "Mar 18 11:18:06 nospam-852500 dockerd[664]: time=\"2024-03-18T11:18:06.987560862Z\" level=info msg=\"loading plugin \\\"io.containerd.event.v1.exchange\\\"...\" type=io.containerd.event.v1"
error_spam_test.go:96: unexpected stderr: "Mar 18 11:18:06 nospam-852500 dockerd[664]: time=\"2024-03-18T11:18:06.987639057Z\" level=info msg=\"loading plugin \\\"io.containerd.internal.v1.opt\\\"...\" type=io.containerd.internal.v1"
error_spam_test.go:96: unexpected stderr: "Mar 18 11:18:06 nospam-852500 dockerd[664]: time=\"2024-03-18T11:18:06.987730652Z\" level=info msg=\"loading plugin \\\"io.containerd.warning.v1.deprecations\\\"...\" type=io.containerd.warning.v1"
error_spam_test.go:96: unexpected stderr: "Mar 18 11:18:06 nospam-852500 dockerd[664]: time=\"2024-03-18T11:18:06.987850144Z\" level=info msg=\"loading plugin \\\"io.containerd.snapshotter.v1.blockfile\\\"...\" type=io.containerd.snapshotter.v1"
error_spam_test.go:96: unexpected stderr: "Mar 18 11:18:06 nospam-852500 dockerd[664]: time=\"2024-03-18T11:18:06.987962737Z\" level=info msg=\"skip loading plugin \\\"io.containerd.snapshotter.v1.blockfile\\\"...\" error=\"no scratch file generator: skip plugin\" type=io.containerd.snapshotter.v1"
error_spam_test.go:96: unexpected stderr: "Mar 18 11:18:06 nospam-852500 dockerd[664]: time=\"2024-03-18T11:18:06.988067731Z\" level=info msg=\"loading plugin \\\"io.containerd.snapshotter.v1.btrfs\\\"...\" type=io.containerd.snapshotter.v1"
error_spam_test.go:96: unexpected stderr: "Mar 18 11:18:06 nospam-852500 dockerd[664]: time=\"2024-03-18T11:18:06.988389611Z\" level=info msg=\"skip loading plugin \\\"io.containerd.snapshotter.v1.btrfs\\\"...\" error=\"path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin\" type=io.containerd.snapshotter.v1"
error_spam_test.go:96: unexpected stderr: "Mar 18 11:18:06 nospam-852500 dockerd[664]: time=\"2024-03-18T11:18:06.988488205Z\" level=info msg=\"loading plugin \\\"io.containerd.snapshotter.v1.devmapper\\\"...\" type=io.containerd.snapshotter.v1"
error_spam_test.go:96: unexpected stderr: "Mar 18 11:18:06 nospam-852500 dockerd[664]: time=\"2024-03-18T11:18:06.988511903Z\" level=warning msg=\"failed to load plugin io.containerd.snapshotter.v1.devmapper\" error=\"devmapper not configured\""
error_spam_test.go:96: unexpected stderr: "Mar 18 11:18:06 nospam-852500 dockerd[664]: time=\"2024-03-18T11:18:06.988523303Z\" level=info msg=\"loading plugin \\\"io.containerd.snapshotter.v1.native\\\"...\" type=io.containerd.snapshotter.v1"
error_spam_test.go:96: unexpected stderr: "Mar 18 11:18:06 nospam-852500 dockerd[664]: time=\"2024-03-18T11:18:06.988685793Z\" level=info msg=\"loading plugin \\\"io.containerd.snapshotter.v1.overlayfs\\\"...\" type=io.containerd.snapshotter.v1"
error_spam_test.go:96: unexpected stderr: "Mar 18 11:18:06 nospam-852500 dockerd[664]: time=\"2024-03-18T11:18:06.989037171Z\" level=info msg=\"loading plugin \\\"io.containerd.snapshotter.v1.aufs\\\"...\" type=io.containerd.snapshotter.v1"
error_spam_test.go:96: unexpected stderr: "Mar 18 11:18:06 nospam-852500 dockerd[664]: time=\"2024-03-18T11:18:06.991930392Z\" level=info msg=\"skip loading plugin \\\"io.containerd.snapshotter.v1.aufs\\\"...\" error=\"aufs is not supported (modprobe aufs failed: exit status 1 \\\"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\\\n\\\"): skip plugin\" type=io.containerd.snapshotter.v1"
error_spam_test.go:96: unexpected stderr: "Mar 18 11:18:06 nospam-852500 dockerd[664]: time=\"2024-03-18T11:18:06.992113681Z\" level=info msg=\"loading plugin \\\"io.containerd.snapshotter.v1.zfs\\\"...\" type=io.containerd.snapshotter.v1"
error_spam_test.go:96: unexpected stderr: "Mar 18 11:18:06 nospam-852500 dockerd[664]: time=\"2024-03-18T11:18:06.992475559Z\" level=info msg=\"skip loading plugin \\\"io.containerd.snapshotter.v1.zfs\\\"...\" error=\"path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin\" type=io.containerd.snapshotter.v1"
error_spam_test.go:96: unexpected stderr: "Mar 18 11:18:06 nospam-852500 dockerd[664]: time=\"2024-03-18T11:18:06.992574053Z\" level=info msg=\"loading plugin \\\"io.containerd.content.v1.content\\\"...\" type=io.containerd.content.v1"
error_spam_test.go:96: unexpected stderr: "Mar 18 11:18:06 nospam-852500 dockerd[664]: time=\"2024-03-18T11:18:06.992695345Z\" level=info msg=\"loading plugin \\\"io.containerd.metadata.v1.bolt\\\"...\" type=io.containerd.metadata.v1"
error_spam_test.go:96: unexpected stderr: "Mar 18 11:18:06 nospam-852500 dockerd[664]: time=\"2024-03-18T11:18:06.992839536Z\" level=warning msg=\"could not use snapshotter devmapper in metadata plugin\" error=\"devmapper not configured\""
error_spam_test.go:96: unexpected stderr: "Mar 18 11:18:06 nospam-852500 dockerd[664]: time=\"2024-03-18T11:18:06.992931731Z\" level=info msg=\"metadata content store policy set\" policy=shared"
error_spam_test.go:96: unexpected stderr: "Mar 18 11:18:07 nospam-852500 dockerd[664]: time=\"2024-03-18T11:18:07.017416582Z\" level=info msg=\"loading plugin \\\"io.containerd.gc.v1.scheduler\\\"...\" type=io.containerd.gc.v1"
error_spam_test.go:96: unexpected stderr: "Mar 18 11:18:07 nospam-852500 dockerd[664]: time=\"2024-03-18T11:18:07.017570173Z\" level=info msg=\"loading plugin \\\"io.containerd.differ.v1.walking\\\"...\" type=io.containerd.differ.v1"
error_spam_test.go:96: unexpected stderr: "Mar 18 11:18:07 nospam-852500 dockerd[664]: time=\"2024-03-18T11:18:07.017651068Z\" level=info msg=\"loading plugin \\\"io.containerd.lease.v1.manager\\\"...\" type=io.containerd.lease.v1"
error_spam_test.go:96: unexpected stderr: "Mar 18 11:18:07 nospam-852500 dockerd[664]: time=\"2024-03-18T11:18:07.017749262Z\" level=info msg=\"loading plugin \\\"io.containerd.streaming.v1.manager\\\"...\" type=io.containerd.streaming.v1"
error_spam_test.go:96: unexpected stderr: "Mar 18 11:18:07 nospam-852500 dockerd[664]: time=\"2024-03-18T11:18:07.017771061Z\" level=info msg=\"loading plugin \\\"io.containerd.runtime.v1.linux\\\"...\" type=io.containerd.runtime.v1"
error_spam_test.go:96: unexpected stderr: "Mar 18 11:18:07 nospam-852500 dockerd[664]: time=\"2024-03-18T11:18:07.017888554Z\" level=info msg=\"loading plugin \\\"io.containerd.monitor.v1.cgroups\\\"...\" type=io.containerd.monitor.v1"
error_spam_test.go:96: unexpected stderr: "Mar 18 11:18:07 nospam-852500 dockerd[664]: time=\"2024-03-18T11:18:07.018418424Z\" level=info msg=\"loading plugin \\\"io.containerd.runtime.v2.task\\\"...\" type=io.containerd.runtime.v2"
error_spam_test.go:96: unexpected stderr: "Mar 18 11:18:07 nospam-852500 dockerd[664]: time=\"2024-03-18T11:18:07.018568115Z\" level=info msg=\"loading plugin \\\"io.containerd.runtime.v2.shim\\\"...\" type=io.containerd.runtime.v2"
error_spam_test.go:96: unexpected stderr: "Mar 18 11:18:07 nospam-852500 dockerd[664]: time=\"2024-03-18T11:18:07.018670009Z\" level=info msg=\"loading plugin \\\"io.containerd.sandbox.store.v1.local\\\"...\" type=io.containerd.sandbox.store.v1"
error_spam_test.go:96: unexpected stderr: "Mar 18 11:18:07 nospam-852500 dockerd[664]: time=\"2024-03-18T11:18:07.018689708Z\" level=info msg=\"loading plugin \\\"io.containerd.sandbox.controller.v1.local\\\"...\" type=io.containerd.sandbox.controller.v1"
error_spam_test.go:96: unexpected stderr: "Mar 18 11:18:07 nospam-852500 dockerd[664]: time=\"2024-03-18T11:18:07.018710707Z\" level=info msg=\"loading plugin \\\"io.containerd.service.v1.containers-service\\\"...\" type=io.containerd.service.v1"
error_spam_test.go:96: unexpected stderr: "Mar 18 11:18:07 nospam-852500 dockerd[664]: time=\"2024-03-18T11:18:07.018727006Z\" level=info msg=\"loading plugin \\\"io.containerd.service.v1.content-service\\\"...\" type=io.containerd.service.v1"
error_spam_test.go:96: unexpected stderr: "Mar 18 11:18:07 nospam-852500 dockerd[664]: time=\"2024-03-18T11:18:07.018740105Z\" level=info msg=\"loading plugin \\\"io.containerd.service.v1.diff-service\\\"...\" type=io.containerd.service.v1"
error_spam_test.go:96: unexpected stderr: "Mar 18 11:18:07 nospam-852500 dockerd[664]: time=\"2024-03-18T11:18:07.018755904Z\" level=info msg=\"loading plugin \\\"io.containerd.service.v1.images-service\\\"...\" type=io.containerd.service.v1"
error_spam_test.go:96: unexpected stderr: "Mar 18 11:18:07 nospam-852500 dockerd[664]: time=\"2024-03-18T11:18:07.018772303Z\" level=info msg=\"loading plugin \\\"io.containerd.service.v1.introspection-service\\\"...\" type=io.containerd.service.v1"
error_spam_test.go:96: unexpected stderr: "Mar 18 11:18:07 nospam-852500 dockerd[664]: time=\"2024-03-18T11:18:07.018785102Z\" level=info msg=\"loading plugin \\\"io.containerd.service.v1.namespaces-service\\\"...\" type=io.containerd.service.v1"
error_spam_test.go:96: unexpected stderr: "Mar 18 11:18:07 nospam-852500 dockerd[664]: time=\"2024-03-18T11:18:07.018796602Z\" level=info msg=\"loading plugin \\\"io.containerd.service.v1.snapshots-service\\\"...\" type=io.containerd.service.v1"
error_spam_test.go:96: unexpected stderr: "Mar 18 11:18:07 nospam-852500 dockerd[664]: time=\"2024-03-18T11:18:07.018809401Z\" level=info msg=\"loading plugin \\\"io.containerd.service.v1.tasks-service\\\"...\" type=io.containerd.service.v1"
error_spam_test.go:96: unexpected stderr: "Mar 18 11:18:07 nospam-852500 dockerd[664]: time=\"2024-03-18T11:18:07.018831600Z\" level=info msg=\"loading plugin \\\"io.containerd.grpc.v1.containers\\\"...\" type=io.containerd.grpc.v1"
error_spam_test.go:96: unexpected stderr: "Mar 18 11:18:07 nospam-852500 dockerd[664]: time=\"2024-03-18T11:18:07.018847299Z\" level=info msg=\"loading plugin \\\"io.containerd.grpc.v1.content\\\"...\" type=io.containerd.grpc.v1"
error_spam_test.go:96: unexpected stderr: "Mar 18 11:18:07 nospam-852500 dockerd[664]: time=\"2024-03-18T11:18:07.018859098Z\" level=info msg=\"loading plugin \\\"io.containerd.grpc.v1.diff\\\"...\" type=io.containerd.grpc.v1"
error_spam_test.go:96: unexpected stderr: "Mar 18 11:18:07 nospam-852500 dockerd[664]: time=\"2024-03-18T11:18:07.018873597Z\" level=info msg=\"loading plugin \\\"io.containerd.grpc.v1.events\\\"...\" type=io.containerd.grpc.v1"
error_spam_test.go:96: unexpected stderr: "Mar 18 11:18:07 nospam-852500 dockerd[664]: time=\"2024-03-18T11:18:07.018885397Z\" level=info msg=\"loading plugin \\\"io.containerd.grpc.v1.images\\\"...\" type=io.containerd.grpc.v1"
error_spam_test.go:96: unexpected stderr: "Mar 18 11:18:07 nospam-852500 dockerd[664]: time=\"2024-03-18T11:18:07.018899996Z\" level=info msg=\"loading plugin \\\"io.containerd.grpc.v1.introspection\\\"...\" type=io.containerd.grpc.v1"
error_spam_test.go:96: unexpected stderr: "Mar 18 11:18:07 nospam-852500 dockerd[664]: time=\"2024-03-18T11:18:07.018911995Z\" level=info msg=\"loading plugin \\\"io.containerd.grpc.v1.leases\\\"...\" type=io.containerd.grpc.v1"
error_spam_test.go:96: unexpected stderr: "Mar 18 11:18:07 nospam-852500 dockerd[664]: time=\"2024-03-18T11:18:07.018923994Z\" level=info msg=\"loading plugin \\\"io.containerd.grpc.v1.namespaces\\\"...\" type=io.containerd.grpc.v1"
error_spam_test.go:96: unexpected stderr: "Mar 18 11:18:07 nospam-852500 dockerd[664]: time=\"2024-03-18T11:18:07.018936494Z\" level=info msg=\"loading plugin \\\"io.containerd.grpc.v1.sandbox-controllers\\\"...\" type=io.containerd.grpc.v1"
error_spam_test.go:96: unexpected stderr: "Mar 18 11:18:07 nospam-852500 dockerd[664]: time=\"2024-03-18T11:18:07.018953593Z\" level=info msg=\"loading plugin \\\"io.containerd.grpc.v1.sandboxes\\\"...\" type=io.containerd.grpc.v1"
error_spam_test.go:96: unexpected stderr: "Mar 18 11:18:07 nospam-852500 dockerd[664]: time=\"2024-03-18T11:18:07.018965492Z\" level=info msg=\"loading plugin \\\"io.containerd.grpc.v1.snapshots\\\"...\" type=io.containerd.grpc.v1"
error_spam_test.go:96: unexpected stderr: "Mar 18 11:18:07 nospam-852500 dockerd[664]: time=\"2024-03-18T11:18:07.018976491Z\" level=info msg=\"loading plugin \\\"io.containerd.grpc.v1.streaming\\\"...\" type=io.containerd.grpc.v1"
error_spam_test.go:96: unexpected stderr: "Mar 18 11:18:07 nospam-852500 dockerd[664]: time=\"2024-03-18T11:18:07.018989491Z\" level=info msg=\"loading plugin \\\"io.containerd.grpc.v1.tasks\\\"...\" type=io.containerd.grpc.v1"
error_spam_test.go:96: unexpected stderr: "Mar 18 11:18:07 nospam-852500 dockerd[664]: time=\"2024-03-18T11:18:07.019004790Z\" level=info msg=\"loading plugin \\\"io.containerd.transfer.v1.local\\\"...\" type=io.containerd.transfer.v1"
error_spam_test.go:96: unexpected stderr: "Mar 18 11:18:07 nospam-852500 dockerd[664]: time=\"2024-03-18T11:18:07.019025289Z\" level=info msg=\"loading plugin \\\"io.containerd.grpc.v1.transfer\\\"...\" type=io.containerd.grpc.v1"
error_spam_test.go:96: unexpected stderr: "Mar 18 11:18:07 nospam-852500 dockerd[664]: time=\"2024-03-18T11:18:07.019037688Z\" level=info msg=\"loading plugin \\\"io.containerd.grpc.v1.version\\\"...\" type=io.containerd.grpc.v1"
error_spam_test.go:96: unexpected stderr: "Mar 18 11:18:07 nospam-852500 dockerd[664]: time=\"2024-03-18T11:18:07.019048187Z\" level=info msg=\"loading plugin \\\"io.containerd.internal.v1.restart\\\"...\" type=io.containerd.internal.v1"
error_spam_test.go:96: unexpected stderr: "Mar 18 11:18:07 nospam-852500 dockerd[664]: time=\"2024-03-18T11:18:07.019092085Z\" level=info msg=\"loading plugin \\\"io.containerd.tracing.processor.v1.otlp\\\"...\" type=io.containerd.tracing.processor.v1"
error_spam_test.go:96: unexpected stderr: "Mar 18 11:18:07 nospam-852500 dockerd[664]: time=\"2024-03-18T11:18:07.019111384Z\" level=info msg=\"skip loading plugin \\\"io.containerd.tracing.processor.v1.otlp\\\"...\" error=\"no OpenTelemetry endpoint: skip plugin\" type=io.containerd.tracing.processor.v1"
error_spam_test.go:96: unexpected stderr: "Mar 18 11:18:07 nospam-852500 dockerd[664]: time=\"2024-03-18T11:18:07.019122283Z\" level=info msg=\"loading plugin \\\"io.containerd.internal.v1.tracing\\\"...\" type=io.containerd.internal.v1"
error_spam_test.go:96: unexpected stderr: "Mar 18 11:18:07 nospam-852500 dockerd[664]: time=\"2024-03-18T11:18:07.019135282Z\" level=info msg=\"skipping tracing processor initialization (no tracing plugin)\" error=\"no OpenTelemetry endpoint: skip plugin\""
error_spam_test.go:96: unexpected stderr: "Mar 18 11:18:07 nospam-852500 dockerd[664]: time=\"2024-03-18T11:18:07.019233376Z\" level=info msg=\"loading plugin \\\"io.containerd.grpc.v1.healthcheck\\\"...\" type=io.containerd.grpc.v1"
error_spam_test.go:96: unexpected stderr: "Mar 18 11:18:07 nospam-852500 dockerd[664]: time=\"2024-03-18T11:18:07.019301573Z\" level=info msg=\"loading plugin \\\"io.containerd.nri.v1.nri\\\"...\" type=io.containerd.nri.v1"
error_spam_test.go:96: unexpected stderr: "Mar 18 11:18:07 nospam-852500 dockerd[664]: time=\"2024-03-18T11:18:07.019316972Z\" level=info msg=\"NRI interface is disabled by configuration.\""
error_spam_test.go:96: unexpected stderr: "Mar 18 11:18:07 nospam-852500 dockerd[664]: time=\"2024-03-18T11:18:07.019545758Z\" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock"
error_spam_test.go:96: unexpected stderr: "Mar 18 11:18:07 nospam-852500 dockerd[664]: time=\"2024-03-18T11:18:07.019595456Z\" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc"
error_spam_test.go:96: unexpected stderr: "Mar 18 11:18:07 nospam-852500 dockerd[664]: time=\"2024-03-18T11:18:07.019669551Z\" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock"
error_spam_test.go:96: unexpected stderr: "Mar 18 11:18:07 nospam-852500 dockerd[664]: time=\"2024-03-18T11:18:07.019708949Z\" level=info msg=\"containerd successfully booted in 0.064783s\""
error_spam_test.go:96: unexpected stderr: "Mar 18 11:18:08 nospam-852500 dockerd[658]: time=\"2024-03-18T11:18:08.196576167Z\" level=info msg=\"[graphdriver] trying configured driver: overlay2\""
error_spam_test.go:96: unexpected stderr: "Mar 18 11:18:08 nospam-852500 dockerd[658]: time=\"2024-03-18T11:18:08.227032814Z\" level=info msg=\"Loading containers: start.\""
error_spam_test.go:96: unexpected stderr: "Mar 18 11:18:08 nospam-852500 dockerd[658]: time=\"2024-03-18T11:18:08.491026499Z\" level=info msg=\"Loading containers: done.\""
error_spam_test.go:96: unexpected stderr: "Mar 18 11:18:08 nospam-852500 dockerd[658]: time=\"2024-03-18T11:18:08.513029287Z\" level=info msg=\"Docker daemon\" commit=061aa95 containerd-snapshotter=false storage-driver=overlay2 version=25.0.4"
error_spam_test.go:96: unexpected stderr: "Mar 18 11:18:08 nospam-852500 dockerd[658]: time=\"2024-03-18T11:18:08.513217786Z\" level=info msg=\"Daemon has completed initialization\""
error_spam_test.go:96: unexpected stderr: "Mar 18 11:18:08 nospam-852500 dockerd[658]: time=\"2024-03-18T11:18:08.617084789Z\" level=info msg=\"API listen on /var/run/docker.sock\""
error_spam_test.go:96: unexpected stderr: "Mar 18 11:18:08 nospam-852500 dockerd[658]: time=\"2024-03-18T11:18:08.617176888Z\" level=info msg=\"API listen on [::]:2376\""
error_spam_test.go:96: unexpected stderr: "Mar 18 11:18:08 nospam-852500 systemd[1]: Started Docker Application Container Engine."
error_spam_test.go:96: unexpected stderr: "Mar 18 11:18:38 nospam-852500 systemd[1]: Stopping Docker Application Container Engine..."
error_spam_test.go:96: unexpected stderr: "Mar 18 11:18:38 nospam-852500 dockerd[658]: time=\"2024-03-18T11:18:38.125378834Z\" level=info msg=\"Processing signal 'terminated'\""
error_spam_test.go:96: unexpected stderr: "Mar 18 11:18:38 nospam-852500 dockerd[658]: time=\"2024-03-18T11:18:38.127307420Z\" level=info msg=\"stopping event stream following graceful shutdown\" error=\"<nil>\" module=libcontainerd namespace=moby"
error_spam_test.go:96: unexpected stderr: "Mar 18 11:18:38 nospam-852500 dockerd[658]: time=\"2024-03-18T11:18:38.127516430Z\" level=info msg=\"Daemon shutdown complete\""
error_spam_test.go:96: unexpected stderr: "Mar 18 11:18:38 nospam-852500 dockerd[658]: time=\"2024-03-18T11:18:38.127685037Z\" level=info msg=\"stopping healthcheck following graceful shutdown\" module=libcontainerd"
error_spam_test.go:96: unexpected stderr: "Mar 18 11:18:38 nospam-852500 dockerd[658]: time=\"2024-03-18T11:18:38.127844544Z\" level=info msg=\"stopping event stream following graceful shutdown\" error=\"context canceled\" module=libcontainerd namespace=plugins.moby"
error_spam_test.go:96: unexpected stderr: "Mar 18 11:18:39 nospam-852500 systemd[1]: docker.service: Deactivated successfully."
error_spam_test.go:96: unexpected stderr: "Mar 18 11:18:39 nospam-852500 systemd[1]: Stopped Docker Application Container Engine."
error_spam_test.go:96: unexpected stderr: "Mar 18 11:18:39 nospam-852500 systemd[1]: Starting Docker Application Container Engine..."
error_spam_test.go:96: unexpected stderr: "Mar 18 11:18:39 nospam-852500 dockerd[1006]: time=\"2024-03-18T11:18:39.208990122Z\" level=info msg=\"Starting up\""
error_spam_test.go:96: unexpected stderr: "Mar 18 11:19:39 nospam-852500 dockerd[1006]: failed to start daemon: failed to dial \"/run/containerd/containerd.sock\": failed to dial \"/run/containerd/containerd.sock\": context deadline exceeded"
error_spam_test.go:96: unexpected stderr: "Mar 18 11:19:39 nospam-852500 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE"
error_spam_test.go:96: unexpected stderr: "Mar 18 11:19:39 nospam-852500 systemd[1]: docker.service: Failed with result 'exit-code'."
error_spam_test.go:96: unexpected stderr: "Mar 18 11:19:39 nospam-852500 systemd[1]: Failed to start Docker Application Container Engine."
error_spam_test.go:96: unexpected stderr: "-- /stdout --"
error_spam_test.go:96: unexpected stderr: "* "
error_spam_test.go:96: unexpected stderr: "╭─────────────────────────────────────────────────────────────────────────────────────────────╮"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "│    * If the above advice does not help, please let us know:                                 │"
error_spam_test.go:96: unexpected stderr: "│      https://github.com/kubernetes/minikube/issues/new/choose                               │"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "╰─────────────────────────────────────────────────────────────────────────────────────────────╯"
error_spam_test.go:110: minikube stdout:
* [nospam-852500] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4046 Build 19045.4046
- KUBECONFIG=C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=C:\Users\jenkins.minikube3\minikube-integration\.minikube
- MINIKUBE_LOCATION=18429
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
* Using the hyperv driver based on user configuration
* Starting "nospam-852500" primary control-plane node in "nospam-852500" cluster
* Creating hyperv VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...

error_spam_test.go:111: minikube stderr:
W0318 11:16:11.070989    5420 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
stdout:

stderr:
Job for docker.service failed because the control process exited with error code.
See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.

sudo journalctl --no-pager -u docker:
-- stdout --
Mar 18 11:18:06 nospam-852500 systemd[1]: Starting Docker Application Container Engine...
Mar 18 11:18:06 nospam-852500 dockerd[658]: time="2024-03-18T11:18:06.923019847Z" level=info msg="Starting up"
Mar 18 11:18:06 nospam-852500 dockerd[658]: time="2024-03-18T11:18:06.923991887Z" level=info msg="containerd not running, starting managed containerd"
Mar 18 11:18:06 nospam-852500 dockerd[658]: time="2024-03-18T11:18:06.925689682Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=664
Mar 18 11:18:06 nospam-852500 dockerd[664]: time="2024-03-18T11:18:06.955947314Z" level=info msg="starting containerd" revision=dcf2847247e18caba8dce86522029642f60fe96b version=v1.7.14
Mar 18 11:18:06 nospam-852500 dockerd[664]: time="2024-03-18T11:18:06.987560862Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Mar 18 11:18:06 nospam-852500 dockerd[664]: time="2024-03-18T11:18:06.987639057Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Mar 18 11:18:06 nospam-852500 dockerd[664]: time="2024-03-18T11:18:06.987730652Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Mar 18 11:18:06 nospam-852500 dockerd[664]: time="2024-03-18T11:18:06.987850144Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Mar 18 11:18:06 nospam-852500 dockerd[664]: time="2024-03-18T11:18:06.987962737Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Mar 18 11:18:06 nospam-852500 dockerd[664]: time="2024-03-18T11:18:06.988067731Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Mar 18 11:18:06 nospam-852500 dockerd[664]: time="2024-03-18T11:18:06.988389611Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Mar 18 11:18:06 nospam-852500 dockerd[664]: time="2024-03-18T11:18:06.988488205Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Mar 18 11:18:06 nospam-852500 dockerd[664]: time="2024-03-18T11:18:06.988511903Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Mar 18 11:18:06 nospam-852500 dockerd[664]: time="2024-03-18T11:18:06.988523303Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Mar 18 11:18:06 nospam-852500 dockerd[664]: time="2024-03-18T11:18:06.988685793Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Mar 18 11:18:06 nospam-852500 dockerd[664]: time="2024-03-18T11:18:06.989037171Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Mar 18 11:18:06 nospam-852500 dockerd[664]: time="2024-03-18T11:18:06.991930392Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Mar 18 11:18:06 nospam-852500 dockerd[664]: time="2024-03-18T11:18:06.992113681Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Mar 18 11:18:06 nospam-852500 dockerd[664]: time="2024-03-18T11:18:06.992475559Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Mar 18 11:18:06 nospam-852500 dockerd[664]: time="2024-03-18T11:18:06.992574053Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Mar 18 11:18:06 nospam-852500 dockerd[664]: time="2024-03-18T11:18:06.992695345Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Mar 18 11:18:06 nospam-852500 dockerd[664]: time="2024-03-18T11:18:06.992839536Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Mar 18 11:18:06 nospam-852500 dockerd[664]: time="2024-03-18T11:18:06.992931731Z" level=info msg="metadata content store policy set" policy=shared
Mar 18 11:18:07 nospam-852500 dockerd[664]: time="2024-03-18T11:18:07.017416582Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Mar 18 11:18:07 nospam-852500 dockerd[664]: time="2024-03-18T11:18:07.017570173Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Mar 18 11:18:07 nospam-852500 dockerd[664]: time="2024-03-18T11:18:07.017651068Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Mar 18 11:18:07 nospam-852500 dockerd[664]: time="2024-03-18T11:18:07.017749262Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Mar 18 11:18:07 nospam-852500 dockerd[664]: time="2024-03-18T11:18:07.017771061Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Mar 18 11:18:07 nospam-852500 dockerd[664]: time="2024-03-18T11:18:07.017888554Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Mar 18 11:18:07 nospam-852500 dockerd[664]: time="2024-03-18T11:18:07.018418424Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Mar 18 11:18:07 nospam-852500 dockerd[664]: time="2024-03-18T11:18:07.018568115Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Mar 18 11:18:07 nospam-852500 dockerd[664]: time="2024-03-18T11:18:07.018670009Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Mar 18 11:18:07 nospam-852500 dockerd[664]: time="2024-03-18T11:18:07.018689708Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Mar 18 11:18:07 nospam-852500 dockerd[664]: time="2024-03-18T11:18:07.018710707Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Mar 18 11:18:07 nospam-852500 dockerd[664]: time="2024-03-18T11:18:07.018727006Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Mar 18 11:18:07 nospam-852500 dockerd[664]: time="2024-03-18T11:18:07.018740105Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Mar 18 11:18:07 nospam-852500 dockerd[664]: time="2024-03-18T11:18:07.018755904Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Mar 18 11:18:07 nospam-852500 dockerd[664]: time="2024-03-18T11:18:07.018772303Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Mar 18 11:18:07 nospam-852500 dockerd[664]: time="2024-03-18T11:18:07.018785102Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Mar 18 11:18:07 nospam-852500 dockerd[664]: time="2024-03-18T11:18:07.018796602Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Mar 18 11:18:07 nospam-852500 dockerd[664]: time="2024-03-18T11:18:07.018809401Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Mar 18 11:18:07 nospam-852500 dockerd[664]: time="2024-03-18T11:18:07.018831600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Mar 18 11:18:07 nospam-852500 dockerd[664]: time="2024-03-18T11:18:07.018847299Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Mar 18 11:18:07 nospam-852500 dockerd[664]: time="2024-03-18T11:18:07.018859098Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Mar 18 11:18:07 nospam-852500 dockerd[664]: time="2024-03-18T11:18:07.018873597Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Mar 18 11:18:07 nospam-852500 dockerd[664]: time="2024-03-18T11:18:07.018885397Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Mar 18 11:18:07 nospam-852500 dockerd[664]: time="2024-03-18T11:18:07.018899996Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Mar 18 11:18:07 nospam-852500 dockerd[664]: time="2024-03-18T11:18:07.018911995Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Mar 18 11:18:07 nospam-852500 dockerd[664]: time="2024-03-18T11:18:07.018923994Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Mar 18 11:18:07 nospam-852500 dockerd[664]: time="2024-03-18T11:18:07.018936494Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Mar 18 11:18:07 nospam-852500 dockerd[664]: time="2024-03-18T11:18:07.018953593Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Mar 18 11:18:07 nospam-852500 dockerd[664]: time="2024-03-18T11:18:07.018965492Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Mar 18 11:18:07 nospam-852500 dockerd[664]: time="2024-03-18T11:18:07.018976491Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Mar 18 11:18:07 nospam-852500 dockerd[664]: time="2024-03-18T11:18:07.018989491Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Mar 18 11:18:07 nospam-852500 dockerd[664]: time="2024-03-18T11:18:07.019004790Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Mar 18 11:18:07 nospam-852500 dockerd[664]: time="2024-03-18T11:18:07.019025289Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Mar 18 11:18:07 nospam-852500 dockerd[664]: time="2024-03-18T11:18:07.019037688Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Mar 18 11:18:07 nospam-852500 dockerd[664]: time="2024-03-18T11:18:07.019048187Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Mar 18 11:18:07 nospam-852500 dockerd[664]: time="2024-03-18T11:18:07.019092085Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Mar 18 11:18:07 nospam-852500 dockerd[664]: time="2024-03-18T11:18:07.019111384Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
Mar 18 11:18:07 nospam-852500 dockerd[664]: time="2024-03-18T11:18:07.019122283Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Mar 18 11:18:07 nospam-852500 dockerd[664]: time="2024-03-18T11:18:07.019135282Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
Mar 18 11:18:07 nospam-852500 dockerd[664]: time="2024-03-18T11:18:07.019233376Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Mar 18 11:18:07 nospam-852500 dockerd[664]: time="2024-03-18T11:18:07.019301573Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Mar 18 11:18:07 nospam-852500 dockerd[664]: time="2024-03-18T11:18:07.019316972Z" level=info msg="NRI interface is disabled by configuration."
Mar 18 11:18:07 nospam-852500 dockerd[664]: time="2024-03-18T11:18:07.019545758Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
Mar 18 11:18:07 nospam-852500 dockerd[664]: time="2024-03-18T11:18:07.019595456Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
Mar 18 11:18:07 nospam-852500 dockerd[664]: time="2024-03-18T11:18:07.019669551Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
Mar 18 11:18:07 nospam-852500 dockerd[664]: time="2024-03-18T11:18:07.019708949Z" level=info msg="containerd successfully booted in 0.064783s"
Mar 18 11:18:08 nospam-852500 dockerd[658]: time="2024-03-18T11:18:08.196576167Z" level=info msg="[graphdriver] trying configured driver: overlay2"
Mar 18 11:18:08 nospam-852500 dockerd[658]: time="2024-03-18T11:18:08.227032814Z" level=info msg="Loading containers: start."
Mar 18 11:18:08 nospam-852500 dockerd[658]: time="2024-03-18T11:18:08.491026499Z" level=info msg="Loading containers: done."
Mar 18 11:18:08 nospam-852500 dockerd[658]: time="2024-03-18T11:18:08.513029287Z" level=info msg="Docker daemon" commit=061aa95 containerd-snapshotter=false storage-driver=overlay2 version=25.0.4
Mar 18 11:18:08 nospam-852500 dockerd[658]: time="2024-03-18T11:18:08.513217786Z" level=info msg="Daemon has completed initialization"
Mar 18 11:18:08 nospam-852500 dockerd[658]: time="2024-03-18T11:18:08.617084789Z" level=info msg="API listen on /var/run/docker.sock"
Mar 18 11:18:08 nospam-852500 dockerd[658]: time="2024-03-18T11:18:08.617176888Z" level=info msg="API listen on [::]:2376"
Mar 18 11:18:08 nospam-852500 systemd[1]: Started Docker Application Container Engine.
Mar 18 11:18:38 nospam-852500 systemd[1]: Stopping Docker Application Container Engine...
Mar 18 11:18:38 nospam-852500 dockerd[658]: time="2024-03-18T11:18:38.125378834Z" level=info msg="Processing signal 'terminated'"
Mar 18 11:18:38 nospam-852500 dockerd[658]: time="2024-03-18T11:18:38.127307420Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
Mar 18 11:18:38 nospam-852500 dockerd[658]: time="2024-03-18T11:18:38.127516430Z" level=info msg="Daemon shutdown complete"
Mar 18 11:18:38 nospam-852500 dockerd[658]: time="2024-03-18T11:18:38.127685037Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
Mar 18 11:18:38 nospam-852500 dockerd[658]: time="2024-03-18T11:18:38.127844544Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
Mar 18 11:18:39 nospam-852500 systemd[1]: docker.service: Deactivated successfully.
Mar 18 11:18:39 nospam-852500 systemd[1]: Stopped Docker Application Container Engine.
Mar 18 11:18:39 nospam-852500 systemd[1]: Starting Docker Application Container Engine...
Mar 18 11:18:39 nospam-852500 dockerd[1006]: time="2024-03-18T11:18:39.208990122Z" level=info msg="Starting up"
Mar 18 11:19:39 nospam-852500 dockerd[1006]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
Mar 18 11:19:39 nospam-852500 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
Mar 18 11:19:39 nospam-852500 systemd[1]: docker.service: Failed with result 'exit-code'.
Mar 18 11:19:39 nospam-852500 systemd[1]: Failed to start Docker Application Container Engine.

                                                
                                                
-- /stdout --
* 
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
error_spam_test.go:121: missing kubeadm init sub-step "Generating certificates and keys ..."
error_spam_test.go:121: missing kubeadm init sub-step "Booting up control plane ..."
error_spam_test.go:121: missing kubeadm init sub-step "Configuring RBAC rules ..."
--- FAIL: TestErrorSpam/setup (208.40s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (31.51s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:731: link out/minikube-windows-amd64.exe out\kubectl.exe: Cannot create a file when that file already exists.
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-611000 -n functional-611000
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-611000 -n functional-611000: (11.3925556s)
helpers_test.go:244: <<< TestFunctional/serial/MinikubeKubectlCmdDirectly FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/serial/MinikubeKubectlCmdDirectly]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-611000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p functional-611000 logs -n 25: (7.9645861s)
helpers_test.go:252: TestFunctional/serial/MinikubeKubectlCmdDirectly logs: 
-- stdout --
	
	==> Audit <==
	|---------|-------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| Command |                            Args                             |      Profile      |       User        | Version |     Start Time      |      End Time       |
	|---------|-------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| pause   | nospam-852500 --log_dir                                     | nospam-852500     | minikube3\jenkins | v1.32.0 | 18 Mar 24 11:20 UTC |                     |
	|         | C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-852500 |                   |                   |         |                     |                     |
	|         | pause                                                       |                   |                   |         |                     |                     |
	| unpause | nospam-852500 --log_dir                                     | nospam-852500     | minikube3\jenkins | v1.32.0 | 18 Mar 24 11:20 UTC |                     |
	|         | C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-852500 |                   |                   |         |                     |                     |
	|         | unpause                                                     |                   |                   |         |                     |                     |
	| unpause | nospam-852500 --log_dir                                     | nospam-852500     | minikube3\jenkins | v1.32.0 | 18 Mar 24 11:21 UTC |                     |
	|         | C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-852500 |                   |                   |         |                     |                     |
	|         | unpause                                                     |                   |                   |         |                     |                     |
	| unpause | nospam-852500 --log_dir                                     | nospam-852500     | minikube3\jenkins | v1.32.0 | 18 Mar 24 11:22 UTC |                     |
	|         | C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-852500 |                   |                   |         |                     |                     |
	|         | unpause                                                     |                   |                   |         |                     |                     |
	| stop    | nospam-852500 --log_dir                                     | nospam-852500     | minikube3\jenkins | v1.32.0 | 18 Mar 24 11:23 UTC | 18 Mar 24 11:24 UTC |
	|         | C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-852500 |                   |                   |         |                     |                     |
	|         | stop                                                        |                   |                   |         |                     |                     |
	| stop    | nospam-852500 --log_dir                                     | nospam-852500     | minikube3\jenkins | v1.32.0 | 18 Mar 24 11:24 UTC | 18 Mar 24 11:24 UTC |
	|         | C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-852500 |                   |                   |         |                     |                     |
	|         | stop                                                        |                   |                   |         |                     |                     |
	| stop    | nospam-852500 --log_dir                                     | nospam-852500     | minikube3\jenkins | v1.32.0 | 18 Mar 24 11:24 UTC | 18 Mar 24 11:25 UTC |
	|         | C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-852500 |                   |                   |         |                     |                     |
	|         | stop                                                        |                   |                   |         |                     |                     |
	| delete  | -p nospam-852500                                            | nospam-852500     | minikube3\jenkins | v1.32.0 | 18 Mar 24 11:25 UTC | 18 Mar 24 11:25 UTC |
	| start   | -p functional-611000                                        | functional-611000 | minikube3\jenkins | v1.32.0 | 18 Mar 24 11:25 UTC | 18 Mar 24 11:28 UTC |
	|         | --memory=4000                                               |                   |                   |         |                     |                     |
	|         | --apiserver-port=8441                                       |                   |                   |         |                     |                     |
	|         | --wait=all --driver=hyperv                                  |                   |                   |         |                     |                     |
	| start   | -p functional-611000                                        | functional-611000 | minikube3\jenkins | v1.32.0 | 18 Mar 24 11:28 UTC | 18 Mar 24 11:30 UTC |
	|         | --alsologtostderr -v=8                                      |                   |                   |         |                     |                     |
	| cache   | functional-611000 cache add                                 | functional-611000 | minikube3\jenkins | v1.32.0 | 18 Mar 24 11:30 UTC | 18 Mar 24 11:31 UTC |
	|         | registry.k8s.io/pause:3.1                                   |                   |                   |         |                     |                     |
	| cache   | functional-611000 cache add                                 | functional-611000 | minikube3\jenkins | v1.32.0 | 18 Mar 24 11:31 UTC | 18 Mar 24 11:31 UTC |
	|         | registry.k8s.io/pause:3.3                                   |                   |                   |         |                     |                     |
	| cache   | functional-611000 cache add                                 | functional-611000 | minikube3\jenkins | v1.32.0 | 18 Mar 24 11:31 UTC | 18 Mar 24 11:31 UTC |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| cache   | functional-611000 cache add                                 | functional-611000 | minikube3\jenkins | v1.32.0 | 18 Mar 24 11:31 UTC | 18 Mar 24 11:31 UTC |
	|         | minikube-local-cache-test:functional-611000                 |                   |                   |         |                     |                     |
	| cache   | functional-611000 cache delete                              | functional-611000 | minikube3\jenkins | v1.32.0 | 18 Mar 24 11:31 UTC | 18 Mar 24 11:31 UTC |
	|         | minikube-local-cache-test:functional-611000                 |                   |                   |         |                     |                     |
	| cache   | delete                                                      | minikube          | minikube3\jenkins | v1.32.0 | 18 Mar 24 11:31 UTC | 18 Mar 24 11:31 UTC |
	|         | registry.k8s.io/pause:3.3                                   |                   |                   |         |                     |                     |
	| cache   | list                                                        | minikube          | minikube3\jenkins | v1.32.0 | 18 Mar 24 11:31 UTC | 18 Mar 24 11:31 UTC |
	| ssh     | functional-611000 ssh sudo                                  | functional-611000 | minikube3\jenkins | v1.32.0 | 18 Mar 24 11:31 UTC | 18 Mar 24 11:31 UTC |
	|         | crictl images                                               |                   |                   |         |                     |                     |
	| ssh     | functional-611000                                           | functional-611000 | minikube3\jenkins | v1.32.0 | 18 Mar 24 11:31 UTC | 18 Mar 24 11:31 UTC |
	|         | ssh sudo docker rmi                                         |                   |                   |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| ssh     | functional-611000 ssh                                       | functional-611000 | minikube3\jenkins | v1.32.0 | 18 Mar 24 11:31 UTC |                     |
	|         | sudo crictl inspecti                                        |                   |                   |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| cache   | functional-611000 cache reload                              | functional-611000 | minikube3\jenkins | v1.32.0 | 18 Mar 24 11:31 UTC | 18 Mar 24 11:32 UTC |
	| ssh     | functional-611000 ssh                                       | functional-611000 | minikube3\jenkins | v1.32.0 | 18 Mar 24 11:32 UTC | 18 Mar 24 11:32 UTC |
	|         | sudo crictl inspecti                                        |                   |                   |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| cache   | delete                                                      | minikube          | minikube3\jenkins | v1.32.0 | 18 Mar 24 11:32 UTC | 18 Mar 24 11:32 UTC |
	|         | registry.k8s.io/pause:3.1                                   |                   |                   |         |                     |                     |
	| cache   | delete                                                      | minikube          | minikube3\jenkins | v1.32.0 | 18 Mar 24 11:32 UTC | 18 Mar 24 11:32 UTC |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| kubectl | functional-611000 kubectl --                                | functional-611000 | minikube3\jenkins | v1.32.0 | 18 Mar 24 11:32 UTC | 18 Mar 24 11:32 UTC |
	|         | --context functional-611000                                 |                   |                   |         |                     |                     |
	|         | get pods                                                    |                   |                   |         |                     |                     |
	|---------|-------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/18 11:28:49
	Running on machine: minikube3
	Binary: Built with gc go1.22.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0318 11:28:49.086862    4320 out.go:291] Setting OutFile to fd 940 ...
	I0318 11:28:49.088179    4320 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 11:28:49.088321    4320 out.go:304] Setting ErrFile to fd 704...
	I0318 11:28:49.088485    4320 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 11:28:49.113944    4320 out.go:298] Setting JSON to false
	I0318 11:28:49.116367    4320 start.go:129] hostinfo: {"hostname":"minikube3","uptime":309906,"bootTime":1710451423,"procs":192,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4046 Build 19045.4046","kernelVersion":"10.0.19045.4046 Build 19045.4046","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"a0f355d5-8b6e-4346-9071-73232725d096"}
	W0318 11:28:49.116367    4320 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0318 11:28:49.120633    4320 out.go:177] * [functional-611000] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4046 Build 19045.4046
	I0318 11:28:49.121345    4320 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	I0318 11:28:49.121345    4320 notify.go:220] Checking for updates...
	I0318 11:28:49.125908    4320 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 11:28:49.126758    4320 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube3\minikube-integration\.minikube
	I0318 11:28:49.129345    4320 out.go:177]   - MINIKUBE_LOCATION=18429
	I0318 11:28:49.136522    4320 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 11:28:49.140740    4320 config.go:182] Loaded profile config "functional-611000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 11:28:49.140740    4320 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 11:28:54.254017    4320 out.go:177] * Using the hyperv driver based on existing profile
	I0318 11:28:54.258265    4320 start.go:297] selected driver: hyperv
	I0318 11:28:54.258383    4320 start.go:901] validating driver "hyperv" against &{Name:functional-611000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-611000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.30.129.196 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 11:28:54.258383    4320 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 11:28:54.308768    4320 cni.go:84] Creating CNI manager for ""
	I0318 11:28:54.308768    4320 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0318 11:28:54.308768    4320 start.go:340] cluster config:
	{Name:functional-611000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-611000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.30.129.196 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 11:28:54.308768    4320 iso.go:125] acquiring lock: {Name:mkefa7a887b9de67c22e0418da52288a5b488924 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 11:28:54.312314    4320 out.go:177] * Starting "functional-611000" primary control-plane node in "functional-611000" cluster
	I0318 11:28:54.315694    4320 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0318 11:28:54.315694    4320 preload.go:147] Found local preload: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I0318 11:28:54.315694    4320 cache.go:56] Caching tarball of preloaded images
	I0318 11:28:54.316431    4320 preload.go:173] Found C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0318 11:28:54.316431    4320 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0318 11:28:54.317032    4320 profile.go:142] Saving config to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-611000\config.json ...
	I0318 11:28:54.320360    4320 start.go:360] acquireMachinesLock for functional-611000: {Name:mk88ace50ad3bf72786f3a589a5328076247f3a1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 11:28:54.320360    4320 start.go:364] duration metric: took 0s to acquireMachinesLock for "functional-611000"
	I0318 11:28:54.320360    4320 start.go:96] Skipping create...Using existing machine configuration
	I0318 11:28:54.320360    4320 fix.go:54] fixHost starting: 
	I0318 11:28:54.321241    4320 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-611000 ).state
	I0318 11:28:56.956272    4320 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:28:56.962400    4320 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:28:56.962536    4320 fix.go:112] recreateIfNeeded on functional-611000: state=Running err=<nil>
	W0318 11:28:56.962604    4320 fix.go:138] unexpected machine state, will restart: <nil>
	I0318 11:28:56.966486    4320 out.go:177] * Updating the running hyperv "functional-611000" VM ...
	I0318 11:28:56.969038    4320 machine.go:94] provisionDockerMachine start ...
	I0318 11:28:56.969038    4320 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-611000 ).state
	I0318 11:28:59.039593    4320 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:28:59.039593    4320 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:28:59.039593    4320 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-611000 ).networkadapters[0]).ipaddresses[0]
	I0318 11:29:01.484132    4320 main.go:141] libmachine: [stdout =====>] : 172.30.129.196
	
	I0318 11:29:01.494046    4320 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:29:01.500060    4320 main.go:141] libmachine: Using SSH client type: native
	I0318 11:29:01.500186    4320 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa89f80] 0xa8cb60 <nil>  [] 0s} 172.30.129.196 22 <nil> <nil>}
	I0318 11:29:01.500186    4320 main.go:141] libmachine: About to run SSH command:
	hostname
	I0318 11:29:01.626014    4320 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-611000
	
	I0318 11:29:01.626014    4320 buildroot.go:166] provisioning hostname "functional-611000"
	I0318 11:29:01.626014    4320 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-611000 ).state
	I0318 11:29:03.729280    4320 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:29:03.729280    4320 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:29:03.733633    4320 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-611000 ).networkadapters[0]).ipaddresses[0]
	I0318 11:29:06.182164    4320 main.go:141] libmachine: [stdout =====>] : 172.30.129.196
	
	I0318 11:29:06.182164    4320 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:29:06.199488    4320 main.go:141] libmachine: Using SSH client type: native
	I0318 11:29:06.199660    4320 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa89f80] 0xa8cb60 <nil>  [] 0s} 172.30.129.196 22 <nil> <nil>}
	I0318 11:29:06.199660    4320 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-611000 && echo "functional-611000" | sudo tee /etc/hostname
	I0318 11:29:06.357448    4320 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-611000
	
	I0318 11:29:06.357448    4320 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-611000 ).state
	I0318 11:29:08.409882    4320 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:29:08.409882    4320 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:29:08.416337    4320 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-611000 ).networkadapters[0]).ipaddresses[0]
	I0318 11:29:10.836172    4320 main.go:141] libmachine: [stdout =====>] : 172.30.129.196
	
	I0318 11:29:10.836172    4320 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:29:10.850098    4320 main.go:141] libmachine: Using SSH client type: native
	I0318 11:29:10.850727    4320 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa89f80] 0xa8cb60 <nil>  [] 0s} 172.30.129.196 22 <nil> <nil>}
	I0318 11:29:10.850727    4320 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-611000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-611000/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-611000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0318 11:29:10.980398    4320 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0318 11:29:10.980398    4320 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube3\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube3\minikube-integration\.minikube}
	I0318 11:29:10.980398    4320 buildroot.go:174] setting up certificates
	I0318 11:29:10.980398    4320 provision.go:84] configureAuth start
	I0318 11:29:10.980398    4320 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-611000 ).state
	I0318 11:29:13.027587    4320 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:29:13.027655    4320 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:29:13.027655    4320 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-611000 ).networkadapters[0]).ipaddresses[0]
	I0318 11:29:15.471130    4320 main.go:141] libmachine: [stdout =====>] : 172.30.129.196
	
	I0318 11:29:15.471404    4320 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:29:15.471615    4320 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-611000 ).state
	I0318 11:29:17.523022    4320 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:29:17.523022    4320 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:29:17.534207    4320 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-611000 ).networkadapters[0]).ipaddresses[0]
	I0318 11:29:19.995775    4320 main.go:141] libmachine: [stdout =====>] : 172.30.129.196
	
	I0318 11:29:19.995775    4320 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:29:19.995908    4320 provision.go:143] copyHostCerts
	I0318 11:29:19.996252    4320 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube3\minikube-integration\.minikube/ca.pem
	I0318 11:29:19.996525    4320 exec_runner.go:144] found C:\Users\jenkins.minikube3\minikube-integration\.minikube/ca.pem, removing ...
	I0318 11:29:19.996525    4320 exec_runner.go:203] rm: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.pem
	I0318 11:29:19.996973    4320 exec_runner.go:151] cp: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube3\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0318 11:29:19.998409    4320 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube3\minikube-integration\.minikube/cert.pem
	I0318 11:29:19.998748    4320 exec_runner.go:144] found C:\Users\jenkins.minikube3\minikube-integration\.minikube/cert.pem, removing ...
	I0318 11:29:19.998840    4320 exec_runner.go:203] rm: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cert.pem
	I0318 11:29:19.998919    4320 exec_runner.go:151] cp: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube3\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0318 11:29:20.000241    4320 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube3\minikube-integration\.minikube/key.pem
	I0318 11:29:20.000465    4320 exec_runner.go:144] found C:\Users\jenkins.minikube3\minikube-integration\.minikube/key.pem, removing ...
	I0318 11:29:20.000465    4320 exec_runner.go:203] rm: C:\Users\jenkins.minikube3\minikube-integration\.minikube\key.pem
	I0318 11:29:20.000465    4320 exec_runner.go:151] cp: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube3\minikube-integration\.minikube/key.pem (1679 bytes)
	I0318 11:29:20.001381    4320 provision.go:117] generating server cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.functional-611000 san=[127.0.0.1 172.30.129.196 functional-611000 localhost minikube]
	I0318 11:29:20.296352    4320 provision.go:177] copyRemoteCerts
	I0318 11:29:20.314161    4320 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0318 11:29:20.314244    4320 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-611000 ).state
	I0318 11:29:22.369372    4320 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:29:22.369416    4320 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:29:22.369416    4320 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-611000 ).networkadapters[0]).ipaddresses[0]
	I0318 11:29:24.794929    4320 main.go:141] libmachine: [stdout =====>] : 172.30.129.196
	
	I0318 11:29:24.794929    4320 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:29:24.803058    4320 sshutil.go:53] new ssh client: &{IP:172.30.129.196 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\functional-611000\id_rsa Username:docker}
	I0318 11:29:24.900366    4320 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.5861207s)
	I0318 11:29:24.900470    4320 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0318 11:29:24.900691    4320 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0318 11:29:24.943015    4320 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0318 11:29:24.943428    4320 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I0318 11:29:24.987736    4320 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0318 11:29:24.987736    4320 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0318 11:29:25.034592    4320 provision.go:87] duration metric: took 14.0540215s to configureAuth
	I0318 11:29:25.034621    4320 buildroot.go:189] setting minikube options for container-runtime
	I0318 11:29:25.034621    4320 config.go:182] Loaded profile config "functional-611000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 11:29:25.034621    4320 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-611000 ).state
	I0318 11:29:27.077596    4320 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:29:27.077596    4320 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:29:27.089028    4320 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-611000 ).networkadapters[0]).ipaddresses[0]
	I0318 11:29:29.541240    4320 main.go:141] libmachine: [stdout =====>] : 172.30.129.196
	
	I0318 11:29:29.541240    4320 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:29:29.558405    4320 main.go:141] libmachine: Using SSH client type: native
	I0318 11:29:29.559813    4320 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa89f80] 0xa8cb60 <nil>  [] 0s} 172.30.129.196 22 <nil> <nil>}
	I0318 11:29:29.559813    4320 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0318 11:29:29.691117    4320 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0318 11:29:29.691191    4320 buildroot.go:70] root file system type: tmpfs
	I0318 11:29:29.691472    4320 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0318 11:29:29.691558    4320 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-611000 ).state
	I0318 11:29:31.724768    4320 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:29:31.731979    4320 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:29:31.732021    4320 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-611000 ).networkadapters[0]).ipaddresses[0]
	I0318 11:29:34.185961    4320 main.go:141] libmachine: [stdout =====>] : 172.30.129.196
	
	I0318 11:29:34.185961    4320 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:29:34.201806    4320 main.go:141] libmachine: Using SSH client type: native
	I0318 11:29:34.202043    4320 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa89f80] 0xa8cb60 <nil>  [] 0s} 172.30.129.196 22 <nil> <nil>}
	I0318 11:29:34.202043    4320 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0318 11:29:34.359297    4320 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0318 11:29:34.359297    4320 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-611000 ).state
	I0318 11:29:36.413544    4320 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:29:36.424706    4320 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:29:36.424706    4320 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-611000 ).networkadapters[0]).ipaddresses[0]
	I0318 11:29:38.850985    4320 main.go:141] libmachine: [stdout =====>] : 172.30.129.196
	
	I0318 11:29:38.861567    4320 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:29:38.870185    4320 main.go:141] libmachine: Using SSH client type: native
	I0318 11:29:38.870709    4320 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa89f80] 0xa8cb60 <nil>  [] 0s} 172.30.129.196 22 <nil> <nil>}
	I0318 11:29:38.870709    4320 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0318 11:29:39.006845    4320 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0318 11:29:39.006942    4320 machine.go:97] duration metric: took 42.0375933s to provisionDockerMachine
	I0318 11:29:39.006991    4320 start.go:293] postStartSetup for "functional-611000" (driver="hyperv")
	I0318 11:29:39.006991    4320 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0318 11:29:39.019988    4320 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0318 11:29:39.019988    4320 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-611000 ).state
	I0318 11:29:41.052177    4320 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:29:41.052177    4320 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:29:41.064257    4320 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-611000 ).networkadapters[0]).ipaddresses[0]
	I0318 11:29:43.496441    4320 main.go:141] libmachine: [stdout =====>] : 172.30.129.196
	
	I0318 11:29:43.496441    4320 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:29:43.505381    4320 sshutil.go:53] new ssh client: &{IP:172.30.129.196 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\functional-611000\id_rsa Username:docker}
	I0318 11:29:43.613193    4320 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.593124s)
	I0318 11:29:43.624189    4320 ssh_runner.go:195] Run: cat /etc/os-release
	I0318 11:29:43.627441    4320 command_runner.go:130] > NAME=Buildroot
	I0318 11:29:43.627441    4320 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0318 11:29:43.627441    4320 command_runner.go:130] > ID=buildroot
	I0318 11:29:43.627441    4320 command_runner.go:130] > VERSION_ID=2023.02.9
	I0318 11:29:43.627441    4320 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0318 11:29:43.627441    4320 info.go:137] Remote host: Buildroot 2023.02.9
	I0318 11:29:43.627441    4320 filesync.go:126] Scanning C:\Users\jenkins.minikube3\minikube-integration\.minikube\addons for local assets ...
	I0318 11:29:43.633158    4320 filesync.go:126] Scanning C:\Users\jenkins.minikube3\minikube-integration\.minikube\files for local assets ...
	I0318 11:29:43.633701    4320 filesync.go:149] local asset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\134242.pem -> 134242.pem in /etc/ssl/certs
	I0318 11:29:43.633701    4320 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\134242.pem -> /etc/ssl/certs/134242.pem
	I0318 11:29:43.634624    4320 filesync.go:149] local asset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\test\nested\copy\13424\hosts -> hosts in /etc/test/nested/copy/13424
	I0318 11:29:43.634624    4320 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\test\nested\copy\13424\hosts -> /etc/test/nested/copy/13424/hosts
	I0318 11:29:43.648121    4320 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/13424
	I0318 11:29:43.665487    4320 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\134242.pem --> /etc/ssl/certs/134242.pem (1708 bytes)
	I0318 11:29:43.709307    4320 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\test\nested\copy\13424\hosts --> /etc/test/nested/copy/13424/hosts (40 bytes)
	I0318 11:29:43.754344    4320 start.go:296] duration metric: took 4.7473177s for postStartSetup
	I0318 11:29:43.754344    4320 fix.go:56] duration metric: took 49.4336176s for fixHost
	I0318 11:29:43.754344    4320 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-611000 ).state
	I0318 11:29:45.780450    4320 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:29:45.780450    4320 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:29:45.780450    4320 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-611000 ).networkadapters[0]).ipaddresses[0]
	I0318 11:29:48.244779    4320 main.go:141] libmachine: [stdout =====>] : 172.30.129.196
	
	I0318 11:29:48.255853    4320 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:29:48.262351    4320 main.go:141] libmachine: Using SSH client type: native
	I0318 11:29:48.262968    4320 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa89f80] 0xa8cb60 <nil>  [] 0s} 172.30.129.196 22 <nil> <nil>}
	I0318 11:29:48.262968    4320 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0318 11:29:48.392310    4320 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710761388.388916741
	
	I0318 11:29:48.392390    4320 fix.go:216] guest clock: 1710761388.388916741
	I0318 11:29:48.392390    4320 fix.go:229] Guest: 2024-03-18 11:29:48.388916741 +0000 UTC Remote: 2024-03-18 11:29:43.7543441 +0000 UTC m=+54.844652601 (delta=4.634572641s)
	I0318 11:29:48.392390    4320 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-611000 ).state
	I0318 11:29:50.416994    4320 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:29:50.429007    4320 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:29:50.429202    4320 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-611000 ).networkadapters[0]).ipaddresses[0]
	I0318 11:29:52.884375    4320 main.go:141] libmachine: [stdout =====>] : 172.30.129.196
	
	I0318 11:29:52.884375    4320 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:29:52.898615    4320 main.go:141] libmachine: Using SSH client type: native
	I0318 11:29:52.899227    4320 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa89f80] 0xa8cb60 <nil>  [] 0s} 172.30.129.196 22 <nil> <nil>}
	I0318 11:29:52.899227    4320 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1710761388
	I0318 11:29:53.042670    4320 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Mar 18 11:29:48 UTC 2024
	
	I0318 11:29:53.042670    4320 fix.go:236] clock set: Mon Mar 18 11:29:48 UTC 2024
	 (err=<nil>)
	I0318 11:29:53.042670    4320 start.go:83] releasing machines lock for "functional-611000", held for 58.7218757s
	I0318 11:29:53.043381    4320 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-611000 ).state
	I0318 11:29:55.107649    4320 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:29:55.107649    4320 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:29:55.107649    4320 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-611000 ).networkadapters[0]).ipaddresses[0]
	I0318 11:29:57.535274    4320 main.go:141] libmachine: [stdout =====>] : 172.30.129.196
	
	I0318 11:29:57.535274    4320 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:29:57.549893    4320 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0318 11:29:57.549893    4320 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-611000 ).state
	I0318 11:29:57.561763    4320 ssh_runner.go:195] Run: cat /version.json
	I0318 11:29:57.561763    4320 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-611000 ).state
	I0318 11:29:59.673056    4320 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:29:59.678849    4320 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:29:59.678849    4320 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-611000 ).networkadapters[0]).ipaddresses[0]
	I0318 11:29:59.684063    4320 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:29:59.684397    4320 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:29:59.684397    4320 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-611000 ).networkadapters[0]).ipaddresses[0]
	I0318 11:30:02.271442    4320 main.go:141] libmachine: [stdout =====>] : 172.30.129.196
	
	I0318 11:30:02.271493    4320 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:30:02.271623    4320 sshutil.go:53] new ssh client: &{IP:172.30.129.196 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\functional-611000\id_rsa Username:docker}
	I0318 11:30:02.297141    4320 main.go:141] libmachine: [stdout =====>] : 172.30.129.196
	
	I0318 11:30:02.297141    4320 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:30:02.297141    4320 sshutil.go:53] new ssh client: &{IP:172.30.129.196 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\functional-611000\id_rsa Username:docker}
	I0318 11:30:02.360382    4320 command_runner.go:130] > {"iso_version": "v1.32.1-1710520390-17991", "kicbase_version": "v0.0.42-1710284843-18375", "minikube_version": "v1.32.0", "commit": "3dd306d082737a9ddf335108b42c9fcb2ad84298"}
	I0318 11:30:02.360382    4320 ssh_runner.go:235] Completed: cat /version.json: (4.7985841s)
	I0318 11:30:02.377628    4320 ssh_runner.go:195] Run: systemctl --version
	I0318 11:30:02.438323    4320 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0318 11:30:02.439342    4320 command_runner.go:130] > systemd 252 (252)
	I0318 11:30:02.439385    4320 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0318 11:30:02.439385    4320 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.8893161s)
	I0318 11:30:02.449454    4320 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0318 11:30:02.453887    4320 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0318 11:30:02.459110    4320 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0318 11:30:02.473017    4320 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0318 11:30:02.491780    4320 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0318 11:30:02.491780    4320 start.go:494] detecting cgroup driver to use...
	I0318 11:30:02.492471    4320 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0318 11:30:02.529627    4320 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0318 11:30:02.544817    4320 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0318 11:30:02.574876    4320 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0318 11:30:02.592551    4320 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0318 11:30:02.608445    4320 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0318 11:30:02.641194    4320 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0318 11:30:02.672524    4320 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0318 11:30:02.708150    4320 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0318 11:30:02.744794    4320 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0318 11:30:02.775825    4320 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0318 11:30:02.808345    4320 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0318 11:30:02.832789    4320 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0318 11:30:02.846551    4320 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0318 11:30:02.875812    4320 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 11:30:03.123143    4320 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0318 11:30:03.155313    4320 start.go:494] detecting cgroup driver to use...
	I0318 11:30:03.167356    4320 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0318 11:30:03.193344    4320 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0318 11:30:03.193413    4320 command_runner.go:130] > [Unit]
	I0318 11:30:03.193413    4320 command_runner.go:130] > Description=Docker Application Container Engine
	I0318 11:30:03.193413    4320 command_runner.go:130] > Documentation=https://docs.docker.com
	I0318 11:30:03.193413    4320 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0318 11:30:03.193413    4320 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0318 11:30:03.193413    4320 command_runner.go:130] > StartLimitBurst=3
	I0318 11:30:03.193413    4320 command_runner.go:130] > StartLimitIntervalSec=60
	I0318 11:30:03.193413    4320 command_runner.go:130] > [Service]
	I0318 11:30:03.193413    4320 command_runner.go:130] > Type=notify
	I0318 11:30:03.193413    4320 command_runner.go:130] > Restart=on-failure
	I0318 11:30:03.193413    4320 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0318 11:30:03.193413    4320 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0318 11:30:03.193413    4320 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0318 11:30:03.193413    4320 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0318 11:30:03.193413    4320 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0318 11:30:03.193413    4320 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0318 11:30:03.193413    4320 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0318 11:30:03.193413    4320 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0318 11:30:03.193413    4320 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0318 11:30:03.193413    4320 command_runner.go:130] > ExecStart=
	I0318 11:30:03.193413    4320 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0318 11:30:03.193413    4320 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0318 11:30:03.193413    4320 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0318 11:30:03.193413    4320 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0318 11:30:03.193413    4320 command_runner.go:130] > LimitNOFILE=infinity
	I0318 11:30:03.193413    4320 command_runner.go:130] > LimitNPROC=infinity
	I0318 11:30:03.193413    4320 command_runner.go:130] > LimitCORE=infinity
	I0318 11:30:03.193413    4320 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0318 11:30:03.193413    4320 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0318 11:30:03.193413    4320 command_runner.go:130] > TasksMax=infinity
	I0318 11:30:03.193413    4320 command_runner.go:130] > TimeoutStartSec=0
	I0318 11:30:03.193413    4320 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0318 11:30:03.193413    4320 command_runner.go:130] > Delegate=yes
	I0318 11:30:03.193413    4320 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0318 11:30:03.193413    4320 command_runner.go:130] > KillMode=process
	I0318 11:30:03.194014    4320 command_runner.go:130] > [Install]
	I0318 11:30:03.194219    4320 command_runner.go:130] > WantedBy=multi-user.target
	I0318 11:30:03.209278    4320 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0318 11:30:03.244392    4320 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0318 11:30:03.289294    4320 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0318 11:30:03.326852    4320 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0318 11:30:03.348596    4320 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0318 11:30:03.382108    4320 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0318 11:30:03.398332    4320 ssh_runner.go:195] Run: which cri-dockerd
	I0318 11:30:03.403599    4320 command_runner.go:130] > /usr/bin/cri-dockerd
	I0318 11:30:03.417996    4320 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0318 11:30:03.434212    4320 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0318 11:30:03.479591    4320 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0318 11:30:03.719322    4320 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0318 11:30:03.947269    4320 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0318 11:30:03.947269    4320 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0318 11:30:03.992658    4320 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 11:30:04.240311    4320 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0318 11:30:17.061760    4320 ssh_runner.go:235] Completed: sudo systemctl restart docker: (12.8213534s)
	I0318 11:30:17.073964    4320 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0318 11:30:17.107448    4320 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0318 11:30:17.142591    4320 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0318 11:30:17.175834    4320 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0318 11:30:17.343798    4320 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0318 11:30:17.508681    4320 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 11:30:17.681701    4320 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0318 11:30:17.717651    4320 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0318 11:30:17.761557    4320 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 11:30:17.963323    4320 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0318 11:30:18.071663    4320 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0318 11:30:18.081389    4320 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0318 11:30:18.090202    4320 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0318 11:30:18.090912    4320 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0318 11:30:18.090912    4320 command_runner.go:130] > Device: 0,22	Inode: 1432        Links: 1
	I0318 11:30:18.090912    4320 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0318 11:30:18.090912    4320 command_runner.go:130] > Access: 2024-03-18 11:30:17.989957961 +0000
	I0318 11:30:18.091008    4320 command_runner.go:130] > Modify: 2024-03-18 11:30:17.989957961 +0000
	I0318 11:30:18.091008    4320 command_runner.go:130] > Change: 2024-03-18 11:30:17.993957270 +0000
	I0318 11:30:18.091008    4320 command_runner.go:130] >  Birth: -
	I0318 11:30:18.091008    4320 start.go:562] Will wait 60s for crictl version
	I0318 11:30:18.102400    4320 ssh_runner.go:195] Run: which crictl
	I0318 11:30:18.107934    4320 command_runner.go:130] > /usr/bin/crictl
	I0318 11:30:18.118695    4320 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0318 11:30:18.197327    4320 command_runner.go:130] > Version:  0.1.0
	I0318 11:30:18.197327    4320 command_runner.go:130] > RuntimeName:  docker
	I0318 11:30:18.197327    4320 command_runner.go:130] > RuntimeVersion:  25.0.4
	I0318 11:30:18.197327    4320 command_runner.go:130] > RuntimeApiVersion:  v1
	I0318 11:30:18.197327    4320 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  25.0.4
	RuntimeApiVersion:  v1
	I0318 11:30:18.206049    4320 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0318 11:30:18.237121    4320 command_runner.go:130] > 25.0.4
	I0318 11:30:18.247446    4320 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0318 11:30:18.275767    4320 command_runner.go:130] > 25.0.4
	I0318 11:30:18.280785    4320 out.go:204] * Preparing Kubernetes v1.28.4 on Docker 25.0.4 ...
	I0318 11:30:18.281437    4320 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0318 11:30:18.286344    4320 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0318 11:30:18.286381    4320 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0318 11:30:18.286381    4320 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0318 11:30:18.286427    4320 ip.go:207] Found interface: {Index:9 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:93:df:b5 Flags:up|broadcast|multicast|running}
	I0318 11:30:18.288651    4320 ip.go:210] interface addr: fe80::572b:6b1d:9130:e88b/64
	I0318 11:30:18.288651    4320 ip.go:210] interface addr: 172.30.128.1/20
	I0318 11:30:18.296360    4320 ssh_runner.go:195] Run: grep 172.30.128.1	host.minikube.internal$ /etc/hosts
	I0318 11:30:18.301997    4320 command_runner.go:130] > 172.30.128.1	host.minikube.internal
	I0318 11:30:18.306266    4320 kubeadm.go:877] updating cluster {Name:functional-611000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:
v1.28.4 ClusterName:functional-611000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.30.129.196 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9PVersion:9p2000.L
MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0318 11:30:18.306455    4320 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0318 11:30:18.313456    4320 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0318 11:30:18.334908    4320 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.28.4
	I0318 11:30:18.335692    4320 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.28.4
	I0318 11:30:18.335692    4320 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.28.4
	I0318 11:30:18.335692    4320 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.28.4
	I0318 11:30:18.335692    4320 command_runner.go:130] > registry.k8s.io/etcd:3.5.9-0
	I0318 11:30:18.335692    4320 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.10.1
	I0318 11:30:18.335692    4320 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0318 11:30:18.335692    4320 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 11:30:18.335692    4320 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.4
	registry.k8s.io/kube-controller-manager:v1.28.4
	registry.k8s.io/kube-scheduler:v1.28.4
	registry.k8s.io/kube-proxy:v1.28.4
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0318 11:30:18.335692    4320 docker.go:615] Images already preloaded, skipping extraction
	I0318 11:30:18.345288    4320 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0318 11:30:18.373104    4320 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.28.4
	I0318 11:30:18.374003    4320 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.28.4
	I0318 11:30:18.374003    4320 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.28.4
	I0318 11:30:18.374048    4320 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.28.4
	I0318 11:30:18.374048    4320 command_runner.go:130] > registry.k8s.io/etcd:3.5.9-0
	I0318 11:30:18.374048    4320 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.10.1
	I0318 11:30:18.374048    4320 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0318 11:30:18.374048    4320 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 11:30:18.374048    4320 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.4
	registry.k8s.io/kube-proxy:v1.28.4
	registry.k8s.io/kube-controller-manager:v1.28.4
	registry.k8s.io/kube-scheduler:v1.28.4
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0318 11:30:18.374048    4320 cache_images.go:84] Images are preloaded, skipping loading
	I0318 11:30:18.374048    4320 kubeadm.go:928] updating node { 172.30.129.196 8441 v1.28.4 docker true true} ...
	I0318 11:30:18.374048    4320 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-611000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.30.129.196
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:functional-611000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0318 11:30:18.382832    4320 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0318 11:30:18.411456    4320 command_runner.go:130] > cgroupfs
	I0318 11:30:18.411644    4320 cni.go:84] Creating CNI manager for ""
	I0318 11:30:18.411644    4320 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0318 11:30:18.411644    4320 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0318 11:30:18.411644    4320 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.30.129.196 APIServerPort:8441 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-611000 NodeName:functional-611000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.30.129.196"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.30.129.196 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0318 11:30:18.411644    4320 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.30.129.196
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "functional-611000"
	  kubeletExtraArgs:
	    node-ip: 172.30.129.196
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.30.129.196"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0318 11:30:18.424651    4320 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0318 11:30:18.440593    4320 command_runner.go:130] > kubeadm
	I0318 11:30:18.440688    4320 command_runner.go:130] > kubectl
	I0318 11:30:18.440688    4320 command_runner.go:130] > kubelet
	I0318 11:30:18.440754    4320 binaries.go:44] Found k8s binaries, skipping transfer
	I0318 11:30:18.450717    4320 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0318 11:30:18.466990    4320 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0318 11:30:18.492134    4320 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0318 11:30:18.514291    4320 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2165 bytes)
	I0318 11:30:18.554789    4320 ssh_runner.go:195] Run: grep 172.30.129.196	control-plane.minikube.internal$ /etc/hosts
	I0318 11:30:18.557007    4320 command_runner.go:130] > 172.30.129.196	control-plane.minikube.internal
	I0318 11:30:18.570698    4320 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 11:30:18.741265    4320 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 11:30:18.768132    4320 certs.go:68] Setting up C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-611000 for IP: 172.30.129.196
	I0318 11:30:18.768132    4320 certs.go:194] generating shared ca certs ...
	I0318 11:30:18.768132    4320 certs.go:226] acquiring lock for ca certs: {Name:mk09ff4ada22228900e1815c250154c7d8d76854 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 11:30:18.768877    4320 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.key
	I0318 11:30:18.769642    4320 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.key
	I0318 11:30:18.769642    4320 certs.go:256] generating profile certs ...
	I0318 11:30:18.770309    4320 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-611000\client.key
	I0318 11:30:18.770926    4320 certs.go:359] skipping valid signed profile cert regeneration for "minikube": C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-611000\apiserver.key.76d29e35
	I0318 11:30:18.770926    4320 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-611000\proxy-client.key
	I0318 11:30:18.771513    4320 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0318 11:30:18.771766    4320 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0318 11:30:18.771962    4320 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0318 11:30:18.772244    4320 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0318 11:30:18.772388    4320 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-611000\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0318 11:30:18.772585    4320 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-611000\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0318 11:30:18.772756    4320 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-611000\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0318 11:30:18.772927    4320 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-611000\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0318 11:30:18.773461    4320 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\13424.pem (1338 bytes)
	W0318 11:30:18.773932    4320 certs.go:480] ignoring C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\13424_empty.pem, impossibly tiny 0 bytes
	I0318 11:30:18.774103    4320 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0318 11:30:18.774417    4320 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0318 11:30:18.774417    4320 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0318 11:30:18.774417    4320 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0318 11:30:18.775445    4320 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\134242.pem (1708 bytes)
	I0318 11:30:18.775445    4320 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0318 11:30:18.775445    4320 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\13424.pem -> /usr/share/ca-certificates/13424.pem
	I0318 11:30:18.775445    4320 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\134242.pem -> /usr/share/ca-certificates/134242.pem
	I0318 11:30:18.777454    4320 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0318 11:30:18.815566    4320 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0318 11:30:18.850708    4320 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0318 11:30:18.887656    4320 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0318 11:30:18.924659    4320 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-611000\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0318 11:30:18.959786    4320 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-611000\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0318 11:30:18.996032    4320 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-611000\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0318 11:30:19.050128    4320 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-611000\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0318 11:30:19.132139    4320 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0318 11:30:19.184354    4320 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\13424.pem --> /usr/share/ca-certificates/13424.pem (1338 bytes)
	I0318 11:30:19.238431    4320 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\134242.pem --> /usr/share/ca-certificates/134242.pem (1708 bytes)
	I0318 11:30:19.286456    4320 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0318 11:30:19.340957    4320 ssh_runner.go:195] Run: openssl version
	I0318 11:30:19.349681    4320 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0318 11:30:19.363513    4320 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/134242.pem && ln -fs /usr/share/ca-certificates/134242.pem /etc/ssl/certs/134242.pem"
	I0318 11:30:19.399239    4320 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/134242.pem
	I0318 11:30:19.406559    4320 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Mar 18 11:25 /usr/share/ca-certificates/134242.pem
	I0318 11:30:19.406559    4320 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 18 11:25 /usr/share/ca-certificates/134242.pem
	I0318 11:30:19.419710    4320 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/134242.pem
	I0318 11:30:19.431061    4320 command_runner.go:130] > 3ec20f2e
	I0318 11:30:19.444021    4320 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/134242.pem /etc/ssl/certs/3ec20f2e.0"
	I0318 11:30:19.478371    4320 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0318 11:30:19.527493    4320 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0318 11:30:19.536535    4320 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Mar 18 11:07 /usr/share/ca-certificates/minikubeCA.pem
	I0318 11:30:19.536535    4320 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 18 11:07 /usr/share/ca-certificates/minikubeCA.pem
	I0318 11:30:19.551173    4320 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0318 11:30:19.553900    4320 command_runner.go:130] > b5213941
	I0318 11:30:19.572886    4320 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0318 11:30:19.607591    4320 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13424.pem && ln -fs /usr/share/ca-certificates/13424.pem /etc/ssl/certs/13424.pem"
	I0318 11:30:19.641069    4320 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13424.pem
	I0318 11:30:19.647235    4320 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Mar 18 11:25 /usr/share/ca-certificates/13424.pem
	I0318 11:30:19.647235    4320 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 18 11:25 /usr/share/ca-certificates/13424.pem
	I0318 11:30:19.663053    4320 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13424.pem
	I0318 11:30:19.670295    4320 command_runner.go:130] > 51391683
	I0318 11:30:19.683053    4320 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13424.pem /etc/ssl/certs/51391683.0"
	I0318 11:30:19.716128    4320 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0318 11:30:19.724434    4320 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0318 11:30:19.724434    4320 command_runner.go:130] >   Size: 1164      	Blocks: 8          IO Block: 4096   regular file
	I0318 11:30:19.724434    4320 command_runner.go:130] > Device: 8,1	Inode: 2101030     Links: 1
	I0318 11:30:19.724434    4320 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0318 11:30:19.724434    4320 command_runner.go:130] > Access: 2024-03-18 11:28:14.337313350 +0000
	I0318 11:30:19.724434    4320 command_runner.go:130] > Modify: 2024-03-18 11:28:14.337313350 +0000
	I0318 11:30:19.724434    4320 command_runner.go:130] > Change: 2024-03-18 11:28:14.337313350 +0000
	I0318 11:30:19.724552    4320 command_runner.go:130] >  Birth: 2024-03-18 11:28:14.337313350 +0000
	I0318 11:30:19.735999    4320 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0318 11:30:19.744792    4320 command_runner.go:130] > Certificate will not expire
	I0318 11:30:19.755468    4320 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0318 11:30:19.767379    4320 command_runner.go:130] > Certificate will not expire
	I0318 11:30:19.780258    4320 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0318 11:30:19.788092    4320 command_runner.go:130] > Certificate will not expire
	I0318 11:30:19.800995    4320 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0318 11:30:19.812048    4320 command_runner.go:130] > Certificate will not expire
	I0318 11:30:19.826951    4320 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0318 11:30:19.837307    4320 command_runner.go:130] > Certificate will not expire
	I0318 11:30:19.852124    4320 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0318 11:30:19.863427    4320 command_runner.go:130] > Certificate will not expire
	I0318 11:30:19.863427    4320 kubeadm.go:391] StartCluster: {Name:functional-611000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-611000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.30.129.196 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 11:30:19.873841    4320 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0318 11:30:19.945302    4320 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0318 11:30:19.961735    4320 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I0318 11:30:19.961810    4320 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I0318 11:30:19.961810    4320 command_runner.go:130] > /var/lib/minikube/etcd:
	I0318 11:30:19.961873    4320 command_runner.go:130] > member
	W0318 11:30:19.961920    4320 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0318 11:30:19.961943    4320 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0318 11:30:19.961943    4320 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0318 11:30:19.973295    4320 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0318 11:30:19.988963    4320 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0318 11:30:19.990316    4320 kubeconfig.go:125] found "functional-611000" server: "https://172.30.129.196:8441"
	I0318 11:30:19.991881    4320 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	I0318 11:30:19.992447    4320 kapi.go:59] client config for functional-611000: &rest.Config{Host:"https://172.30.129.196:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\functional-611000\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\functional-611000\\client.key", CAFile:"C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1e8b2e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0318 11:30:19.993770    4320 cert_rotation.go:137] Starting client certificate rotation controller
	I0318 11:30:20.005026    4320 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0318 11:30:20.041853    4320 kubeadm.go:624] The running cluster does not require reconfiguration: 172.30.129.196
	I0318 11:30:20.041853    4320 kubeadm.go:1154] stopping kube-system containers ...
	I0318 11:30:20.055422    4320 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0318 11:30:20.146117    4320 command_runner.go:130] > aa52b0846b54
	I0318 11:30:20.146157    4320 command_runner.go:130] > 09072322ba1e
	I0318 11:30:20.146157    4320 command_runner.go:130] > 2bfb1b42bdc5
	I0318 11:30:20.146157    4320 command_runner.go:130] > 7ceb9e5328a9
	I0318 11:30:20.146157    4320 command_runner.go:130] > d932944e55e5
	I0318 11:30:20.146157    4320 command_runner.go:130] > d2f1e6a8164c
	I0318 11:30:20.146157    4320 command_runner.go:130] > f21c91a0234d
	I0318 11:30:20.146157    4320 command_runner.go:130] > 222b8eb8da22
	I0318 11:30:20.146157    4320 command_runner.go:130] > dbf4c4348913
	I0318 11:30:20.146157    4320 command_runner.go:130] > c0386d4da284
	I0318 11:30:20.146157    4320 command_runner.go:130] > 32cf58c500a5
	I0318 11:30:20.146157    4320 command_runner.go:130] > 8a29c7935dd0
	I0318 11:30:20.146157    4320 command_runner.go:130] > 4a9a1adab613
	I0318 11:30:20.146157    4320 command_runner.go:130] > f7f42619f4a1
	I0318 11:30:20.146157    4320 command_runner.go:130] > d9e1d566a455
	I0318 11:30:20.146157    4320 command_runner.go:130] > e1025274c420
	I0318 11:30:20.146157    4320 command_runner.go:130] > bf2283aa4371
	I0318 11:30:20.146157    4320 command_runner.go:130] > 556af51d9fb6
	I0318 11:30:20.146157    4320 command_runner.go:130] > 66dd473db382
	I0318 11:30:20.146157    4320 command_runner.go:130] > 0779dc2162a6
	I0318 11:30:20.146157    4320 command_runner.go:130] > e8482b1f4b69
	I0318 11:30:20.146157    4320 command_runner.go:130] > a6ae84d6b378
	I0318 11:30:20.146157    4320 command_runner.go:130] > d7cb55ca8167
	I0318 11:30:20.146157    4320 docker.go:483] Stopping containers: [aa52b0846b54 09072322ba1e 2bfb1b42bdc5 7ceb9e5328a9 d932944e55e5 d2f1e6a8164c f21c91a0234d 222b8eb8da22 dbf4c4348913 c0386d4da284 32cf58c500a5 8a29c7935dd0 4a9a1adab613 f7f42619f4a1 d9e1d566a455 e1025274c420 bf2283aa4371 556af51d9fb6 66dd473db382 0779dc2162a6 e8482b1f4b69 a6ae84d6b378 d7cb55ca8167]
	I0318 11:30:20.156734    4320 ssh_runner.go:195] Run: docker stop aa52b0846b54 09072322ba1e 2bfb1b42bdc5 7ceb9e5328a9 d932944e55e5 d2f1e6a8164c f21c91a0234d 222b8eb8da22 dbf4c4348913 c0386d4da284 32cf58c500a5 8a29c7935dd0 4a9a1adab613 f7f42619f4a1 d9e1d566a455 e1025274c420 bf2283aa4371 556af51d9fb6 66dd473db382 0779dc2162a6 e8482b1f4b69 a6ae84d6b378 d7cb55ca8167
	I0318 11:30:21.724359    4320 command_runner.go:130] > aa52b0846b54
	I0318 11:30:21.724359    4320 command_runner.go:130] > 09072322ba1e
	I0318 11:30:21.724359    4320 command_runner.go:130] > 2bfb1b42bdc5
	I0318 11:30:21.725745    4320 command_runner.go:130] > 7ceb9e5328a9
	I0318 11:30:21.725745    4320 command_runner.go:130] > d932944e55e5
	I0318 11:30:21.725745    4320 command_runner.go:130] > d2f1e6a8164c
	I0318 11:30:21.725745    4320 command_runner.go:130] > f21c91a0234d
	I0318 11:30:21.725745    4320 command_runner.go:130] > 222b8eb8da22
	I0318 11:30:21.725745    4320 command_runner.go:130] > dbf4c4348913
	I0318 11:30:21.725745    4320 command_runner.go:130] > c0386d4da284
	I0318 11:30:21.725745    4320 command_runner.go:130] > 32cf58c500a5
	I0318 11:30:21.725745    4320 command_runner.go:130] > 8a29c7935dd0
	I0318 11:30:21.725745    4320 command_runner.go:130] > 4a9a1adab613
	I0318 11:30:21.725745    4320 command_runner.go:130] > f7f42619f4a1
	I0318 11:30:21.725745    4320 command_runner.go:130] > d9e1d566a455
	I0318 11:30:21.725839    4320 command_runner.go:130] > e1025274c420
	I0318 11:30:21.725839    4320 command_runner.go:130] > bf2283aa4371
	I0318 11:30:21.725839    4320 command_runner.go:130] > 556af51d9fb6
	I0318 11:30:21.725839    4320 command_runner.go:130] > 66dd473db382
	I0318 11:30:21.725839    4320 command_runner.go:130] > 0779dc2162a6
	I0318 11:30:21.725839    4320 command_runner.go:130] > e8482b1f4b69
	I0318 11:30:21.725907    4320 command_runner.go:130] > a6ae84d6b378
	I0318 11:30:21.725907    4320 command_runner.go:130] > d7cb55ca8167
	I0318 11:30:21.725983    4320 ssh_runner.go:235] Completed: docker stop aa52b0846b54 09072322ba1e 2bfb1b42bdc5 7ceb9e5328a9 d932944e55e5 d2f1e6a8164c f21c91a0234d 222b8eb8da22 dbf4c4348913 c0386d4da284 32cf58c500a5 8a29c7935dd0 4a9a1adab613 f7f42619f4a1 d9e1d566a455 e1025274c420 bf2283aa4371 556af51d9fb6 66dd473db382 0779dc2162a6 e8482b1f4b69 a6ae84d6b378 d7cb55ca8167: (1.569161s)
	I0318 11:30:21.737013    4320 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0318 11:30:21.797731    4320 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 11:30:21.800390    4320 command_runner.go:130] > -rw------- 1 root root 5643 Mar 18 11:28 /etc/kubernetes/admin.conf
	I0318 11:30:21.813320    4320 command_runner.go:130] > -rw------- 1 root root 5658 Mar 18 11:28 /etc/kubernetes/controller-manager.conf
	I0318 11:30:21.813320    4320 command_runner.go:130] > -rw------- 1 root root 2007 Mar 18 11:28 /etc/kubernetes/kubelet.conf
	I0318 11:30:21.813320    4320 command_runner.go:130] > -rw------- 1 root root 5602 Mar 18 11:28 /etc/kubernetes/scheduler.conf
	I0318 11:30:21.813320    4320 kubeadm.go:156] found existing configuration files:
	-rw------- 1 root root 5643 Mar 18 11:28 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5658 Mar 18 11:28 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2007 Mar 18 11:28 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5602 Mar 18 11:28 /etc/kubernetes/scheduler.conf
	
	I0318 11:30:21.825517    4320 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I0318 11:30:21.840979    4320 command_runner.go:130] >     server: https://control-plane.minikube.internal:8441
	I0318 11:30:21.852675    4320 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I0318 11:30:21.868464    4320 command_runner.go:130] >     server: https://control-plane.minikube.internal:8441
	I0318 11:30:21.879865    4320 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I0318 11:30:21.894835    4320 kubeadm.go:162] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0318 11:30:21.905927    4320 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 11:30:21.930796    4320 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I0318 11:30:21.948329    4320 kubeadm.go:162] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0318 11:30:21.959837    4320 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0318 11:30:21.988569    4320 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0318 11:30:22.006466    4320 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 11:30:22.089383    4320 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0318 11:30:22.089383    4320 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0318 11:30:22.089383    4320 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0318 11:30:22.089383    4320 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0318 11:30:22.089383    4320 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I0318 11:30:22.089383    4320 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I0318 11:30:22.089383    4320 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I0318 11:30:22.089383    4320 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I0318 11:30:22.089383    4320 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I0318 11:30:22.089383    4320 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0318 11:30:22.089383    4320 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0318 11:30:22.089383    4320 command_runner.go:130] > [certs] Using the existing "sa" key
	I0318 11:30:22.089383    4320 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 11:30:23.366149    4320 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0318 11:30:23.366149    4320 command_runner.go:130] > [kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/admin.conf"
	I0318 11:30:23.366149    4320 command_runner.go:130] > [kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/kubelet.conf"
	I0318 11:30:23.366149    4320 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0318 11:30:23.366149    4320 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0318 11:30:23.366149    4320 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.2767568s)
	I0318 11:30:23.366149    4320 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0318 11:30:23.439832    4320 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0318 11:30:23.439832    4320 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0318 11:30:23.439832    4320 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0318 11:30:23.616108    4320 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 11:30:23.682861    4320 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0318 11:30:23.686427    4320 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0318 11:30:23.687533    4320 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0318 11:30:23.689740    4320 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0318 11:30:23.696903    4320 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0318 11:30:23.764733    4320 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0318 11:30:23.768811    4320 api_server.go:52] waiting for apiserver process to appear ...
	I0318 11:30:23.780661    4320 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 11:30:24.289276    4320 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 11:30:24.782136    4320 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 11:30:25.290699    4320 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 11:30:25.303279    4320 command_runner.go:130] > 6872
	I0318 11:30:25.303279    4320 api_server.go:72] duration metric: took 1.5344568s to wait for apiserver process to appear ...
	I0318 11:30:25.303279    4320 api_server.go:88] waiting for apiserver healthz status ...
	I0318 11:30:25.303279    4320 api_server.go:253] Checking apiserver healthz at https://172.30.129.196:8441/healthz ...
	I0318 11:30:27.867326    4320 api_server.go:279] https://172.30.129.196:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0318 11:30:27.872321    4320 api_server.go:103] status: https://172.30.129.196:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0318 11:30:27.872321    4320 api_server.go:253] Checking apiserver healthz at https://172.30.129.196:8441/healthz ...
	I0318 11:30:27.938655    4320 api_server.go:279] https://172.30.129.196:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0318 11:30:27.947480    4320 api_server.go:103] status: https://172.30.129.196:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0318 11:30:28.308200    4320 api_server.go:253] Checking apiserver healthz at https://172.30.129.196:8441/healthz ...
	I0318 11:30:28.318085    4320 api_server.go:279] https://172.30.129.196:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0318 11:30:28.318294    4320 api_server.go:103] status: https://172.30.129.196:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0318 11:30:28.810758    4320 api_server.go:253] Checking apiserver healthz at https://172.30.129.196:8441/healthz ...
	I0318 11:30:28.819241    4320 api_server.go:279] https://172.30.129.196:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0318 11:30:28.819443    4320 api_server.go:103] status: https://172.30.129.196:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0318 11:30:29.313274    4320 api_server.go:253] Checking apiserver healthz at https://172.30.129.196:8441/healthz ...
	I0318 11:30:29.320572    4320 api_server.go:279] https://172.30.129.196:8441/healthz returned 200:
	ok
	I0318 11:30:29.320572    4320 round_trippers.go:463] GET https://172.30.129.196:8441/version
	I0318 11:30:29.320572    4320 round_trippers.go:469] Request Headers:
	I0318 11:30:29.320572    4320 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:30:29.320572    4320 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:30:29.336728    4320 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0318 11:30:29.336728    4320 round_trippers.go:577] Response Headers:
	I0318 11:30:29.336728    4320 round_trippers.go:580]     Audit-Id: d43833cd-be54-4972-a316-cd11cb94ac4b
	I0318 11:30:29.336728    4320 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 11:30:29.336728    4320 round_trippers.go:580]     Content-Type: application/json
	I0318 11:30:29.336728    4320 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: becf916a-7215-4b3e-9221-0fb1332b3065
	I0318 11:30:29.336728    4320 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d298c40d-3d5a-4ddc-81ba-7712fa7be4af
	I0318 11:30:29.336728    4320 round_trippers.go:580]     Content-Length: 264
	I0318 11:30:29.336728    4320 round_trippers.go:580]     Date: Mon, 18 Mar 2024 11:30:29 GMT
	I0318 11:30:29.336937    4320 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.4",
	  "gitCommit": "bae2c62678db2b5053817bc97181fcc2e8388103",
	  "gitTreeState": "clean",
	  "buildDate": "2023-11-15T16:48:54Z",
	  "goVersion": "go1.20.11",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0318 11:30:29.337150    4320 api_server.go:141] control plane version: v1.28.4
	I0318 11:30:29.337150    4320 api_server.go:131] duration metric: took 4.0338412s to wait for apiserver health ...
	I0318 11:30:29.337260    4320 cni.go:84] Creating CNI manager for ""
	I0318 11:30:29.337260    4320 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0318 11:30:29.340725    4320 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0318 11:30:29.358044    4320 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0318 11:30:29.378024    4320 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0318 11:30:29.411845    4320 system_pods.go:43] waiting for kube-system pods to appear ...
	I0318 11:30:29.412023    4320 round_trippers.go:463] GET https://172.30.129.196:8441/api/v1/namespaces/kube-system/pods
	I0318 11:30:29.412089    4320 round_trippers.go:469] Request Headers:
	I0318 11:30:29.412089    4320 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:30:29.412151    4320 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:30:29.422626    4320 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0318 11:30:29.427439    4320 round_trippers.go:577] Response Headers:
	I0318 11:30:29.427439    4320 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: becf916a-7215-4b3e-9221-0fb1332b3065
	I0318 11:30:29.427513    4320 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d298c40d-3d5a-4ddc-81ba-7712fa7be4af
	I0318 11:30:29.427513    4320 round_trippers.go:580]     Date: Mon, 18 Mar 2024 11:30:29 GMT
	I0318 11:30:29.427513    4320 round_trippers.go:580]     Audit-Id: 63d1abfb-7768-4da3-8af5-cea00a32dbf9
	I0318 11:30:29.427513    4320 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 11:30:29.427513    4320 round_trippers.go:580]     Content-Type: application/json
	I0318 11:30:29.429937    4320 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"500"},"items":[{"metadata":{"name":"coredns-5dd5756b68-sxq4l","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"5ebb5db0-da82-4fc4-bc0e-3e680e05723f","resourceVersion":"493","creationTimestamp":"2024-03-18T11:28:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"1c32b30a-52a3-480f-9cf3-8d149b086fd4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T11:28:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1c32b30a-52a3-480f-9cf3-8d149b086fd4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 50297 chars]
	I0318 11:30:29.435721    4320 system_pods.go:59] 7 kube-system pods found
	I0318 11:30:29.435721    4320 system_pods.go:61] "coredns-5dd5756b68-sxq4l" [5ebb5db0-da82-4fc4-bc0e-3e680e05723f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0318 11:30:29.435721    4320 system_pods.go:61] "etcd-functional-611000" [0b98738c-76ab-4e2c-aa64-eadc49f8338a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0318 11:30:29.435721    4320 system_pods.go:61] "kube-apiserver-functional-611000" [f4f6d73c-451d-4848-a7e4-0f9097554e64] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0318 11:30:29.435721    4320 system_pods.go:61] "kube-controller-manager-functional-611000" [36a3f019-501a-46c9-9431-24e94f6c0ea9] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0318 11:30:29.435721    4320 system_pods.go:61] "kube-proxy-sh9ps" [da5ff102-6ce0-4f7c-bd28-923b0dbd135f] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0318 11:30:29.435721    4320 system_pods.go:61] "kube-scheduler-functional-611000" [460cc842-8959-4a52-9ebf-79401a8fd0eb] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0318 11:30:29.435721    4320 system_pods.go:61] "storage-provisioner" [ea9b5188-5461-42f3-af2a-1c0652841cdd] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0318 11:30:29.435721    4320 system_pods.go:74] duration metric: took 23.8753ms to wait for pod list to return data ...
	I0318 11:30:29.435721    4320 node_conditions.go:102] verifying NodePressure condition ...
	I0318 11:30:29.435721    4320 round_trippers.go:463] GET https://172.30.129.196:8441/api/v1/nodes
	I0318 11:30:29.435721    4320 round_trippers.go:469] Request Headers:
	I0318 11:30:29.435721    4320 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:30:29.435721    4320 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:30:29.446202    4320 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0318 11:30:29.448962    4320 round_trippers.go:577] Response Headers:
	I0318 11:30:29.448962    4320 round_trippers.go:580]     Date: Mon, 18 Mar 2024 11:30:29 GMT
	I0318 11:30:29.448962    4320 round_trippers.go:580]     Audit-Id: 57929810-1e56-40c9-82cc-9b9a9acbf64b
	I0318 11:30:29.448962    4320 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 11:30:29.448962    4320 round_trippers.go:580]     Content-Type: application/json
	I0318 11:30:29.448962    4320 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: becf916a-7215-4b3e-9221-0fb1332b3065
	I0318 11:30:29.448962    4320 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d298c40d-3d5a-4ddc-81ba-7712fa7be4af
	I0318 11:30:29.449267    4320 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"500"},"items":[{"metadata":{"name":"functional-611000","uid":"5a1dbdd1-bab4-4304-98de-87f701ddf290","resourceVersion":"490","creationTimestamp":"2024-03-18T11:28:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-611000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"functional-611000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T11_28_26_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedF
ields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","ti [truncated 4840 chars]
	I0318 11:30:29.450991    4320 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0318 11:30:29.451056    4320 node_conditions.go:123] node cpu capacity is 2
	I0318 11:30:29.451099    4320 node_conditions.go:105] duration metric: took 15.3353ms to run NodePressure ...
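The NodePressure step above lists the nodes, records each node's cpu and ephemeral-storage capacity, and verifies no pressure-type condition is active. A minimal Python sketch of that check, assuming a Kubernetes v1 Node object decoded to a dict; `node_under_pressure` and the sample values are illustrative, not minikube's code:

```python
# Sketch of the NodePressure check performed after listing nodes.
# Field names follow the Kubernetes v1 Node schema; sample values are
# illustrative (capacity figures echo the log, conditions are assumed).

def node_under_pressure(node: dict) -> bool:
    """Return True if any pressure-type node condition is currently True."""
    pressure = {"MemoryPressure", "DiskPressure", "PIDPressure"}
    return any(
        c["type"] in pressure and c["status"] == "True"
        for c in node.get("status", {}).get("conditions", [])
    )

sample_node = {
    "status": {
        "capacity": {"cpu": "2", "ephemeral-storage": "17734596Ki"},
        "conditions": [
            {"type": "MemoryPressure", "status": "False"},
            {"type": "DiskPressure", "status": "False"},
            {"type": "PIDPressure", "status": "False"},
            {"type": "Ready", "status": "True"},
        ],
    }
}

print(node_under_pressure(sample_node))  # False: node is healthy
```

A healthy result here is what lets the run proceed to the kubeadm addon phase on the next line.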
	I0318 11:30:29.451099    4320 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 11:30:29.812055    4320 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0318 11:30:29.812113    4320 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0318 11:30:29.812166    4320 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0318 11:30:29.812432    4320 round_trippers.go:463] GET https://172.30.129.196:8441/api/v1/namespaces/kube-system/pods?labelSelector=tier%!D(MISSING)control-plane
	I0318 11:30:29.812432    4320 round_trippers.go:469] Request Headers:
	I0318 11:30:29.812488    4320 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:30:29.812488    4320 round_trippers.go:473]     Accept: application/json, */*
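The query string on the GET above is the URL-encoded form of the selector `tier=control-plane`. The literal `%!D(MISSING)` in the log is Go's fmt error marker: the already-encoded `%3D` in the URL was passed back through a printf-style formatter, which read `3D` as a width-3 `D` verb with no matching argument. The encoding itself can be reproduced with the standard library (a Python sketch, not minikube's code):

```python
from urllib.parse import urlencode

# URL-encode the label selector the request above carries;
# '=' inside the value becomes %3D.
query = urlencode({"labelSelector": "tier=control-plane"})
print(query)  # labelSelector=tier%3Dcontrol-plane
```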
	I0318 11:30:29.817435    4320 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 11:30:29.817435    4320 round_trippers.go:577] Response Headers:
	I0318 11:30:29.823211    4320 round_trippers.go:580]     Content-Type: application/json
	I0318 11:30:29.823211    4320 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: becf916a-7215-4b3e-9221-0fb1332b3065
	I0318 11:30:29.823211    4320 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d298c40d-3d5a-4ddc-81ba-7712fa7be4af
	I0318 11:30:29.823211    4320 round_trippers.go:580]     Date: Mon, 18 Mar 2024 11:30:29 GMT
	I0318 11:30:29.823211    4320 round_trippers.go:580]     Audit-Id: fff3c223-7520-4a1f-a851-37bab7580b1c
	I0318 11:30:29.823211    4320 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 11:30:29.824040    4320 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"502"},"items":[{"metadata":{"name":"etcd-functional-611000","namespace":"kube-system","uid":"0b98738c-76ab-4e2c-aa64-eadc49f8338a","resourceVersion":"497","creationTimestamp":"2024-03-18T11:28:23Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.30.129.196:2379","kubernetes.io/config.hash":"207864df90caaf3c6513f9904158da66","kubernetes.io/config.mirror":"207864df90caaf3c6513f9904158da66","kubernetes.io/config.seen":"2024-03-18T11:28:17.673580165Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-611000","uid":"5a1dbdd1-bab4-4304-98de-87f701ddf290","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T11:28:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotati
ons":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f [truncated 30328 chars]
	I0318 11:30:29.825856    4320 kubeadm.go:733] kubelet initialised
	I0318 11:30:29.825899    4320 kubeadm.go:734] duration metric: took 13.7338ms waiting for restarted kubelet to initialise ...
	I0318 11:30:29.825990    4320 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
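The "extra waiting" step polls each system-critical pod until its `Ready` condition reports `"True"`, retrying roughly every 500ms (visible in the timestamps that follow) within the 4m0s budget. A minimal Python sketch of that loop, assuming Kubernetes v1 Pod objects decoded to dicts; `wait_ready` and `fetch_pod` are illustrative names, not minikube's:

```python
import time

# Label selectors the log waits on, one per system-critical component.
CRITICAL_SELECTORS = [
    "k8s-app=kube-dns", "component=etcd", "component=kube-apiserver",
    "component=kube-controller-manager", "k8s-app=kube-proxy",
    "component=kube-scheduler",
]

def is_pod_ready(pod: dict) -> bool:
    """True when the Pod's Ready condition is "True" (Kubernetes v1 schema)."""
    return any(
        c["type"] == "Ready" and c["status"] == "True"
        for c in pod.get("status", {}).get("conditions", [])
    )

def wait_ready(fetch_pod, timeout_s=240.0, interval_s=0.5) -> bool:
    """Poll fetch_pod() about every interval_s until Ready or timeout."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if is_pod_ready(fetch_pod()):
            return True
        time.sleep(interval_s)
    return False
```

The 240s timeout and ~0.5s interval match the 4m0s budget and the half-second gaps between the repeated coredns GETs below.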
	I0318 11:30:29.826035    4320 round_trippers.go:463] GET https://172.30.129.196:8441/api/v1/namespaces/kube-system/pods
	I0318 11:30:29.826101    4320 round_trippers.go:469] Request Headers:
	I0318 11:30:29.826150    4320 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:30:29.826150    4320 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:30:29.828545    4320 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 11:30:29.828545    4320 round_trippers.go:577] Response Headers:
	I0318 11:30:29.828545    4320 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d298c40d-3d5a-4ddc-81ba-7712fa7be4af
	I0318 11:30:29.828545    4320 round_trippers.go:580]     Date: Mon, 18 Mar 2024 11:30:29 GMT
	I0318 11:30:29.828545    4320 round_trippers.go:580]     Audit-Id: 74398af8-2b93-4ac7-8d1a-0d296385aad3
	I0318 11:30:29.829545    4320 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 11:30:29.829545    4320 round_trippers.go:580]     Content-Type: application/json
	I0318 11:30:29.829545    4320 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: becf916a-7215-4b3e-9221-0fb1332b3065
	I0318 11:30:29.830690    4320 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"502"},"items":[{"metadata":{"name":"coredns-5dd5756b68-sxq4l","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"5ebb5db0-da82-4fc4-bc0e-3e680e05723f","resourceVersion":"493","creationTimestamp":"2024-03-18T11:28:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"1c32b30a-52a3-480f-9cf3-8d149b086fd4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T11:28:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1c32b30a-52a3-480f-9cf3-8d149b086fd4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 50297 chars]
	I0318 11:30:29.833457    4320 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-sxq4l" in "kube-system" namespace to be "Ready" ...
	I0318 11:30:29.833672    4320 round_trippers.go:463] GET https://172.30.129.196:8441/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-sxq4l
	I0318 11:30:29.833723    4320 round_trippers.go:469] Request Headers:
	I0318 11:30:29.833723    4320 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:30:29.833723    4320 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:30:29.835648    4320 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0318 11:30:29.835648    4320 round_trippers.go:577] Response Headers:
	I0318 11:30:29.837102    4320 round_trippers.go:580]     Audit-Id: 0d365b6d-4979-49ec-9550-7a789253d9f0
	I0318 11:30:29.837102    4320 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 11:30:29.837102    4320 round_trippers.go:580]     Content-Type: application/json
	I0318 11:30:29.837166    4320 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: becf916a-7215-4b3e-9221-0fb1332b3065
	I0318 11:30:29.837166    4320 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d298c40d-3d5a-4ddc-81ba-7712fa7be4af
	I0318 11:30:29.837166    4320 round_trippers.go:580]     Date: Mon, 18 Mar 2024 11:30:29 GMT
	I0318 11:30:29.837493    4320 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-sxq4l","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"5ebb5db0-da82-4fc4-bc0e-3e680e05723f","resourceVersion":"493","creationTimestamp":"2024-03-18T11:28:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"1c32b30a-52a3-480f-9cf3-8d149b086fd4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T11:28:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1c32b30a-52a3-480f-9cf3-8d149b086fd4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6208 chars]
	I0318 11:30:29.838723    4320 round_trippers.go:463] GET https://172.30.129.196:8441/api/v1/nodes/functional-611000
	I0318 11:30:29.838789    4320 round_trippers.go:469] Request Headers:
	I0318 11:30:29.838789    4320 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:30:29.838789    4320 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:30:29.839596    4320 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0318 11:30:29.839596    4320 round_trippers.go:577] Response Headers:
	I0318 11:30:29.839596    4320 round_trippers.go:580]     Audit-Id: 4587526d-fffb-469d-b9b9-522ff279aec6
	I0318 11:30:29.839596    4320 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 11:30:29.839596    4320 round_trippers.go:580]     Content-Type: application/json
	I0318 11:30:29.839596    4320 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: becf916a-7215-4b3e-9221-0fb1332b3065
	I0318 11:30:29.841616    4320 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d298c40d-3d5a-4ddc-81ba-7712fa7be4af
	I0318 11:30:29.841616    4320 round_trippers.go:580]     Date: Mon, 18 Mar 2024 11:30:29 GMT
	I0318 11:30:29.842234    4320 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-611000","uid":"5a1dbdd1-bab4-4304-98de-87f701ddf290","resourceVersion":"490","creationTimestamp":"2024-03-18T11:28:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-611000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"functional-611000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T11_28_26_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-03-18T11:28:22Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0318 11:30:30.339938    4320 round_trippers.go:463] GET https://172.30.129.196:8441/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-sxq4l
	I0318 11:30:30.340102    4320 round_trippers.go:469] Request Headers:
	I0318 11:30:30.340102    4320 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:30:30.340102    4320 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:30:30.357818    4320 round_trippers.go:574] Response Status: 200 OK in 17 milliseconds
	I0318 11:30:30.357818    4320 round_trippers.go:577] Response Headers:
	I0318 11:30:30.357818    4320 round_trippers.go:580]     Audit-Id: 517991d3-bd90-4430-b32a-f09c5f8c6d06
	I0318 11:30:30.357818    4320 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 11:30:30.357818    4320 round_trippers.go:580]     Content-Type: application/json
	I0318 11:30:30.357818    4320 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: becf916a-7215-4b3e-9221-0fb1332b3065
	I0318 11:30:30.357818    4320 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d298c40d-3d5a-4ddc-81ba-7712fa7be4af
	I0318 11:30:30.357818    4320 round_trippers.go:580]     Date: Mon, 18 Mar 2024 11:30:30 GMT
	I0318 11:30:30.357818    4320 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-sxq4l","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"5ebb5db0-da82-4fc4-bc0e-3e680e05723f","resourceVersion":"504","creationTimestamp":"2024-03-18T11:28:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"1c32b30a-52a3-480f-9cf3-8d149b086fd4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T11:28:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1c32b30a-52a3-480f-9cf3-8d149b086fd4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6385 chars]
	I0318 11:30:30.361672    4320 round_trippers.go:463] GET https://172.30.129.196:8441/api/v1/nodes/functional-611000
	I0318 11:30:30.361672    4320 round_trippers.go:469] Request Headers:
	I0318 11:30:30.361672    4320 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:30:30.361672    4320 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:30:30.362415    4320 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0318 11:30:30.362415    4320 round_trippers.go:577] Response Headers:
	I0318 11:30:30.365853    4320 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: becf916a-7215-4b3e-9221-0fb1332b3065
	I0318 11:30:30.365853    4320 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d298c40d-3d5a-4ddc-81ba-7712fa7be4af
	I0318 11:30:30.365853    4320 round_trippers.go:580]     Date: Mon, 18 Mar 2024 11:30:30 GMT
	I0318 11:30:30.365853    4320 round_trippers.go:580]     Audit-Id: 176b0cc1-8f93-4644-bafb-7710b670e2f0
	I0318 11:30:30.365853    4320 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 11:30:30.365853    4320 round_trippers.go:580]     Content-Type: application/json
	I0318 11:30:30.366110    4320 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-611000","uid":"5a1dbdd1-bab4-4304-98de-87f701ddf290","resourceVersion":"490","creationTimestamp":"2024-03-18T11:28:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-611000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"functional-611000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T11_28_26_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-03-18T11:28:22Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0318 11:30:30.835953    4320 round_trippers.go:463] GET https://172.30.129.196:8441/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-sxq4l
	I0318 11:30:30.835953    4320 round_trippers.go:469] Request Headers:
	I0318 11:30:30.835953    4320 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:30:30.835953    4320 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:30:30.836485    4320 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0318 11:30:30.836485    4320 round_trippers.go:577] Response Headers:
	I0318 11:30:30.836485    4320 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 11:30:30.836485    4320 round_trippers.go:580]     Content-Type: application/json
	I0318 11:30:30.840186    4320 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: becf916a-7215-4b3e-9221-0fb1332b3065
	I0318 11:30:30.840186    4320 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d298c40d-3d5a-4ddc-81ba-7712fa7be4af
	I0318 11:30:30.840186    4320 round_trippers.go:580]     Date: Mon, 18 Mar 2024 11:30:30 GMT
	I0318 11:30:30.840186    4320 round_trippers.go:580]     Audit-Id: 00a001c1-bc95-4d3c-a880-36ac40be8508
	I0318 11:30:30.840493    4320 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-sxq4l","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"5ebb5db0-da82-4fc4-bc0e-3e680e05723f","resourceVersion":"504","creationTimestamp":"2024-03-18T11:28:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"1c32b30a-52a3-480f-9cf3-8d149b086fd4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T11:28:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1c32b30a-52a3-480f-9cf3-8d149b086fd4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6385 chars]
	I0318 11:30:30.841254    4320 round_trippers.go:463] GET https://172.30.129.196:8441/api/v1/nodes/functional-611000
	I0318 11:30:30.841254    4320 round_trippers.go:469] Request Headers:
	I0318 11:30:30.841254    4320 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:30:30.841254    4320 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:30:30.845047    4320 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 11:30:30.845047    4320 round_trippers.go:577] Response Headers:
	I0318 11:30:30.845047    4320 round_trippers.go:580]     Audit-Id: 13b083fe-632b-4fb7-86f6-cb2920c3f9da
	I0318 11:30:30.845047    4320 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 11:30:30.845047    4320 round_trippers.go:580]     Content-Type: application/json
	I0318 11:30:30.845047    4320 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: becf916a-7215-4b3e-9221-0fb1332b3065
	I0318 11:30:30.845047    4320 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d298c40d-3d5a-4ddc-81ba-7712fa7be4af
	I0318 11:30:30.845047    4320 round_trippers.go:580]     Date: Mon, 18 Mar 2024 11:30:30 GMT
	I0318 11:30:30.845047    4320 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-611000","uid":"5a1dbdd1-bab4-4304-98de-87f701ddf290","resourceVersion":"490","creationTimestamp":"2024-03-18T11:28:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-611000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"functional-611000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T11_28_26_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-03-18T11:28:22Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0318 11:30:31.339578    4320 round_trippers.go:463] GET https://172.30.129.196:8441/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-sxq4l
	I0318 11:30:31.339578    4320 round_trippers.go:469] Request Headers:
	I0318 11:30:31.339578    4320 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:30:31.339578    4320 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:30:31.340437    4320 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0318 11:30:31.345414    4320 round_trippers.go:577] Response Headers:
	I0318 11:30:31.345442    4320 round_trippers.go:580]     Audit-Id: e5437715-4cfe-4ddd-9b2b-248f4f9efb0f
	I0318 11:30:31.345442    4320 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 11:30:31.345442    4320 round_trippers.go:580]     Content-Type: application/json
	I0318 11:30:31.345442    4320 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: becf916a-7215-4b3e-9221-0fb1332b3065
	I0318 11:30:31.345442    4320 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d298c40d-3d5a-4ddc-81ba-7712fa7be4af
	I0318 11:30:31.345442    4320 round_trippers.go:580]     Date: Mon, 18 Mar 2024 11:30:31 GMT
	I0318 11:30:31.345442    4320 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-sxq4l","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"5ebb5db0-da82-4fc4-bc0e-3e680e05723f","resourceVersion":"504","creationTimestamp":"2024-03-18T11:28:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"1c32b30a-52a3-480f-9cf3-8d149b086fd4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T11:28:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1c32b30a-52a3-480f-9cf3-8d149b086fd4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6385 chars]
	I0318 11:30:31.346358    4320 round_trippers.go:463] GET https://172.30.129.196:8441/api/v1/nodes/functional-611000
	I0318 11:30:31.346358    4320 round_trippers.go:469] Request Headers:
	I0318 11:30:31.346358    4320 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:30:31.346358    4320 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:30:31.348198    4320 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0318 11:30:31.349204    4320 round_trippers.go:577] Response Headers:
	I0318 11:30:31.349248    4320 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: becf916a-7215-4b3e-9221-0fb1332b3065
	I0318 11:30:31.349248    4320 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d298c40d-3d5a-4ddc-81ba-7712fa7be4af
	I0318 11:30:31.349248    4320 round_trippers.go:580]     Date: Mon, 18 Mar 2024 11:30:31 GMT
	I0318 11:30:31.349248    4320 round_trippers.go:580]     Audit-Id: 9cae872b-f68c-4347-835d-8a840acb4bd4
	I0318 11:30:31.349248    4320 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 11:30:31.349248    4320 round_trippers.go:580]     Content-Type: application/json
	I0318 11:30:31.349456    4320 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-611000","uid":"5a1dbdd1-bab4-4304-98de-87f701ddf290","resourceVersion":"490","creationTimestamp":"2024-03-18T11:28:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-611000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"functional-611000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T11_28_26_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-03-18T11:28:22Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0318 11:30:31.843233    4320 round_trippers.go:463] GET https://172.30.129.196:8441/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-sxq4l
	I0318 11:30:31.843233    4320 round_trippers.go:469] Request Headers:
	I0318 11:30:31.843233    4320 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:30:31.843233    4320 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:30:31.847053    4320 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0318 11:30:31.847053    4320 round_trippers.go:577] Response Headers:
	I0318 11:30:31.847053    4320 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d298c40d-3d5a-4ddc-81ba-7712fa7be4af
	I0318 11:30:31.847053    4320 round_trippers.go:580]     Date: Mon, 18 Mar 2024 11:30:31 GMT
	I0318 11:30:31.847053    4320 round_trippers.go:580]     Audit-Id: ab3e1d5a-2060-48e9-9906-3946c980af51
	I0318 11:30:31.847053    4320 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 11:30:31.847053    4320 round_trippers.go:580]     Content-Type: application/json
	I0318 11:30:31.847053    4320 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: becf916a-7215-4b3e-9221-0fb1332b3065
	I0318 11:30:31.847053    4320 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-sxq4l","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"5ebb5db0-da82-4fc4-bc0e-3e680e05723f","resourceVersion":"504","creationTimestamp":"2024-03-18T11:28:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"1c32b30a-52a3-480f-9cf3-8d149b086fd4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T11:28:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1c32b30a-52a3-480f-9cf3-8d149b086fd4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6385 chars]
	I0318 11:30:31.848028    4320 round_trippers.go:463] GET https://172.30.129.196:8441/api/v1/nodes/functional-611000
	I0318 11:30:31.848091    4320 round_trippers.go:469] Request Headers:
	I0318 11:30:31.848091    4320 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:30:31.848091    4320 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:30:31.848316    4320 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0318 11:30:31.848316    4320 round_trippers.go:577] Response Headers:
	I0318 11:30:31.848316    4320 round_trippers.go:580]     Audit-Id: 034d48d1-a2ae-4f13-8530-9101f3c64cfe
	I0318 11:30:31.848316    4320 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 11:30:31.851205    4320 round_trippers.go:580]     Content-Type: application/json
	I0318 11:30:31.851205    4320 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: becf916a-7215-4b3e-9221-0fb1332b3065
	I0318 11:30:31.851205    4320 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d298c40d-3d5a-4ddc-81ba-7712fa7be4af
	I0318 11:30:31.851205    4320 round_trippers.go:580]     Date: Mon, 18 Mar 2024 11:30:31 GMT
	I0318 11:30:31.851439    4320 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-611000","uid":"5a1dbdd1-bab4-4304-98de-87f701ddf290","resourceVersion":"490","creationTimestamp":"2024-03-18T11:28:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-611000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"functional-611000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T11_28_26_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-03-18T11:28:22Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0318 11:30:31.851976    4320 pod_ready.go:102] pod "coredns-5dd5756b68-sxq4l" in "kube-system" namespace has status "Ready":"False"
	I0318 11:30:32.346214    4320 round_trippers.go:463] GET https://172.30.129.196:8441/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-sxq4l
	I0318 11:30:32.346291    4320 round_trippers.go:469] Request Headers:
	I0318 11:30:32.346291    4320 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:30:32.346291    4320 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:30:32.346692    4320 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0318 11:30:32.350803    4320 round_trippers.go:577] Response Headers:
	I0318 11:30:32.350803    4320 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 11:30:32.350803    4320 round_trippers.go:580]     Content-Type: application/json
	I0318 11:30:32.350803    4320 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: becf916a-7215-4b3e-9221-0fb1332b3065
	I0318 11:30:32.350803    4320 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d298c40d-3d5a-4ddc-81ba-7712fa7be4af
	I0318 11:30:32.350803    4320 round_trippers.go:580]     Date: Mon, 18 Mar 2024 11:30:32 GMT
	I0318 11:30:32.350803    4320 round_trippers.go:580]     Audit-Id: 32ca4bed-f661-4880-a72c-c3168618b657
	I0318 11:30:32.350888    4320 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-sxq4l","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"5ebb5db0-da82-4fc4-bc0e-3e680e05723f","resourceVersion":"504","creationTimestamp":"2024-03-18T11:28:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"1c32b30a-52a3-480f-9cf3-8d149b086fd4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T11:28:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1c32b30a-52a3-480f-9cf3-8d149b086fd4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6385 chars]
	I0318 11:30:32.351998    4320 round_trippers.go:463] GET https://172.30.129.196:8441/api/v1/nodes/functional-611000
	I0318 11:30:32.351998    4320 round_trippers.go:469] Request Headers:
	I0318 11:30:32.351998    4320 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:30:32.352106    4320 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:30:32.355492    4320 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0318 11:30:32.355492    4320 round_trippers.go:577] Response Headers:
	I0318 11:30:32.355492    4320 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: becf916a-7215-4b3e-9221-0fb1332b3065
	I0318 11:30:32.355492    4320 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d298c40d-3d5a-4ddc-81ba-7712fa7be4af
	I0318 11:30:32.355492    4320 round_trippers.go:580]     Date: Mon, 18 Mar 2024 11:30:32 GMT
	I0318 11:30:32.355492    4320 round_trippers.go:580]     Audit-Id: e2134094-667c-4c56-8e42-a2ceea48f0a2
	I0318 11:30:32.355492    4320 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 11:30:32.355492    4320 round_trippers.go:580]     Content-Type: application/json
	I0318 11:30:32.355827    4320 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-611000","uid":"5a1dbdd1-bab4-4304-98de-87f701ddf290","resourceVersion":"490","creationTimestamp":"2024-03-18T11:28:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-611000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"functional-611000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T11_28_26_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-03-18T11:28:22Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0318 11:30:32.848796    4320 round_trippers.go:463] GET https://172.30.129.196:8441/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-sxq4l
	I0318 11:30:32.848891    4320 round_trippers.go:469] Request Headers:
	I0318 11:30:32.848974    4320 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:30:32.848974    4320 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:30:32.849241    4320 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0318 11:30:32.849241    4320 round_trippers.go:577] Response Headers:
	I0318 11:30:32.852994    4320 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 11:30:32.852994    4320 round_trippers.go:580]     Content-Type: application/json
	I0318 11:30:32.852994    4320 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: becf916a-7215-4b3e-9221-0fb1332b3065
	I0318 11:30:32.852994    4320 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d298c40d-3d5a-4ddc-81ba-7712fa7be4af
	I0318 11:30:32.852994    4320 round_trippers.go:580]     Date: Mon, 18 Mar 2024 11:30:32 GMT
	I0318 11:30:32.852994    4320 round_trippers.go:580]     Audit-Id: 0470b44a-5d56-489d-9aee-0bc8c27ec71e
	I0318 11:30:32.852994    4320 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-sxq4l","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"5ebb5db0-da82-4fc4-bc0e-3e680e05723f","resourceVersion":"504","creationTimestamp":"2024-03-18T11:28:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"1c32b30a-52a3-480f-9cf3-8d149b086fd4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T11:28:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1c32b30a-52a3-480f-9cf3-8d149b086fd4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6385 chars]
	I0318 11:30:32.854073    4320 round_trippers.go:463] GET https://172.30.129.196:8441/api/v1/nodes/functional-611000
	I0318 11:30:32.854149    4320 round_trippers.go:469] Request Headers:
	I0318 11:30:32.854149    4320 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:30:32.854149    4320 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:30:32.856940    4320 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 11:30:32.856940    4320 round_trippers.go:577] Response Headers:
	I0318 11:30:32.856940    4320 round_trippers.go:580]     Content-Type: application/json
	I0318 11:30:32.856940    4320 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: becf916a-7215-4b3e-9221-0fb1332b3065
	I0318 11:30:32.857889    4320 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d298c40d-3d5a-4ddc-81ba-7712fa7be4af
	I0318 11:30:32.857922    4320 round_trippers.go:580]     Date: Mon, 18 Mar 2024 11:30:32 GMT
	I0318 11:30:32.857951    4320 round_trippers.go:580]     Audit-Id: f107901f-82c7-483a-9eef-019efbec0564
	I0318 11:30:32.857951    4320 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 11:30:32.857951    4320 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-611000","uid":"5a1dbdd1-bab4-4304-98de-87f701ddf290","resourceVersion":"490","creationTimestamp":"2024-03-18T11:28:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-611000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"functional-611000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T11_28_26_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-03-18T11:28:22Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0318 11:30:33.348781    4320 round_trippers.go:463] GET https://172.30.129.196:8441/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-sxq4l
	I0318 11:30:33.348781    4320 round_trippers.go:469] Request Headers:
	I0318 11:30:33.348781    4320 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:30:33.348781    4320 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:30:33.349479    4320 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0318 11:30:33.349479    4320 round_trippers.go:577] Response Headers:
	I0318 11:30:33.349479    4320 round_trippers.go:580]     Content-Type: application/json
	I0318 11:30:33.353515    4320 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: becf916a-7215-4b3e-9221-0fb1332b3065
	I0318 11:30:33.353515    4320 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d298c40d-3d5a-4ddc-81ba-7712fa7be4af
	I0318 11:30:33.353515    4320 round_trippers.go:580]     Date: Mon, 18 Mar 2024 11:30:33 GMT
	I0318 11:30:33.353515    4320 round_trippers.go:580]     Audit-Id: 58aeca69-fe6a-43a6-b70a-6d6183a11f8d
	I0318 11:30:33.353515    4320 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 11:30:33.353732    4320 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-sxq4l","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"5ebb5db0-da82-4fc4-bc0e-3e680e05723f","resourceVersion":"504","creationTimestamp":"2024-03-18T11:28:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"1c32b30a-52a3-480f-9cf3-8d149b086fd4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T11:28:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1c32b30a-52a3-480f-9cf3-8d149b086fd4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6385 chars]
	I0318 11:30:33.354539    4320 round_trippers.go:463] GET https://172.30.129.196:8441/api/v1/nodes/functional-611000
	I0318 11:30:33.354614    4320 round_trippers.go:469] Request Headers:
	I0318 11:30:33.354614    4320 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:30:33.354614    4320 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:30:33.354812    4320 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0318 11:30:33.357656    4320 round_trippers.go:577] Response Headers:
	I0318 11:30:33.357656    4320 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: becf916a-7215-4b3e-9221-0fb1332b3065
	I0318 11:30:33.357656    4320 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d298c40d-3d5a-4ddc-81ba-7712fa7be4af
	I0318 11:30:33.357656    4320 round_trippers.go:580]     Date: Mon, 18 Mar 2024 11:30:33 GMT
	I0318 11:30:33.357656    4320 round_trippers.go:580]     Audit-Id: be2e8d65-10e8-41b6-ac36-b67dc718b556
	I0318 11:30:33.357656    4320 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 11:30:33.357656    4320 round_trippers.go:580]     Content-Type: application/json
	I0318 11:30:33.357839    4320 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-611000","uid":"5a1dbdd1-bab4-4304-98de-87f701ddf290","resourceVersion":"490","creationTimestamp":"2024-03-18T11:28:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-611000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"functional-611000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T11_28_26_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-03-18T11:28:22Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0318 11:30:33.834754    4320 round_trippers.go:463] GET https://172.30.129.196:8441/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-sxq4l
	I0318 11:30:33.834754    4320 round_trippers.go:469] Request Headers:
	I0318 11:30:33.834754    4320 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:30:33.834754    4320 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:30:33.835401    4320 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0318 11:30:33.835401    4320 round_trippers.go:577] Response Headers:
	I0318 11:30:33.838523    4320 round_trippers.go:580]     Audit-Id: a58b7b8b-a544-4bb6-8be6-ba914253adc4
	I0318 11:30:33.838523    4320 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 11:30:33.838523    4320 round_trippers.go:580]     Content-Type: application/json
	I0318 11:30:33.838523    4320 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: becf916a-7215-4b3e-9221-0fb1332b3065
	I0318 11:30:33.838523    4320 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d298c40d-3d5a-4ddc-81ba-7712fa7be4af
	I0318 11:30:33.838523    4320 round_trippers.go:580]     Date: Mon, 18 Mar 2024 11:30:33 GMT
	I0318 11:30:33.838617    4320 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-sxq4l","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"5ebb5db0-da82-4fc4-bc0e-3e680e05723f","resourceVersion":"558","creationTimestamp":"2024-03-18T11:28:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"1c32b30a-52a3-480f-9cf3-8d149b086fd4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T11:28:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1c32b30a-52a3-480f-9cf3-8d149b086fd4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6156 chars]
	I0318 11:30:33.839508    4320 round_trippers.go:463] GET https://172.30.129.196:8441/api/v1/nodes/functional-611000
	I0318 11:30:33.839585    4320 round_trippers.go:469] Request Headers:
	I0318 11:30:33.839585    4320 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:30:33.839585    4320 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:30:33.843188    4320 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 11:30:33.843399    4320 round_trippers.go:577] Response Headers:
	I0318 11:30:33.843399    4320 round_trippers.go:580]     Audit-Id: b5f30f4a-1190-4edc-adb2-1d6ce1f80433
	I0318 11:30:33.843399    4320 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 11:30:33.843399    4320 round_trippers.go:580]     Content-Type: application/json
	I0318 11:30:33.843468    4320 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: becf916a-7215-4b3e-9221-0fb1332b3065
	I0318 11:30:33.843468    4320 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d298c40d-3d5a-4ddc-81ba-7712fa7be4af
	I0318 11:30:33.843468    4320 round_trippers.go:580]     Date: Mon, 18 Mar 2024 11:30:33 GMT
	I0318 11:30:33.843721    4320 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-611000","uid":"5a1dbdd1-bab4-4304-98de-87f701ddf290","resourceVersion":"490","creationTimestamp":"2024-03-18T11:28:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-611000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"functional-611000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T11_28_26_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-03-18T11:28:22Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0318 11:30:33.844419    4320 pod_ready.go:92] pod "coredns-5dd5756b68-sxq4l" in "kube-system" namespace has status "Ready":"True"
	I0318 11:30:33.844451    4320 pod_ready.go:81] duration metric: took 4.0108851s for pod "coredns-5dd5756b68-sxq4l" in "kube-system" namespace to be "Ready" ...
	I0318 11:30:33.844526    4320 pod_ready.go:78] waiting up to 4m0s for pod "etcd-functional-611000" in "kube-system" namespace to be "Ready" ...
	I0318 11:30:33.844680    4320 round_trippers.go:463] GET https://172.30.129.196:8441/api/v1/namespaces/kube-system/pods/etcd-functional-611000
	I0318 11:30:33.844702    4320 round_trippers.go:469] Request Headers:
	I0318 11:30:33.844702    4320 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:30:33.844702    4320 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:30:33.849379    4320 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 11:30:33.849379    4320 round_trippers.go:577] Response Headers:
	I0318 11:30:33.849379    4320 round_trippers.go:580]     Date: Mon, 18 Mar 2024 11:30:33 GMT
	I0318 11:30:33.849379    4320 round_trippers.go:580]     Audit-Id: d4ddfe20-397b-45a4-baaf-b84e8363bcd2
	I0318 11:30:33.849379    4320 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 11:30:33.849379    4320 round_trippers.go:580]     Content-Type: application/json
	I0318 11:30:33.849379    4320 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: becf916a-7215-4b3e-9221-0fb1332b3065
	I0318 11:30:33.849379    4320 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d298c40d-3d5a-4ddc-81ba-7712fa7be4af
	I0318 11:30:33.849379    4320 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-611000","namespace":"kube-system","uid":"0b98738c-76ab-4e2c-aa64-eadc49f8338a","resourceVersion":"497","creationTimestamp":"2024-03-18T11:28:23Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.30.129.196:2379","kubernetes.io/config.hash":"207864df90caaf3c6513f9904158da66","kubernetes.io/config.mirror":"207864df90caaf3c6513f9904158da66","kubernetes.io/config.seen":"2024-03-18T11:28:17.673580165Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-611000","uid":"5a1dbdd1-bab4-4304-98de-87f701ddf290","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T11:28:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertis
e-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/con [truncated 6446 chars]
	I0318 11:30:33.850211    4320 round_trippers.go:463] GET https://172.30.129.196:8441/api/v1/nodes/functional-611000
	I0318 11:30:33.850211    4320 round_trippers.go:469] Request Headers:
	I0318 11:30:33.850211    4320 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:30:33.850211    4320 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:30:33.852622    4320 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 11:30:33.852622    4320 round_trippers.go:577] Response Headers:
	I0318 11:30:33.852622    4320 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: becf916a-7215-4b3e-9221-0fb1332b3065
	I0318 11:30:33.852622    4320 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d298c40d-3d5a-4ddc-81ba-7712fa7be4af
	I0318 11:30:33.852622    4320 round_trippers.go:580]     Date: Mon, 18 Mar 2024 11:30:33 GMT
	I0318 11:30:33.852622    4320 round_trippers.go:580]     Audit-Id: aa3e1ccd-44e8-4b88-a585-2a0b944176d4
	I0318 11:30:33.852622    4320 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 11:30:33.852622    4320 round_trippers.go:580]     Content-Type: application/json
	I0318 11:30:33.852622    4320 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-611000","uid":"5a1dbdd1-bab4-4304-98de-87f701ddf290","resourceVersion":"490","creationTimestamp":"2024-03-18T11:28:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-611000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"functional-611000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T11_28_26_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-03-18T11:28:22Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0318 11:30:34.356504    4320 round_trippers.go:463] GET https://172.30.129.196:8441/api/v1/namespaces/kube-system/pods/etcd-functional-611000
	I0318 11:30:34.356598    4320 round_trippers.go:469] Request Headers:
	I0318 11:30:34.356598    4320 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:30:34.356598    4320 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:30:34.361660    4320 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0318 11:30:34.361660    4320 round_trippers.go:577] Response Headers:
	I0318 11:30:34.361660    4320 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: becf916a-7215-4b3e-9221-0fb1332b3065
	I0318 11:30:34.361660    4320 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d298c40d-3d5a-4ddc-81ba-7712fa7be4af
	I0318 11:30:34.361660    4320 round_trippers.go:580]     Date: Mon, 18 Mar 2024 11:30:34 GMT
	I0318 11:30:34.361660    4320 round_trippers.go:580]     Audit-Id: 01d6ce48-cb5f-4d46-956e-864a21003b97
	I0318 11:30:34.361660    4320 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 11:30:34.361660    4320 round_trippers.go:580]     Content-Type: application/json
	I0318 11:30:34.361915    4320 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-611000","namespace":"kube-system","uid":"0b98738c-76ab-4e2c-aa64-eadc49f8338a","resourceVersion":"497","creationTimestamp":"2024-03-18T11:28:23Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.30.129.196:2379","kubernetes.io/config.hash":"207864df90caaf3c6513f9904158da66","kubernetes.io/config.mirror":"207864df90caaf3c6513f9904158da66","kubernetes.io/config.seen":"2024-03-18T11:28:17.673580165Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-611000","uid":"5a1dbdd1-bab4-4304-98de-87f701ddf290","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T11:28:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertis
e-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/con [truncated 6446 chars]
	I0318 11:30:34.362595    4320 round_trippers.go:463] GET https://172.30.129.196:8441/api/v1/nodes/functional-611000
	I0318 11:30:34.362595    4320 round_trippers.go:469] Request Headers:
	I0318 11:30:34.362595    4320 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:30:34.362679    4320 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:30:34.362873    4320 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0318 11:30:34.365842    4320 round_trippers.go:577] Response Headers:
	I0318 11:30:34.365899    4320 round_trippers.go:580]     Audit-Id: 094e332a-4f1e-4154-b7b4-a0e85e8b6495
	I0318 11:30:34.365899    4320 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 11:30:34.365899    4320 round_trippers.go:580]     Content-Type: application/json
	I0318 11:30:34.365899    4320 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: becf916a-7215-4b3e-9221-0fb1332b3065
	I0318 11:30:34.365963    4320 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d298c40d-3d5a-4ddc-81ba-7712fa7be4af
	I0318 11:30:34.365963    4320 round_trippers.go:580]     Date: Mon, 18 Mar 2024 11:30:34 GMT
	I0318 11:30:34.366190    4320 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-611000","uid":"5a1dbdd1-bab4-4304-98de-87f701ddf290","resourceVersion":"490","creationTimestamp":"2024-03-18T11:28:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-611000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"functional-611000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T11_28_26_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-03-18T11:28:22Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0318 11:30:34.852472    4320 round_trippers.go:463] GET https://172.30.129.196:8441/api/v1/namespaces/kube-system/pods/etcd-functional-611000
	I0318 11:30:34.852721    4320 round_trippers.go:469] Request Headers:
	I0318 11:30:34.852721    4320 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:30:34.852721    4320 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:30:34.852997    4320 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0318 11:30:34.856789    4320 round_trippers.go:577] Response Headers:
	I0318 11:30:34.856789    4320 round_trippers.go:580]     Date: Mon, 18 Mar 2024 11:30:34 GMT
	I0318 11:30:34.856789    4320 round_trippers.go:580]     Audit-Id: fd0e61e2-38e4-46e3-b417-9d0fb9e1b31d
	I0318 11:30:34.856789    4320 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 11:30:34.856789    4320 round_trippers.go:580]     Content-Type: application/json
	I0318 11:30:34.856789    4320 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: becf916a-7215-4b3e-9221-0fb1332b3065
	I0318 11:30:34.856789    4320 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d298c40d-3d5a-4ddc-81ba-7712fa7be4af
	I0318 11:30:34.857019    4320 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-611000","namespace":"kube-system","uid":"0b98738c-76ab-4e2c-aa64-eadc49f8338a","resourceVersion":"497","creationTimestamp":"2024-03-18T11:28:23Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.30.129.196:2379","kubernetes.io/config.hash":"207864df90caaf3c6513f9904158da66","kubernetes.io/config.mirror":"207864df90caaf3c6513f9904158da66","kubernetes.io/config.seen":"2024-03-18T11:28:17.673580165Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-611000","uid":"5a1dbdd1-bab4-4304-98de-87f701ddf290","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T11:28:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertis
e-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/con [truncated 6446 chars]
	I0318 11:30:34.857782    4320 round_trippers.go:463] GET https://172.30.129.196:8441/api/v1/nodes/functional-611000
	I0318 11:30:34.857782    4320 round_trippers.go:469] Request Headers:
	I0318 11:30:34.857882    4320 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:30:34.857882    4320 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:30:34.858047    4320 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0318 11:30:34.860728    4320 round_trippers.go:577] Response Headers:
	I0318 11:30:34.860728    4320 round_trippers.go:580]     Date: Mon, 18 Mar 2024 11:30:34 GMT
	I0318 11:30:34.860728    4320 round_trippers.go:580]     Audit-Id: 407ef878-c912-4314-b47c-46df0ff31749
	I0318 11:30:34.860728    4320 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 11:30:34.860818    4320 round_trippers.go:580]     Content-Type: application/json
	I0318 11:30:34.860818    4320 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: becf916a-7215-4b3e-9221-0fb1332b3065
	I0318 11:30:34.860818    4320 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d298c40d-3d5a-4ddc-81ba-7712fa7be4af
	I0318 11:30:34.860945    4320 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-611000","uid":"5a1dbdd1-bab4-4304-98de-87f701ddf290","resourceVersion":"490","creationTimestamp":"2024-03-18T11:28:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-611000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"functional-611000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T11_28_26_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-03-18T11:28:22Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0318 11:30:35.355429    4320 round_trippers.go:463] GET https://172.30.129.196:8441/api/v1/namespaces/kube-system/pods/etcd-functional-611000
	I0318 11:30:35.355429    4320 round_trippers.go:469] Request Headers:
	I0318 11:30:35.355429    4320 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:30:35.355429    4320 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:30:35.356085    4320 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0318 11:30:35.356085    4320 round_trippers.go:577] Response Headers:
	I0318 11:30:35.356085    4320 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 11:30:35.356085    4320 round_trippers.go:580]     Content-Type: application/json
	I0318 11:30:35.356085    4320 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: becf916a-7215-4b3e-9221-0fb1332b3065
	I0318 11:30:35.356085    4320 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d298c40d-3d5a-4ddc-81ba-7712fa7be4af
	I0318 11:30:35.356085    4320 round_trippers.go:580]     Date: Mon, 18 Mar 2024 11:30:35 GMT
	I0318 11:30:35.356085    4320 round_trippers.go:580]     Audit-Id: b467f0b5-6d7d-4003-8247-78cb8f66f852
	I0318 11:30:35.360427    4320 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-611000","namespace":"kube-system","uid":"0b98738c-76ab-4e2c-aa64-eadc49f8338a","resourceVersion":"497","creationTimestamp":"2024-03-18T11:28:23Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.30.129.196:2379","kubernetes.io/config.hash":"207864df90caaf3c6513f9904158da66","kubernetes.io/config.mirror":"207864df90caaf3c6513f9904158da66","kubernetes.io/config.seen":"2024-03-18T11:28:17.673580165Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-611000","uid":"5a1dbdd1-bab4-4304-98de-87f701ddf290","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T11:28:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertis
e-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/con [truncated 6446 chars]
	I0318 11:30:35.361307    4320 round_trippers.go:463] GET https://172.30.129.196:8441/api/v1/nodes/functional-611000
	I0318 11:30:35.361307    4320 round_trippers.go:469] Request Headers:
	I0318 11:30:35.361307    4320 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:30:35.361307    4320 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:30:35.361853    4320 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0318 11:30:35.364610    4320 round_trippers.go:577] Response Headers:
	I0318 11:30:35.364610    4320 round_trippers.go:580]     Date: Mon, 18 Mar 2024 11:30:35 GMT
	I0318 11:30:35.364610    4320 round_trippers.go:580]     Audit-Id: 29ddbfbc-c654-494c-8405-f59a0821c039
	I0318 11:30:35.364675    4320 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 11:30:35.364675    4320 round_trippers.go:580]     Content-Type: application/json
	I0318 11:30:35.364675    4320 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: becf916a-7215-4b3e-9221-0fb1332b3065
	I0318 11:30:35.364675    4320 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d298c40d-3d5a-4ddc-81ba-7712fa7be4af
	I0318 11:30:35.365309    4320 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-611000","uid":"5a1dbdd1-bab4-4304-98de-87f701ddf290","resourceVersion":"490","creationTimestamp":"2024-03-18T11:28:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-611000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"functional-611000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T11_28_26_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-03-18T11:28:22Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0318 11:30:35.848292    4320 round_trippers.go:463] GET https://172.30.129.196:8441/api/v1/namespaces/kube-system/pods/etcd-functional-611000
	I0318 11:30:35.848600    4320 round_trippers.go:469] Request Headers:
	I0318 11:30:35.848600    4320 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:30:35.848600    4320 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:30:35.848885    4320 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0318 11:30:35.848885    4320 round_trippers.go:577] Response Headers:
	I0318 11:30:35.851840    4320 round_trippers.go:580]     Date: Mon, 18 Mar 2024 11:30:35 GMT
	I0318 11:30:35.851840    4320 round_trippers.go:580]     Audit-Id: 7e4808dc-7c77-4bd9-817d-fb447205e4db
	I0318 11:30:35.851840    4320 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 11:30:35.851840    4320 round_trippers.go:580]     Content-Type: application/json
	I0318 11:30:35.851840    4320 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: becf916a-7215-4b3e-9221-0fb1332b3065
	I0318 11:30:35.851840    4320 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d298c40d-3d5a-4ddc-81ba-7712fa7be4af
	I0318 11:30:35.851840    4320 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-611000","namespace":"kube-system","uid":"0b98738c-76ab-4e2c-aa64-eadc49f8338a","resourceVersion":"497","creationTimestamp":"2024-03-18T11:28:23Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.30.129.196:2379","kubernetes.io/config.hash":"207864df90caaf3c6513f9904158da66","kubernetes.io/config.mirror":"207864df90caaf3c6513f9904158da66","kubernetes.io/config.seen":"2024-03-18T11:28:17.673580165Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-611000","uid":"5a1dbdd1-bab4-4304-98de-87f701ddf290","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T11:28:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertis
e-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/con [truncated 6446 chars]
	I0318 11:30:35.852919    4320 round_trippers.go:463] GET https://172.30.129.196:8441/api/v1/nodes/functional-611000
	I0318 11:30:35.852919    4320 round_trippers.go:469] Request Headers:
	I0318 11:30:35.852919    4320 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:30:35.852919    4320 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:30:35.857923    4320 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 11:30:35.857923    4320 round_trippers.go:577] Response Headers:
	I0318 11:30:35.857923    4320 round_trippers.go:580]     Audit-Id: dae08d04-c1e0-481a-8835-5309427b6e7d
	I0318 11:30:35.857923    4320 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 11:30:35.857923    4320 round_trippers.go:580]     Content-Type: application/json
	I0318 11:30:35.857923    4320 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: becf916a-7215-4b3e-9221-0fb1332b3065
	I0318 11:30:35.857923    4320 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d298c40d-3d5a-4ddc-81ba-7712fa7be4af
	I0318 11:30:35.857923    4320 round_trippers.go:580]     Date: Mon, 18 Mar 2024 11:30:35 GMT
	I0318 11:30:35.858576    4320 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-611000","uid":"5a1dbdd1-bab4-4304-98de-87f701ddf290","resourceVersion":"490","creationTimestamp":"2024-03-18T11:28:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-611000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"functional-611000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T11_28_26_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-03-18T11:28:22Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0318 11:30:35.858576    4320 pod_ready.go:102] pod "etcd-functional-611000" in "kube-system" namespace has status "Ready":"False"
	I0318 11:30:36.357940    4320 round_trippers.go:463] GET https://172.30.129.196:8441/api/v1/namespaces/kube-system/pods/etcd-functional-611000
	I0318 11:30:36.357940    4320 round_trippers.go:469] Request Headers:
	I0318 11:30:36.357940    4320 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:30:36.357940    4320 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:30:36.358491    4320 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0318 11:30:36.358491    4320 round_trippers.go:577] Response Headers:
	I0318 11:30:36.358491    4320 round_trippers.go:580]     Content-Type: application/json
	I0318 11:30:36.358491    4320 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: becf916a-7215-4b3e-9221-0fb1332b3065
	I0318 11:30:36.358491    4320 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d298c40d-3d5a-4ddc-81ba-7712fa7be4af
	I0318 11:30:36.358491    4320 round_trippers.go:580]     Date: Mon, 18 Mar 2024 11:30:36 GMT
	I0318 11:30:36.358491    4320 round_trippers.go:580]     Audit-Id: 50ce5fdc-3e0a-457e-91bd-7f5703a6ac23
	I0318 11:30:36.358491    4320 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 11:30:36.362809    4320 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-611000","namespace":"kube-system","uid":"0b98738c-76ab-4e2c-aa64-eadc49f8338a","resourceVersion":"497","creationTimestamp":"2024-03-18T11:28:23Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.30.129.196:2379","kubernetes.io/config.hash":"207864df90caaf3c6513f9904158da66","kubernetes.io/config.mirror":"207864df90caaf3c6513f9904158da66","kubernetes.io/config.seen":"2024-03-18T11:28:17.673580165Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-611000","uid":"5a1dbdd1-bab4-4304-98de-87f701ddf290","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T11:28:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertis
e-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/con [truncated 6446 chars]
	I0318 11:30:36.363677    4320 round_trippers.go:463] GET https://172.30.129.196:8441/api/v1/nodes/functional-611000
	I0318 11:30:36.363744    4320 round_trippers.go:469] Request Headers:
	I0318 11:30:36.363744    4320 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:30:36.363744    4320 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:30:36.369489    4320 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 11:30:36.369489    4320 round_trippers.go:577] Response Headers:
	I0318 11:30:36.369489    4320 round_trippers.go:580]     Audit-Id: d1f96da4-fe6a-4625-963f-0c981da2b929
	I0318 11:30:36.369489    4320 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 11:30:36.369489    4320 round_trippers.go:580]     Content-Type: application/json
	I0318 11:30:36.369489    4320 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: becf916a-7215-4b3e-9221-0fb1332b3065
	I0318 11:30:36.369489    4320 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d298c40d-3d5a-4ddc-81ba-7712fa7be4af
	I0318 11:30:36.369489    4320 round_trippers.go:580]     Date: Mon, 18 Mar 2024 11:30:36 GMT
	I0318 11:30:36.369489    4320 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-611000","uid":"5a1dbdd1-bab4-4304-98de-87f701ddf290","resourceVersion":"490","creationTimestamp":"2024-03-18T11:28:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-611000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"functional-611000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T11_28_26_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-03-18T11:28:22Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0318 11:30:36.860824    4320 round_trippers.go:463] GET https://172.30.129.196:8441/api/v1/namespaces/kube-system/pods/etcd-functional-611000
	I0318 11:30:36.860824    4320 round_trippers.go:469] Request Headers:
	I0318 11:30:36.860824    4320 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:30:36.860824    4320 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:30:36.865104    4320 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0318 11:30:36.865210    4320 round_trippers.go:577] Response Headers:
	I0318 11:30:36.865291    4320 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: becf916a-7215-4b3e-9221-0fb1332b3065
	I0318 11:30:36.865291    4320 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d298c40d-3d5a-4ddc-81ba-7712fa7be4af
	I0318 11:30:36.865291    4320 round_trippers.go:580]     Date: Mon, 18 Mar 2024 11:30:36 GMT
	I0318 11:30:36.865291    4320 round_trippers.go:580]     Audit-Id: 2a5b9c96-be07-44e5-b7a2-607417d59a9f
	I0318 11:30:36.865291    4320 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 11:30:36.865291    4320 round_trippers.go:580]     Content-Type: application/json
	I0318 11:30:36.865291    4320 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-611000","namespace":"kube-system","uid":"0b98738c-76ab-4e2c-aa64-eadc49f8338a","resourceVersion":"497","creationTimestamp":"2024-03-18T11:28:23Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.30.129.196:2379","kubernetes.io/config.hash":"207864df90caaf3c6513f9904158da66","kubernetes.io/config.mirror":"207864df90caaf3c6513f9904158da66","kubernetes.io/config.seen":"2024-03-18T11:28:17.673580165Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-611000","uid":"5a1dbdd1-bab4-4304-98de-87f701ddf290","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T11:28:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertis
e-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/con [truncated 6446 chars]
	I0318 11:30:36.866061    4320 round_trippers.go:463] GET https://172.30.129.196:8441/api/v1/nodes/functional-611000
	I0318 11:30:36.866061    4320 round_trippers.go:469] Request Headers:
	I0318 11:30:36.866061    4320 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:30:36.866061    4320 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:30:36.866581    4320 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0318 11:30:36.869286    4320 round_trippers.go:577] Response Headers:
	I0318 11:30:36.869286    4320 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: becf916a-7215-4b3e-9221-0fb1332b3065
	I0318 11:30:36.869286    4320 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d298c40d-3d5a-4ddc-81ba-7712fa7be4af
	I0318 11:30:36.869340    4320 round_trippers.go:580]     Date: Mon, 18 Mar 2024 11:30:36 GMT
	I0318 11:30:36.869340    4320 round_trippers.go:580]     Audit-Id: 20b47761-8e64-46af-b812-cf183c8a7fe4
	I0318 11:30:36.869340    4320 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 11:30:36.869340    4320 round_trippers.go:580]     Content-Type: application/json
	I0318 11:30:36.869340    4320 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-611000","uid":"5a1dbdd1-bab4-4304-98de-87f701ddf290","resourceVersion":"490","creationTimestamp":"2024-03-18T11:28:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-611000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"functional-611000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T11_28_26_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-03-18T11:28:22Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0318 11:30:37.360523    4320 round_trippers.go:463] GET https://172.30.129.196:8441/api/v1/namespaces/kube-system/pods/etcd-functional-611000
	I0318 11:30:37.360523    4320 round_trippers.go:469] Request Headers:
	I0318 11:30:37.360523    4320 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:30:37.360523    4320 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:30:37.361065    4320 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0318 11:30:37.361065    4320 round_trippers.go:577] Response Headers:
	I0318 11:30:37.364275    4320 round_trippers.go:580]     Audit-Id: 6f3b9acf-7b08-4cab-a4ca-f87dc0dbca6f
	I0318 11:30:37.364275    4320 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 11:30:37.364275    4320 round_trippers.go:580]     Content-Type: application/json
	I0318 11:30:37.364275    4320 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: becf916a-7215-4b3e-9221-0fb1332b3065
	I0318 11:30:37.364275    4320 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d298c40d-3d5a-4ddc-81ba-7712fa7be4af
	I0318 11:30:37.364275    4320 round_trippers.go:580]     Date: Mon, 18 Mar 2024 11:30:37 GMT
	I0318 11:30:37.364547    4320 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-611000","namespace":"kube-system","uid":"0b98738c-76ab-4e2c-aa64-eadc49f8338a","resourceVersion":"497","creationTimestamp":"2024-03-18T11:28:23Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.30.129.196:2379","kubernetes.io/config.hash":"207864df90caaf3c6513f9904158da66","kubernetes.io/config.mirror":"207864df90caaf3c6513f9904158da66","kubernetes.io/config.seen":"2024-03-18T11:28:17.673580165Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-611000","uid":"5a1dbdd1-bab4-4304-98de-87f701ddf290","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T11:28:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertis
e-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/con [truncated 6446 chars]
	I0318 11:30:37.364695    4320 round_trippers.go:463] GET https://172.30.129.196:8441/api/v1/nodes/functional-611000
	I0318 11:30:37.364695    4320 round_trippers.go:469] Request Headers:
	I0318 11:30:37.364695    4320 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:30:37.364695    4320 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:30:37.365466    4320 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0318 11:30:37.365466    4320 round_trippers.go:577] Response Headers:
	I0318 11:30:37.369094    4320 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d298c40d-3d5a-4ddc-81ba-7712fa7be4af
	I0318 11:30:37.369094    4320 round_trippers.go:580]     Date: Mon, 18 Mar 2024 11:30:37 GMT
	I0318 11:30:37.369094    4320 round_trippers.go:580]     Audit-Id: e449eedc-7f00-403c-81e8-954f345e572d
	I0318 11:30:37.369094    4320 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 11:30:37.369094    4320 round_trippers.go:580]     Content-Type: application/json
	I0318 11:30:37.369094    4320 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: becf916a-7215-4b3e-9221-0fb1332b3065
	I0318 11:30:37.369312    4320 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-611000","uid":"5a1dbdd1-bab4-4304-98de-87f701ddf290","resourceVersion":"490","creationTimestamp":"2024-03-18T11:28:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-611000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"functional-611000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T11_28_26_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-03-18T11:28:22Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0318 11:30:37.857536    4320 round_trippers.go:463] GET https://172.30.129.196:8441/api/v1/namespaces/kube-system/pods/etcd-functional-611000
	I0318 11:30:37.857536    4320 round_trippers.go:469] Request Headers:
	I0318 11:30:37.857536    4320 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:30:37.857536    4320 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:30:37.858434    4320 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0318 11:30:37.858434    4320 round_trippers.go:577] Response Headers:
	I0318 11:30:37.858434    4320 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: becf916a-7215-4b3e-9221-0fb1332b3065
	I0318 11:30:37.858434    4320 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d298c40d-3d5a-4ddc-81ba-7712fa7be4af
	I0318 11:30:37.858434    4320 round_trippers.go:580]     Date: Mon, 18 Mar 2024 11:30:37 GMT
	I0318 11:30:37.858434    4320 round_trippers.go:580]     Audit-Id: aa7a96cc-db27-4465-a45d-6f6b503f3988
	I0318 11:30:37.858434    4320 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 11:30:37.858434    4320 round_trippers.go:580]     Content-Type: application/json
	I0318 11:30:37.861766    4320 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-611000","namespace":"kube-system","uid":"0b98738c-76ab-4e2c-aa64-eadc49f8338a","resourceVersion":"497","creationTimestamp":"2024-03-18T11:28:23Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.30.129.196:2379","kubernetes.io/config.hash":"207864df90caaf3c6513f9904158da66","kubernetes.io/config.mirror":"207864df90caaf3c6513f9904158da66","kubernetes.io/config.seen":"2024-03-18T11:28:17.673580165Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-611000","uid":"5a1dbdd1-bab4-4304-98de-87f701ddf290","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T11:28:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertis
e-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/con [truncated 6446 chars]
	I0318 11:30:37.862029    4320 round_trippers.go:463] GET https://172.30.129.196:8441/api/v1/nodes/functional-611000
	I0318 11:30:37.862029    4320 round_trippers.go:469] Request Headers:
	I0318 11:30:37.862029    4320 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:30:37.862029    4320 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:30:37.866950    4320 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 11:30:37.866950    4320 round_trippers.go:577] Response Headers:
	I0318 11:30:37.866950    4320 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 11:30:37.866950    4320 round_trippers.go:580]     Content-Type: application/json
	I0318 11:30:37.866950    4320 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: becf916a-7215-4b3e-9221-0fb1332b3065
	I0318 11:30:37.866950    4320 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d298c40d-3d5a-4ddc-81ba-7712fa7be4af
	I0318 11:30:37.866950    4320 round_trippers.go:580]     Date: Mon, 18 Mar 2024 11:30:37 GMT
	I0318 11:30:37.866950    4320 round_trippers.go:580]     Audit-Id: d5713ec1-df49-42a7-a325-8fcee95e99a0
	I0318 11:30:37.867864    4320 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-611000","uid":"5a1dbdd1-bab4-4304-98de-87f701ddf290","resourceVersion":"490","creationTimestamp":"2024-03-18T11:28:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-611000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"functional-611000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T11_28_26_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-03-18T11:28:22Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0318 11:30:37.868033    4320 pod_ready.go:102] pod "etcd-functional-611000" in "kube-system" namespace has status "Ready":"False"
	I0318 11:30:38.357437    4320 round_trippers.go:463] GET https://172.30.129.196:8441/api/v1/namespaces/kube-system/pods/etcd-functional-611000
	I0318 11:30:38.357437    4320 round_trippers.go:469] Request Headers:
	I0318 11:30:38.357437    4320 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:30:38.357437    4320 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:30:38.358065    4320 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0318 11:30:38.361694    4320 round_trippers.go:577] Response Headers:
	I0318 11:30:38.361694    4320 round_trippers.go:580]     Content-Type: application/json
	I0318 11:30:38.361694    4320 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: becf916a-7215-4b3e-9221-0fb1332b3065
	I0318 11:30:38.361694    4320 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d298c40d-3d5a-4ddc-81ba-7712fa7be4af
	I0318 11:30:38.361694    4320 round_trippers.go:580]     Date: Mon, 18 Mar 2024 11:30:38 GMT
	I0318 11:30:38.361694    4320 round_trippers.go:580]     Audit-Id: a530f7fa-2382-4680-bb9a-6658aae0f194
	I0318 11:30:38.361694    4320 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 11:30:38.362108    4320 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-611000","namespace":"kube-system","uid":"0b98738c-76ab-4e2c-aa64-eadc49f8338a","resourceVersion":"497","creationTimestamp":"2024-03-18T11:28:23Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.30.129.196:2379","kubernetes.io/config.hash":"207864df90caaf3c6513f9904158da66","kubernetes.io/config.mirror":"207864df90caaf3c6513f9904158da66","kubernetes.io/config.seen":"2024-03-18T11:28:17.673580165Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-611000","uid":"5a1dbdd1-bab4-4304-98de-87f701ddf290","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T11:28:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertis
e-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/con [truncated 6446 chars]
	I0318 11:30:38.362675    4320 round_trippers.go:463] GET https://172.30.129.196:8441/api/v1/nodes/functional-611000
	I0318 11:30:38.362675    4320 round_trippers.go:469] Request Headers:
	I0318 11:30:38.362675    4320 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:30:38.362675    4320 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:30:38.366664    4320 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 11:30:38.366664    4320 round_trippers.go:577] Response Headers:
	I0318 11:30:38.367197    4320 round_trippers.go:580]     Content-Type: application/json
	I0318 11:30:38.367197    4320 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: becf916a-7215-4b3e-9221-0fb1332b3065
	I0318 11:30:38.367197    4320 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d298c40d-3d5a-4ddc-81ba-7712fa7be4af
	I0318 11:30:38.367197    4320 round_trippers.go:580]     Date: Mon, 18 Mar 2024 11:30:38 GMT
	I0318 11:30:38.367197    4320 round_trippers.go:580]     Audit-Id: cc8ecaf8-a51f-4ff9-b324-2842c31ae560
	I0318 11:30:38.367332    4320 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 11:30:38.367538    4320 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-611000","uid":"5a1dbdd1-bab4-4304-98de-87f701ddf290","resourceVersion":"490","creationTimestamp":"2024-03-18T11:28:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-611000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"functional-611000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T11_28_26_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-03-18T11:28:22Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0318 11:30:38.864652    4320 round_trippers.go:463] GET https://172.30.129.196:8441/api/v1/namespaces/kube-system/pods/etcd-functional-611000
	I0318 11:30:38.864652    4320 round_trippers.go:469] Request Headers:
	I0318 11:30:38.864652    4320 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:30:38.864652    4320 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:30:38.868928    4320 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 11:30:38.869000    4320 round_trippers.go:577] Response Headers:
	I0318 11:30:38.869000    4320 round_trippers.go:580]     Audit-Id: f7a025dc-d6ef-40c1-819e-ce0e8cbac94a
	I0318 11:30:38.869000    4320 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 11:30:38.869000    4320 round_trippers.go:580]     Content-Type: application/json
	I0318 11:30:38.869000    4320 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: becf916a-7215-4b3e-9221-0fb1332b3065
	I0318 11:30:38.869000    4320 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d298c40d-3d5a-4ddc-81ba-7712fa7be4af
	I0318 11:30:38.869000    4320 round_trippers.go:580]     Date: Mon, 18 Mar 2024 11:30:38 GMT
	I0318 11:30:38.869000    4320 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-611000","namespace":"kube-system","uid":"0b98738c-76ab-4e2c-aa64-eadc49f8338a","resourceVersion":"497","creationTimestamp":"2024-03-18T11:28:23Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.30.129.196:2379","kubernetes.io/config.hash":"207864df90caaf3c6513f9904158da66","kubernetes.io/config.mirror":"207864df90caaf3c6513f9904158da66","kubernetes.io/config.seen":"2024-03-18T11:28:17.673580165Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-611000","uid":"5a1dbdd1-bab4-4304-98de-87f701ddf290","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T11:28:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertis
e-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/con [truncated 6446 chars]
	I0318 11:30:38.869811    4320 round_trippers.go:463] GET https://172.30.129.196:8441/api/v1/nodes/functional-611000
	I0318 11:30:38.869811    4320 round_trippers.go:469] Request Headers:
	I0318 11:30:38.869811    4320 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:30:38.869811    4320 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:30:38.873556    4320 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 11:30:38.873622    4320 round_trippers.go:577] Response Headers:
	I0318 11:30:38.873697    4320 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: becf916a-7215-4b3e-9221-0fb1332b3065
	I0318 11:30:38.873697    4320 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d298c40d-3d5a-4ddc-81ba-7712fa7be4af
	I0318 11:30:38.873721    4320 round_trippers.go:580]     Date: Mon, 18 Mar 2024 11:30:38 GMT
	I0318 11:30:38.873721    4320 round_trippers.go:580]     Audit-Id: 92cde28c-d435-42a4-bf4b-39a6e762977e
	I0318 11:30:38.873747    4320 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 11:30:38.873775    4320 round_trippers.go:580]     Content-Type: application/json
	I0318 11:30:38.873978    4320 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-611000","uid":"5a1dbdd1-bab4-4304-98de-87f701ddf290","resourceVersion":"490","creationTimestamp":"2024-03-18T11:28:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-611000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"functional-611000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T11_28_26_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-03-18T11:28:22Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0318 11:30:39.357025    4320 round_trippers.go:463] GET https://172.30.129.196:8441/api/v1/namespaces/kube-system/pods/etcd-functional-611000
	I0318 11:30:39.357025    4320 round_trippers.go:469] Request Headers:
	I0318 11:30:39.357025    4320 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:30:39.357025    4320 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:30:39.366971    4320 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0318 11:30:39.366971    4320 round_trippers.go:577] Response Headers:
	I0318 11:30:39.366971    4320 round_trippers.go:580]     Date: Mon, 18 Mar 2024 11:30:39 GMT
	I0318 11:30:39.366971    4320 round_trippers.go:580]     Audit-Id: 2be4cdd1-6403-4690-bc95-5c1576d0259c
	I0318 11:30:39.366971    4320 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 11:30:39.366971    4320 round_trippers.go:580]     Content-Type: application/json
	I0318 11:30:39.366971    4320 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: becf916a-7215-4b3e-9221-0fb1332b3065
	I0318 11:30:39.366971    4320 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d298c40d-3d5a-4ddc-81ba-7712fa7be4af
	I0318 11:30:39.366971    4320 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-611000","namespace":"kube-system","uid":"0b98738c-76ab-4e2c-aa64-eadc49f8338a","resourceVersion":"497","creationTimestamp":"2024-03-18T11:28:23Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.30.129.196:2379","kubernetes.io/config.hash":"207864df90caaf3c6513f9904158da66","kubernetes.io/config.mirror":"207864df90caaf3c6513f9904158da66","kubernetes.io/config.seen":"2024-03-18T11:28:17.673580165Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-611000","uid":"5a1dbdd1-bab4-4304-98de-87f701ddf290","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T11:28:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertis
e-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/con [truncated 6446 chars]
	I0318 11:30:39.367586    4320 round_trippers.go:463] GET https://172.30.129.196:8441/api/v1/nodes/functional-611000
	I0318 11:30:39.367586    4320 round_trippers.go:469] Request Headers:
	I0318 11:30:39.367586    4320 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:30:39.367586    4320 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:30:39.368902    4320 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0318 11:30:39.368902    4320 round_trippers.go:577] Response Headers:
	I0318 11:30:39.372132    4320 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d298c40d-3d5a-4ddc-81ba-7712fa7be4af
	I0318 11:30:39.372195    4320 round_trippers.go:580]     Date: Mon, 18 Mar 2024 11:30:39 GMT
	I0318 11:30:39.372195    4320 round_trippers.go:580]     Audit-Id: 9da0cc87-b3cd-4233-b1c7-0b756e7bbc64
	I0318 11:30:39.372195    4320 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 11:30:39.372195    4320 round_trippers.go:580]     Content-Type: application/json
	I0318 11:30:39.372195    4320 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: becf916a-7215-4b3e-9221-0fb1332b3065
	I0318 11:30:39.372195    4320 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-611000","uid":"5a1dbdd1-bab4-4304-98de-87f701ddf290","resourceVersion":"490","creationTimestamp":"2024-03-18T11:28:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-611000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"functional-611000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T11_28_26_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-03-18T11:28:22Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0318 11:30:39.845219    4320 round_trippers.go:463] GET https://172.30.129.196:8441/api/v1/namespaces/kube-system/pods/etcd-functional-611000
	I0318 11:30:39.845219    4320 round_trippers.go:469] Request Headers:
	I0318 11:30:39.845219    4320 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:30:39.845219    4320 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:30:39.846187    4320 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0318 11:30:39.849131    4320 round_trippers.go:577] Response Headers:
	I0318 11:30:39.849131    4320 round_trippers.go:580]     Content-Type: application/json
	I0318 11:30:39.849131    4320 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: becf916a-7215-4b3e-9221-0fb1332b3065
	I0318 11:30:39.849182    4320 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d298c40d-3d5a-4ddc-81ba-7712fa7be4af
	I0318 11:30:39.849182    4320 round_trippers.go:580]     Date: Mon, 18 Mar 2024 11:30:39 GMT
	I0318 11:30:39.849182    4320 round_trippers.go:580]     Audit-Id: f79a01a8-651a-4338-8293-377585e6eebf
	I0318 11:30:39.849182    4320 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 11:30:39.849528    4320 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-611000","namespace":"kube-system","uid":"0b98738c-76ab-4e2c-aa64-eadc49f8338a","resourceVersion":"497","creationTimestamp":"2024-03-18T11:28:23Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.30.129.196:2379","kubernetes.io/config.hash":"207864df90caaf3c6513f9904158da66","kubernetes.io/config.mirror":"207864df90caaf3c6513f9904158da66","kubernetes.io/config.seen":"2024-03-18T11:28:17.673580165Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-611000","uid":"5a1dbdd1-bab4-4304-98de-87f701ddf290","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T11:28:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertis
e-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/con [truncated 6446 chars]
	I0318 11:30:39.849764    4320 round_trippers.go:463] GET https://172.30.129.196:8441/api/v1/nodes/functional-611000
	I0318 11:30:39.849764    4320 round_trippers.go:469] Request Headers:
	I0318 11:30:39.849764    4320 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:30:39.849764    4320 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:30:39.850447    4320 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0318 11:30:39.852810    4320 round_trippers.go:577] Response Headers:
	I0318 11:30:39.852810    4320 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: becf916a-7215-4b3e-9221-0fb1332b3065
	I0318 11:30:39.852810    4320 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d298c40d-3d5a-4ddc-81ba-7712fa7be4af
	I0318 11:30:39.852810    4320 round_trippers.go:580]     Date: Mon, 18 Mar 2024 11:30:39 GMT
	I0318 11:30:39.852810    4320 round_trippers.go:580]     Audit-Id: 30437c3b-2b8d-4d3b-8aa7-df85bad69f2c
	I0318 11:30:39.852810    4320 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 11:30:39.852810    4320 round_trippers.go:580]     Content-Type: application/json
	I0318 11:30:39.853148    4320 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-611000","uid":"5a1dbdd1-bab4-4304-98de-87f701ddf290","resourceVersion":"490","creationTimestamp":"2024-03-18T11:28:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-611000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"functional-611000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T11_28_26_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-03-18T11:28:22Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0318 11:30:40.359338    4320 round_trippers.go:463] GET https://172.30.129.196:8441/api/v1/namespaces/kube-system/pods/etcd-functional-611000
	I0318 11:30:40.359338    4320 round_trippers.go:469] Request Headers:
	I0318 11:30:40.359338    4320 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:30:40.359338    4320 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:30:40.385156    4320 round_trippers.go:574] Response Status: 200 OK in 20 milliseconds
	I0318 11:30:40.385156    4320 round_trippers.go:577] Response Headers:
	I0318 11:30:40.385156    4320 round_trippers.go:580]     Audit-Id: 0ea2dc32-d8df-4f5d-bd7b-749815eeea23
	I0318 11:30:40.385156    4320 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 11:30:40.385156    4320 round_trippers.go:580]     Content-Type: application/json
	I0318 11:30:40.385156    4320 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: becf916a-7215-4b3e-9221-0fb1332b3065
	I0318 11:30:40.385156    4320 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d298c40d-3d5a-4ddc-81ba-7712fa7be4af
	I0318 11:30:40.385156    4320 round_trippers.go:580]     Date: Mon, 18 Mar 2024 11:30:40 GMT
	I0318 11:30:40.385572    4320 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-611000","namespace":"kube-system","uid":"0b98738c-76ab-4e2c-aa64-eadc49f8338a","resourceVersion":"497","creationTimestamp":"2024-03-18T11:28:23Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.30.129.196:2379","kubernetes.io/config.hash":"207864df90caaf3c6513f9904158da66","kubernetes.io/config.mirror":"207864df90caaf3c6513f9904158da66","kubernetes.io/config.seen":"2024-03-18T11:28:17.673580165Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-611000","uid":"5a1dbdd1-bab4-4304-98de-87f701ddf290","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T11:28:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertis
e-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/con [truncated 6446 chars]
	I0318 11:30:40.386166    4320 round_trippers.go:463] GET https://172.30.129.196:8441/api/v1/nodes/functional-611000
	I0318 11:30:40.386166    4320 round_trippers.go:469] Request Headers:
	I0318 11:30:40.386166    4320 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:30:40.386166    4320 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:30:40.386784    4320 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0318 11:30:40.386784    4320 round_trippers.go:577] Response Headers:
	I0318 11:30:40.389865    4320 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d298c40d-3d5a-4ddc-81ba-7712fa7be4af
	I0318 11:30:40.389865    4320 round_trippers.go:580]     Date: Mon, 18 Mar 2024 11:30:40 GMT
	I0318 11:30:40.389865    4320 round_trippers.go:580]     Audit-Id: f20483e1-fae4-45fd-9e31-2aeeef6ded7c
	I0318 11:30:40.389865    4320 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 11:30:40.389865    4320 round_trippers.go:580]     Content-Type: application/json
	I0318 11:30:40.389865    4320 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: becf916a-7215-4b3e-9221-0fb1332b3065
	I0318 11:30:40.390224    4320 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-611000","uid":"5a1dbdd1-bab4-4304-98de-87f701ddf290","resourceVersion":"490","creationTimestamp":"2024-03-18T11:28:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-611000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"functional-611000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T11_28_26_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-03-18T11:28:22Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0318 11:30:40.390224    4320 pod_ready.go:102] pod "etcd-functional-611000" in "kube-system" namespace has status "Ready":"False"
	I0318 11:30:40.849080    4320 round_trippers.go:463] GET https://172.30.129.196:8441/api/v1/namespaces/kube-system/pods/etcd-functional-611000
	I0318 11:30:40.849178    4320 round_trippers.go:469] Request Headers:
	I0318 11:30:40.849178    4320 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:30:40.849178    4320 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:30:40.853245    4320 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 11:30:40.853245    4320 round_trippers.go:577] Response Headers:
	I0318 11:30:40.853245    4320 round_trippers.go:580]     Audit-Id: 2107c472-553b-44fc-a121-2f541a7af5e3
	I0318 11:30:40.853245    4320 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 11:30:40.853245    4320 round_trippers.go:580]     Content-Type: application/json
	I0318 11:30:40.853245    4320 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: becf916a-7215-4b3e-9221-0fb1332b3065
	I0318 11:30:40.853245    4320 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d298c40d-3d5a-4ddc-81ba-7712fa7be4af
	I0318 11:30:40.853245    4320 round_trippers.go:580]     Date: Mon, 18 Mar 2024 11:30:40 GMT
	I0318 11:30:40.853245    4320 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-611000","namespace":"kube-system","uid":"0b98738c-76ab-4e2c-aa64-eadc49f8338a","resourceVersion":"497","creationTimestamp":"2024-03-18T11:28:23Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.30.129.196:2379","kubernetes.io/config.hash":"207864df90caaf3c6513f9904158da66","kubernetes.io/config.mirror":"207864df90caaf3c6513f9904158da66","kubernetes.io/config.seen":"2024-03-18T11:28:17.673580165Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-611000","uid":"5a1dbdd1-bab4-4304-98de-87f701ddf290","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T11:28:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertis
e-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/con [truncated 6446 chars]
	I0318 11:30:40.854538    4320 round_trippers.go:463] GET https://172.30.129.196:8441/api/v1/nodes/functional-611000
	I0318 11:30:40.854538    4320 round_trippers.go:469] Request Headers:
	I0318 11:30:40.854589    4320 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:30:40.854589    4320 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:30:40.854790    4320 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0318 11:30:40.854790    4320 round_trippers.go:577] Response Headers:
	I0318 11:30:40.854790    4320 round_trippers.go:580]     Date: Mon, 18 Mar 2024 11:30:40 GMT
	I0318 11:30:40.858121    4320 round_trippers.go:580]     Audit-Id: a4830626-8fc8-4661-af1b-3fe58f5abc4c
	I0318 11:30:40.858121    4320 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 11:30:40.858121    4320 round_trippers.go:580]     Content-Type: application/json
	I0318 11:30:40.858121    4320 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: becf916a-7215-4b3e-9221-0fb1332b3065
	I0318 11:30:40.858121    4320 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d298c40d-3d5a-4ddc-81ba-7712fa7be4af
	I0318 11:30:40.858255    4320 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-611000","uid":"5a1dbdd1-bab4-4304-98de-87f701ddf290","resourceVersion":"490","creationTimestamp":"2024-03-18T11:28:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-611000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"functional-611000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T11_28_26_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-03-18T11:28:22Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0318 11:30:41.360633    4320 round_trippers.go:463] GET https://172.30.129.196:8441/api/v1/namespaces/kube-system/pods/etcd-functional-611000
	I0318 11:30:41.360633    4320 round_trippers.go:469] Request Headers:
	I0318 11:30:41.360633    4320 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:30:41.360633    4320 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:30:41.364928    4320 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 11:30:41.364928    4320 round_trippers.go:577] Response Headers:
	I0318 11:30:41.364928    4320 round_trippers.go:580]     Content-Type: application/json
	I0318 11:30:41.364928    4320 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: becf916a-7215-4b3e-9221-0fb1332b3065
	I0318 11:30:41.364928    4320 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d298c40d-3d5a-4ddc-81ba-7712fa7be4af
	I0318 11:30:41.364928    4320 round_trippers.go:580]     Date: Mon, 18 Mar 2024 11:30:41 GMT
	I0318 11:30:41.364928    4320 round_trippers.go:580]     Audit-Id: e8b14845-9ca6-4ffb-9585-375337ccec9e
	I0318 11:30:41.364928    4320 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 11:30:41.364928    4320 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-611000","namespace":"kube-system","uid":"0b98738c-76ab-4e2c-aa64-eadc49f8338a","resourceVersion":"497","creationTimestamp":"2024-03-18T11:28:23Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.30.129.196:2379","kubernetes.io/config.hash":"207864df90caaf3c6513f9904158da66","kubernetes.io/config.mirror":"207864df90caaf3c6513f9904158da66","kubernetes.io/config.seen":"2024-03-18T11:28:17.673580165Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-611000","uid":"5a1dbdd1-bab4-4304-98de-87f701ddf290","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T11:28:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertis
e-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/con [truncated 6446 chars]
	I0318 11:30:41.365881    4320 round_trippers.go:463] GET https://172.30.129.196:8441/api/v1/nodes/functional-611000
	I0318 11:30:41.365881    4320 round_trippers.go:469] Request Headers:
	I0318 11:30:41.365881    4320 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:30:41.365881    4320 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:30:41.366429    4320 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0318 11:30:41.366429    4320 round_trippers.go:577] Response Headers:
	I0318 11:30:41.366429    4320 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: becf916a-7215-4b3e-9221-0fb1332b3065
	I0318 11:30:41.366429    4320 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d298c40d-3d5a-4ddc-81ba-7712fa7be4af
	I0318 11:30:41.366429    4320 round_trippers.go:580]     Date: Mon, 18 Mar 2024 11:30:41 GMT
	I0318 11:30:41.366429    4320 round_trippers.go:580]     Audit-Id: 6e0cf006-b105-4386-96c4-df6ae718ebb8
	I0318 11:30:41.366429    4320 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 11:30:41.366429    4320 round_trippers.go:580]     Content-Type: application/json
	I0318 11:30:41.369141    4320 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-611000","uid":"5a1dbdd1-bab4-4304-98de-87f701ddf290","resourceVersion":"490","creationTimestamp":"2024-03-18T11:28:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-611000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"functional-611000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T11_28_26_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-03-18T11:28:22Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0318 11:30:41.854605    4320 round_trippers.go:463] GET https://172.30.129.196:8441/api/v1/namespaces/kube-system/pods/etcd-functional-611000
	I0318 11:30:41.854605    4320 round_trippers.go:469] Request Headers:
	I0318 11:30:41.854605    4320 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:30:41.854605    4320 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:30:41.858435    4320 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0318 11:30:41.858435    4320 round_trippers.go:577] Response Headers:
	I0318 11:30:41.858435    4320 round_trippers.go:580]     Audit-Id: 016e0d41-cc83-4a7f-b3cf-ab9363a41a21
	I0318 11:30:41.858435    4320 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 11:30:41.858435    4320 round_trippers.go:580]     Content-Type: application/json
	I0318 11:30:41.858435    4320 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: becf916a-7215-4b3e-9221-0fb1332b3065
	I0318 11:30:41.858435    4320 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d298c40d-3d5a-4ddc-81ba-7712fa7be4af
	I0318 11:30:41.858435    4320 round_trippers.go:580]     Date: Mon, 18 Mar 2024 11:30:41 GMT
	I0318 11:30:41.858435    4320 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-611000","namespace":"kube-system","uid":"0b98738c-76ab-4e2c-aa64-eadc49f8338a","resourceVersion":"497","creationTimestamp":"2024-03-18T11:28:23Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.30.129.196:2379","kubernetes.io/config.hash":"207864df90caaf3c6513f9904158da66","kubernetes.io/config.mirror":"207864df90caaf3c6513f9904158da66","kubernetes.io/config.seen":"2024-03-18T11:28:17.673580165Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-611000","uid":"5a1dbdd1-bab4-4304-98de-87f701ddf290","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T11:28:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertis
e-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/con [truncated 6446 chars]
	I0318 11:30:41.859305    4320 round_trippers.go:463] GET https://172.30.129.196:8441/api/v1/nodes/functional-611000
	I0318 11:30:41.859305    4320 round_trippers.go:469] Request Headers:
	I0318 11:30:41.859305    4320 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:30:41.859305    4320 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:30:41.859827    4320 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0318 11:30:41.859827    4320 round_trippers.go:577] Response Headers:
	I0318 11:30:41.859827    4320 round_trippers.go:580]     Audit-Id: 25665b17-434e-4730-ba05-a8926c359b1d
	I0318 11:30:41.859827    4320 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 11:30:41.862392    4320 round_trippers.go:580]     Content-Type: application/json
	I0318 11:30:41.862392    4320 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: becf916a-7215-4b3e-9221-0fb1332b3065
	I0318 11:30:41.862392    4320 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d298c40d-3d5a-4ddc-81ba-7712fa7be4af
	I0318 11:30:41.862392    4320 round_trippers.go:580]     Date: Mon, 18 Mar 2024 11:30:41 GMT
	I0318 11:30:41.862739    4320 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-611000","uid":"5a1dbdd1-bab4-4304-98de-87f701ddf290","resourceVersion":"490","creationTimestamp":"2024-03-18T11:28:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-611000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"functional-611000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T11_28_26_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-03-18T11:28:22Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0318 11:30:42.353750    4320 round_trippers.go:463] GET https://172.30.129.196:8441/api/v1/namespaces/kube-system/pods/etcd-functional-611000
	I0318 11:30:42.353824    4320 round_trippers.go:469] Request Headers:
	I0318 11:30:42.353824    4320 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:30:42.353824    4320 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:30:42.357639    4320 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 11:30:42.357722    4320 round_trippers.go:577] Response Headers:
	I0318 11:30:42.357818    4320 round_trippers.go:580]     Audit-Id: 4fe3b112-38ca-4b97-a910-824053f50d0d
	I0318 11:30:42.357818    4320 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 11:30:42.357818    4320 round_trippers.go:580]     Content-Type: application/json
	I0318 11:30:42.357893    4320 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: becf916a-7215-4b3e-9221-0fb1332b3065
	I0318 11:30:42.357918    4320 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d298c40d-3d5a-4ddc-81ba-7712fa7be4af
	I0318 11:30:42.357918    4320 round_trippers.go:580]     Date: Mon, 18 Mar 2024 11:30:42 GMT
	I0318 11:30:42.358163    4320 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-611000","namespace":"kube-system","uid":"0b98738c-76ab-4e2c-aa64-eadc49f8338a","resourceVersion":"497","creationTimestamp":"2024-03-18T11:28:23Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.30.129.196:2379","kubernetes.io/config.hash":"207864df90caaf3c6513f9904158da66","kubernetes.io/config.mirror":"207864df90caaf3c6513f9904158da66","kubernetes.io/config.seen":"2024-03-18T11:28:17.673580165Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-611000","uid":"5a1dbdd1-bab4-4304-98de-87f701ddf290","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T11:28:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertis
e-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/con [truncated 6446 chars]
	I0318 11:30:42.358742    4320 round_trippers.go:463] GET https://172.30.129.196:8441/api/v1/nodes/functional-611000
	I0318 11:30:42.358742    4320 round_trippers.go:469] Request Headers:
	I0318 11:30:42.358742    4320 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:30:42.358742    4320 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:30:42.362152    4320 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 11:30:42.362675    4320 round_trippers.go:577] Response Headers:
	I0318 11:30:42.362675    4320 round_trippers.go:580]     Date: Mon, 18 Mar 2024 11:30:42 GMT
	I0318 11:30:42.362675    4320 round_trippers.go:580]     Audit-Id: 94cbec86-cbcc-4c1f-89ab-7c96852d0932
	I0318 11:30:42.362734    4320 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 11:30:42.362734    4320 round_trippers.go:580]     Content-Type: application/json
	I0318 11:30:42.362734    4320 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: becf916a-7215-4b3e-9221-0fb1332b3065
	I0318 11:30:42.362734    4320 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d298c40d-3d5a-4ddc-81ba-7712fa7be4af
	I0318 11:30:42.362997    4320 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-611000","uid":"5a1dbdd1-bab4-4304-98de-87f701ddf290","resourceVersion":"490","creationTimestamp":"2024-03-18T11:28:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-611000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"functional-611000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T11_28_26_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-03-18T11:28:22Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0318 11:30:42.856306    4320 round_trippers.go:463] GET https://172.30.129.196:8441/api/v1/namespaces/kube-system/pods/etcd-functional-611000
	I0318 11:30:42.856364    4320 round_trippers.go:469] Request Headers:
	I0318 11:30:42.856364    4320 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:30:42.856364    4320 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:30:42.859596    4320 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0318 11:30:42.859672    4320 round_trippers.go:577] Response Headers:
	I0318 11:30:42.859672    4320 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 11:30:42.859672    4320 round_trippers.go:580]     Content-Type: application/json
	I0318 11:30:42.859672    4320 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: becf916a-7215-4b3e-9221-0fb1332b3065
	I0318 11:30:42.859672    4320 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d298c40d-3d5a-4ddc-81ba-7712fa7be4af
	I0318 11:30:42.859672    4320 round_trippers.go:580]     Date: Mon, 18 Mar 2024 11:30:42 GMT
	I0318 11:30:42.859672    4320 round_trippers.go:580]     Audit-Id: 78b6238c-7448-4fb6-8570-af31dfc62943
	I0318 11:30:42.859672    4320 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-611000","namespace":"kube-system","uid":"0b98738c-76ab-4e2c-aa64-eadc49f8338a","resourceVersion":"573","creationTimestamp":"2024-03-18T11:28:23Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.30.129.196:2379","kubernetes.io/config.hash":"207864df90caaf3c6513f9904158da66","kubernetes.io/config.mirror":"207864df90caaf3c6513f9904158da66","kubernetes.io/config.seen":"2024-03-18T11:28:17.673580165Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-611000","uid":"5a1dbdd1-bab4-4304-98de-87f701ddf290","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T11:28:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertis
e-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/con [truncated 6222 chars]
	I0318 11:30:42.860403    4320 round_trippers.go:463] GET https://172.30.129.196:8441/api/v1/nodes/functional-611000
	I0318 11:30:42.860403    4320 round_trippers.go:469] Request Headers:
	I0318 11:30:42.860403    4320 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:30:42.860403    4320 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:30:42.864912    4320 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 11:30:42.864912    4320 round_trippers.go:577] Response Headers:
	I0318 11:30:42.864912    4320 round_trippers.go:580]     Audit-Id: a275f93b-060e-497e-979d-11fc547fce79
	I0318 11:30:42.864912    4320 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 11:30:42.864912    4320 round_trippers.go:580]     Content-Type: application/json
	I0318 11:30:42.864912    4320 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: becf916a-7215-4b3e-9221-0fb1332b3065
	I0318 11:30:42.864912    4320 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d298c40d-3d5a-4ddc-81ba-7712fa7be4af
	I0318 11:30:42.864912    4320 round_trippers.go:580]     Date: Mon, 18 Mar 2024 11:30:42 GMT
	I0318 11:30:42.865594    4320 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-611000","uid":"5a1dbdd1-bab4-4304-98de-87f701ddf290","resourceVersion":"490","creationTimestamp":"2024-03-18T11:28:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-611000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"functional-611000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T11_28_26_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-03-18T11:28:22Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0318 11:30:42.865594    4320 pod_ready.go:92] pod "etcd-functional-611000" in "kube-system" namespace has status "Ready":"True"
	I0318 11:30:42.865594    4320 pod_ready.go:81] duration metric: took 9.0209685s for pod "etcd-functional-611000" in "kube-system" namespace to be "Ready" ...
	I0318 11:30:42.865594    4320 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-functional-611000" in "kube-system" namespace to be "Ready" ...
	I0318 11:30:42.866146    4320 round_trippers.go:463] GET https://172.30.129.196:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-611000
	I0318 11:30:42.866146    4320 round_trippers.go:469] Request Headers:
	I0318 11:30:42.866146    4320 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:30:42.866146    4320 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:30:42.868992    4320 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 11:30:42.869546    4320 round_trippers.go:577] Response Headers:
	I0318 11:30:42.869546    4320 round_trippers.go:580]     Date: Mon, 18 Mar 2024 11:30:42 GMT
	I0318 11:30:42.869546    4320 round_trippers.go:580]     Audit-Id: 03eecfce-ae24-436c-a15d-19fc767ef4ef
	I0318 11:30:42.869546    4320 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 11:30:42.869546    4320 round_trippers.go:580]     Content-Type: application/json
	I0318 11:30:42.869546    4320 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: becf916a-7215-4b3e-9221-0fb1332b3065
	I0318 11:30:42.869546    4320 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d298c40d-3d5a-4ddc-81ba-7712fa7be4af
	I0318 11:30:42.869546    4320 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-611000","namespace":"kube-system","uid":"f4f6d73c-451d-4848-a7e4-0f9097554e64","resourceVersion":"562","creationTimestamp":"2024-03-18T11:28:26Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.30.129.196:8441","kubernetes.io/config.hash":"314bb12752b137cfc7117682d46b05ed","kubernetes.io/config.mirror":"314bb12752b137cfc7117682d46b05ed","kubernetes.io/config.seen":"2024-03-18T11:28:26.492942527Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-611000","uid":"5a1dbdd1-bab4-4304-98de-87f701ddf290","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T11:28:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.
kubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernet [truncated 7756 chars]
	I0318 11:30:42.870104    4320 round_trippers.go:463] GET https://172.30.129.196:8441/api/v1/nodes/functional-611000
	I0318 11:30:42.870104    4320 round_trippers.go:469] Request Headers:
	I0318 11:30:42.870104    4320 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:30:42.870104    4320 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:30:42.873575    4320 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 11:30:42.873575    4320 round_trippers.go:577] Response Headers:
	I0318 11:30:42.873575    4320 round_trippers.go:580]     Date: Mon, 18 Mar 2024 11:30:42 GMT
	I0318 11:30:42.873575    4320 round_trippers.go:580]     Audit-Id: 175f054a-7086-4ccf-92ea-cbd17b8e779e
	I0318 11:30:42.873575    4320 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 11:30:42.873575    4320 round_trippers.go:580]     Content-Type: application/json
	I0318 11:30:42.873575    4320 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: becf916a-7215-4b3e-9221-0fb1332b3065
	I0318 11:30:42.873575    4320 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d298c40d-3d5a-4ddc-81ba-7712fa7be4af
	I0318 11:30:42.875339    4320 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-611000","uid":"5a1dbdd1-bab4-4304-98de-87f701ddf290","resourceVersion":"490","creationTimestamp":"2024-03-18T11:28:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-611000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"functional-611000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T11_28_26_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-03-18T11:28:22Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0318 11:30:42.875369    4320 pod_ready.go:92] pod "kube-apiserver-functional-611000" in "kube-system" namespace has status "Ready":"True"
	I0318 11:30:42.875369    4320 pod_ready.go:81] duration metric: took 9.7756ms for pod "kube-apiserver-functional-611000" in "kube-system" namespace to be "Ready" ...
	I0318 11:30:42.875369    4320 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-functional-611000" in "kube-system" namespace to be "Ready" ...
	I0318 11:30:42.875369    4320 round_trippers.go:463] GET https://172.30.129.196:8441/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-611000
	I0318 11:30:42.875369    4320 round_trippers.go:469] Request Headers:
	I0318 11:30:42.875369    4320 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:30:42.875369    4320 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:30:42.883645    4320 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0318 11:30:42.883645    4320 round_trippers.go:577] Response Headers:
	I0318 11:30:42.883645    4320 round_trippers.go:580]     Audit-Id: ea16fad4-cbf4-4a85-b18a-855e62d89c09
	I0318 11:30:42.883645    4320 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 11:30:42.883645    4320 round_trippers.go:580]     Content-Type: application/json
	I0318 11:30:42.883645    4320 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: becf916a-7215-4b3e-9221-0fb1332b3065
	I0318 11:30:42.883645    4320 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d298c40d-3d5a-4ddc-81ba-7712fa7be4af
	I0318 11:30:42.883645    4320 round_trippers.go:580]     Date: Mon, 18 Mar 2024 11:30:42 GMT
	I0318 11:30:42.884319    4320 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-functional-611000","namespace":"kube-system","uid":"36a3f019-501a-46c9-9431-24e94f6c0ea9","resourceVersion":"565","creationTimestamp":"2024-03-18T11:28:26Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"ec6a462cf2733683b1a6364ff35ac701","kubernetes.io/config.mirror":"ec6a462cf2733683b1a6364ff35ac701","kubernetes.io/config.seen":"2024-03-18T11:28:26.492943927Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-611000","uid":"5a1dbdd1-bab4-4304-98de-87f701ddf290","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T11:28:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes
.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{"." [truncated 7433 chars]
	I0318 11:30:42.884898    4320 round_trippers.go:463] GET https://172.30.129.196:8441/api/v1/nodes/functional-611000
	I0318 11:30:42.884957    4320 round_trippers.go:469] Request Headers:
	I0318 11:30:42.884957    4320 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:30:42.884957    4320 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:30:42.885260    4320 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0318 11:30:42.885260    4320 round_trippers.go:577] Response Headers:
	I0318 11:30:42.885260    4320 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d298c40d-3d5a-4ddc-81ba-7712fa7be4af
	I0318 11:30:42.888448    4320 round_trippers.go:580]     Date: Mon, 18 Mar 2024 11:30:42 GMT
	I0318 11:30:42.888448    4320 round_trippers.go:580]     Audit-Id: c8919b13-a5d7-4558-8833-bbf4fdb95146
	I0318 11:30:42.888448    4320 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 11:30:42.888448    4320 round_trippers.go:580]     Content-Type: application/json
	I0318 11:30:42.888488    4320 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: becf916a-7215-4b3e-9221-0fb1332b3065
	I0318 11:30:42.888665    4320 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-611000","uid":"5a1dbdd1-bab4-4304-98de-87f701ddf290","resourceVersion":"490","creationTimestamp":"2024-03-18T11:28:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-611000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"functional-611000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T11_28_26_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-03-18T11:28:22Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0318 11:30:42.888665    4320 pod_ready.go:92] pod "kube-controller-manager-functional-611000" in "kube-system" namespace has status "Ready":"True"
	I0318 11:30:42.888665    4320 pod_ready.go:81] duration metric: took 13.2959ms for pod "kube-controller-manager-functional-611000" in "kube-system" namespace to be "Ready" ...
	I0318 11:30:42.888665    4320 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-sh9ps" in "kube-system" namespace to be "Ready" ...
	I0318 11:30:42.888665    4320 round_trippers.go:463] GET https://172.30.129.196:8441/api/v1/namespaces/kube-system/pods/kube-proxy-sh9ps
	I0318 11:30:42.888665    4320 round_trippers.go:469] Request Headers:
	I0318 11:30:42.889263    4320 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:30:42.889263    4320 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:30:42.893585    4320 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 11:30:42.893732    4320 round_trippers.go:577] Response Headers:
	I0318 11:30:42.893794    4320 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d298c40d-3d5a-4ddc-81ba-7712fa7be4af
	I0318 11:30:42.893830    4320 round_trippers.go:580]     Date: Mon, 18 Mar 2024 11:30:42 GMT
	I0318 11:30:42.893830    4320 round_trippers.go:580]     Audit-Id: 9bbd1639-bc3a-474b-98b9-81e0232040c9
	I0318 11:30:42.893830    4320 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 11:30:42.893830    4320 round_trippers.go:580]     Content-Type: application/json
	I0318 11:30:42.893830    4320 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: becf916a-7215-4b3e-9221-0fb1332b3065
	I0318 11:30:42.896054    4320 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-sh9ps","generateName":"kube-proxy-","namespace":"kube-system","uid":"da5ff102-6ce0-4f7c-bd28-923b0dbd135f","resourceVersion":"506","creationTimestamp":"2024-03-18T11:28:38Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"d17d45d8-ab63-45f7-87a3-522ec56c97bd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T11:28:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d17d45d8-ab63-45f7-87a3-522ec56c97bd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5884 chars]
	I0318 11:30:42.896282    4320 round_trippers.go:463] GET https://172.30.129.196:8441/api/v1/nodes/functional-611000
	I0318 11:30:42.896282    4320 round_trippers.go:469] Request Headers:
	I0318 11:30:42.896282    4320 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:30:42.896282    4320 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:30:42.901042    4320 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 11:30:42.901042    4320 round_trippers.go:577] Response Headers:
	I0318 11:30:42.901111    4320 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d298c40d-3d5a-4ddc-81ba-7712fa7be4af
	I0318 11:30:42.901111    4320 round_trippers.go:580]     Date: Mon, 18 Mar 2024 11:30:42 GMT
	I0318 11:30:42.901143    4320 round_trippers.go:580]     Audit-Id: 79f36d60-c98e-4bb2-a841-81975aa027d6
	I0318 11:30:42.901143    4320 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 11:30:42.901175    4320 round_trippers.go:580]     Content-Type: application/json
	I0318 11:30:42.901175    4320 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: becf916a-7215-4b3e-9221-0fb1332b3065
	I0318 11:30:42.901175    4320 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-611000","uid":"5a1dbdd1-bab4-4304-98de-87f701ddf290","resourceVersion":"490","creationTimestamp":"2024-03-18T11:28:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-611000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"functional-611000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T11_28_26_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-03-18T11:28:22Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0318 11:30:42.901710    4320 pod_ready.go:92] pod "kube-proxy-sh9ps" in "kube-system" namespace has status "Ready":"True"
	I0318 11:30:42.901810    4320 pod_ready.go:81] duration metric: took 13.1447ms for pod "kube-proxy-sh9ps" in "kube-system" namespace to be "Ready" ...
	I0318 11:30:42.901810    4320 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-functional-611000" in "kube-system" namespace to be "Ready" ...
	I0318 11:30:42.901810    4320 round_trippers.go:463] GET https://172.30.129.196:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-611000
	I0318 11:30:42.901810    4320 round_trippers.go:469] Request Headers:
	I0318 11:30:42.901810    4320 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:30:42.901810    4320 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:30:42.908629    4320 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0318 11:30:42.908857    4320 round_trippers.go:577] Response Headers:
	I0318 11:30:42.908857    4320 round_trippers.go:580]     Audit-Id: 59529443-eeac-44c9-b4dc-d551b1fb09a0
	I0318 11:30:42.908891    4320 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 11:30:42.908891    4320 round_trippers.go:580]     Content-Type: application/json
	I0318 11:30:42.908891    4320 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: becf916a-7215-4b3e-9221-0fb1332b3065
	I0318 11:30:42.908891    4320 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d298c40d-3d5a-4ddc-81ba-7712fa7be4af
	I0318 11:30:42.908891    4320 round_trippers.go:580]     Date: Mon, 18 Mar 2024 11:30:42 GMT
	I0318 11:30:42.909132    4320 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-functional-611000","namespace":"kube-system","uid":"460cc842-8959-4a52-9ebf-79401a8fd0eb","resourceVersion":"561","creationTimestamp":"2024-03-18T11:28:26Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"e6f657f0826e03a27b1cb755d0484b8b","kubernetes.io/config.mirror":"e6f657f0826e03a27b1cb755d0484b8b","kubernetes.io/config.seen":"2024-03-18T11:28:26.492995520Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-611000","uid":"5a1dbdd1-bab4-4304-98de-87f701ddf290","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T11:28:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{
},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component": [truncated 4911 chars]
	I0318 11:30:42.909255    4320 round_trippers.go:463] GET https://172.30.129.196:8441/api/v1/nodes/functional-611000
	I0318 11:30:42.909255    4320 round_trippers.go:469] Request Headers:
	I0318 11:30:42.909255    4320 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:30:42.909255    4320 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:30:42.911165    4320 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0318 11:30:42.911165    4320 round_trippers.go:577] Response Headers:
	I0318 11:30:42.911165    4320 round_trippers.go:580]     Date: Mon, 18 Mar 2024 11:30:42 GMT
	I0318 11:30:42.911165    4320 round_trippers.go:580]     Audit-Id: 74228941-cc84-44b3-9ae4-1f9f57d63ed0
	I0318 11:30:42.911165    4320 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 11:30:42.911165    4320 round_trippers.go:580]     Content-Type: application/json
	I0318 11:30:42.911165    4320 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: becf916a-7215-4b3e-9221-0fb1332b3065
	I0318 11:30:42.911165    4320 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d298c40d-3d5a-4ddc-81ba-7712fa7be4af
	I0318 11:30:42.914197    4320 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-611000","uid":"5a1dbdd1-bab4-4304-98de-87f701ddf290","resourceVersion":"490","creationTimestamp":"2024-03-18T11:28:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-611000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"functional-611000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T11_28_26_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-03-18T11:28:22Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0318 11:30:42.914526    4320 pod_ready.go:92] pod "kube-scheduler-functional-611000" in "kube-system" namespace has status "Ready":"True"
	I0318 11:30:42.914526    4320 pod_ready.go:81] duration metric: took 12.7162ms for pod "kube-scheduler-functional-611000" in "kube-system" namespace to be "Ready" ...
	I0318 11:30:42.914526    4320 pod_ready.go:38] duration metric: took 13.0884386s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 11:30:42.914526    4320 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0318 11:30:42.944105    4320 command_runner.go:130] > -16
	I0318 11:30:42.944289    4320 ops.go:34] apiserver oom_adj: -16
	I0318 11:30:42.944345    4320 kubeadm.go:591] duration metric: took 22.9821748s to restartPrimaryControlPlane
	I0318 11:30:42.944345    4320 kubeadm.go:393] duration metric: took 23.080747s to StartCluster
	I0318 11:30:42.944407    4320 settings.go:142] acquiring lock: {Name:mke99fb8c09012609ce6804e7dfd4d68f5541df7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 11:30:42.944779    4320 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	I0318 11:30:42.946382    4320 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\kubeconfig: {Name:mk966a7640504e03827322930a51a762b5508893 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 11:30:42.948810    4320 start.go:234] Will wait 6m0s for node &{Name: IP:172.30.129.196 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 11:30:42.948810    4320 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0318 11:30:42.952748    4320 out.go:177] * Verifying Kubernetes components...
	I0318 11:30:42.948810    4320 addons.go:69] Setting storage-provisioner=true in profile "functional-611000"
	I0318 11:30:42.952748    4320 addons.go:234] Setting addon storage-provisioner=true in "functional-611000"
	W0318 11:30:42.957138    4320 addons.go:243] addon storage-provisioner should already be in state true
	I0318 11:30:42.948810    4320 config.go:182] Loaded profile config "functional-611000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 11:30:42.957138    4320 host.go:66] Checking if "functional-611000" exists ...
	I0318 11:30:42.948810    4320 addons.go:69] Setting default-storageclass=true in profile "functional-611000"
	I0318 11:30:42.957138    4320 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-611000"
	I0318 11:30:42.957849    4320 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-611000 ).state
	I0318 11:30:42.958551    4320 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-611000 ).state
	I0318 11:30:42.971785    4320 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 11:30:43.221507    4320 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 11:30:43.248886    4320 node_ready.go:35] waiting up to 6m0s for node "functional-611000" to be "Ready" ...
	I0318 11:30:43.248971    4320 round_trippers.go:463] GET https://172.30.129.196:8441/api/v1/nodes/functional-611000
	I0318 11:30:43.248971    4320 round_trippers.go:469] Request Headers:
	I0318 11:30:43.248971    4320 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:30:43.248971    4320 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:30:43.250905    4320 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0318 11:30:43.250905    4320 round_trippers.go:577] Response Headers:
	I0318 11:30:43.250905    4320 round_trippers.go:580]     Content-Type: application/json
	I0318 11:30:43.250905    4320 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: becf916a-7215-4b3e-9221-0fb1332b3065
	I0318 11:30:43.250905    4320 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d298c40d-3d5a-4ddc-81ba-7712fa7be4af
	I0318 11:30:43.250905    4320 round_trippers.go:580]     Date: Mon, 18 Mar 2024 11:30:43 GMT
	I0318 11:30:43.250905    4320 round_trippers.go:580]     Audit-Id: ed7d875d-9bfc-45ff-93dd-3788878ea254
	I0318 11:30:43.250905    4320 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 11:30:43.253881    4320 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-611000","uid":"5a1dbdd1-bab4-4304-98de-87f701ddf290","resourceVersion":"490","creationTimestamp":"2024-03-18T11:28:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-611000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"functional-611000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T11_28_26_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-03-18T11:28:22Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0318 11:30:43.254486    4320 node_ready.go:49] node "functional-611000" has status "Ready":"True"
	I0318 11:30:43.254486    4320 node_ready.go:38] duration metric: took 5.5154ms for node "functional-611000" to be "Ready" ...
	I0318 11:30:43.254486    4320 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 11:30:43.272570    4320 round_trippers.go:463] GET https://172.30.129.196:8441/api/v1/namespaces/kube-system/pods
	I0318 11:30:43.272843    4320 round_trippers.go:469] Request Headers:
	I0318 11:30:43.272843    4320 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:30:43.272843    4320 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:30:43.273099    4320 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0318 11:30:43.277571    4320 round_trippers.go:577] Response Headers:
	I0318 11:30:43.277571    4320 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d298c40d-3d5a-4ddc-81ba-7712fa7be4af
	I0318 11:30:43.277571    4320 round_trippers.go:580]     Date: Mon, 18 Mar 2024 11:30:43 GMT
	I0318 11:30:43.277571    4320 round_trippers.go:580]     Audit-Id: fa5ef5d7-706b-4a2b-a53d-a59a98a387fb
	I0318 11:30:43.277571    4320 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 11:30:43.277571    4320 round_trippers.go:580]     Content-Type: application/json
	I0318 11:30:43.277664    4320 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: becf916a-7215-4b3e-9221-0fb1332b3065
	I0318 11:30:43.278715    4320 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"573"},"items":[{"metadata":{"name":"coredns-5dd5756b68-sxq4l","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"5ebb5db0-da82-4fc4-bc0e-3e680e05723f","resourceVersion":"558","creationTimestamp":"2024-03-18T11:28:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"1c32b30a-52a3-480f-9cf3-8d149b086fd4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T11:28:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1c32b30a-52a3-480f-9cf3-8d149b086fd4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 48873 chars]
	I0318 11:30:43.281301    4320 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-sxq4l" in "kube-system" namespace to be "Ready" ...
	I0318 11:30:43.463673    4320 request.go:629] Waited for 182.3705ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.129.196:8441/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-sxq4l
	I0318 11:30:43.463963    4320 round_trippers.go:463] GET https://172.30.129.196:8441/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-sxq4l
	I0318 11:30:43.463963    4320 round_trippers.go:469] Request Headers:
	I0318 11:30:43.463963    4320 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:30:43.463963    4320 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:30:43.469213    4320 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0318 11:30:43.469292    4320 round_trippers.go:577] Response Headers:
	I0318 11:30:43.469292    4320 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: becf916a-7215-4b3e-9221-0fb1332b3065
	I0318 11:30:43.469292    4320 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d298c40d-3d5a-4ddc-81ba-7712fa7be4af
	I0318 11:30:43.469292    4320 round_trippers.go:580]     Date: Mon, 18 Mar 2024 11:30:43 GMT
	I0318 11:30:43.469292    4320 round_trippers.go:580]     Audit-Id: 79b4c9bf-b107-4f52-a951-4a4395addbac
	I0318 11:30:43.469292    4320 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 11:30:43.469292    4320 round_trippers.go:580]     Content-Type: application/json
	I0318 11:30:43.469292    4320 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-sxq4l","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"5ebb5db0-da82-4fc4-bc0e-3e680e05723f","resourceVersion":"558","creationTimestamp":"2024-03-18T11:28:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"1c32b30a-52a3-480f-9cf3-8d149b086fd4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T11:28:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1c32b30a-52a3-480f-9cf3-8d149b086fd4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6156 chars]
	I0318 11:30:43.671425    4320 request.go:629] Waited for 201.1773ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.129.196:8441/api/v1/nodes/functional-611000
	I0318 11:30:43.671577    4320 round_trippers.go:463] GET https://172.30.129.196:8441/api/v1/nodes/functional-611000
	I0318 11:30:43.671577    4320 round_trippers.go:469] Request Headers:
	I0318 11:30:43.671577    4320 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:30:43.671577    4320 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:30:43.672319    4320 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0318 11:30:43.675877    4320 round_trippers.go:577] Response Headers:
	I0318 11:30:43.675877    4320 round_trippers.go:580]     Audit-Id: d363551f-05d4-4fcd-b195-a7144d39c60f
	I0318 11:30:43.675877    4320 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 11:30:43.675877    4320 round_trippers.go:580]     Content-Type: application/json
	I0318 11:30:43.675877    4320 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: becf916a-7215-4b3e-9221-0fb1332b3065
	I0318 11:30:43.675877    4320 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d298c40d-3d5a-4ddc-81ba-7712fa7be4af
	I0318 11:30:43.675877    4320 round_trippers.go:580]     Date: Mon, 18 Mar 2024 11:30:43 GMT
	I0318 11:30:43.676196    4320 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-611000","uid":"5a1dbdd1-bab4-4304-98de-87f701ddf290","resourceVersion":"490","creationTimestamp":"2024-03-18T11:28:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-611000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"functional-611000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T11_28_26_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-03-18T11:28:22Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0318 11:30:43.676865    4320 pod_ready.go:92] pod "coredns-5dd5756b68-sxq4l" in "kube-system" namespace has status "Ready":"True"
	I0318 11:30:43.676865    4320 pod_ready.go:81] duration metric: took 395.5612ms for pod "coredns-5dd5756b68-sxq4l" in "kube-system" namespace to be "Ready" ...
	I0318 11:30:43.676865    4320 pod_ready.go:78] waiting up to 6m0s for pod "etcd-functional-611000" in "kube-system" namespace to be "Ready" ...
	I0318 11:30:43.865247    4320 request.go:629] Waited for 188.2143ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.129.196:8441/api/v1/namespaces/kube-system/pods/etcd-functional-611000
	I0318 11:30:43.865402    4320 round_trippers.go:463] GET https://172.30.129.196:8441/api/v1/namespaces/kube-system/pods/etcd-functional-611000
	I0318 11:30:43.865402    4320 round_trippers.go:469] Request Headers:
	I0318 11:30:43.865402    4320 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:30:43.865402    4320 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:30:43.866090    4320 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0318 11:30:43.869321    4320 round_trippers.go:577] Response Headers:
	I0318 11:30:43.869390    4320 round_trippers.go:580]     Content-Type: application/json
	I0318 11:30:43.869390    4320 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: becf916a-7215-4b3e-9221-0fb1332b3065
	I0318 11:30:43.869390    4320 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d298c40d-3d5a-4ddc-81ba-7712fa7be4af
	I0318 11:30:43.869390    4320 round_trippers.go:580]     Date: Mon, 18 Mar 2024 11:30:43 GMT
	I0318 11:30:43.869390    4320 round_trippers.go:580]     Audit-Id: 6d85f79e-7e97-4eea-a58e-6a6851430f30
	I0318 11:30:43.869390    4320 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 11:30:43.869390    4320 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-611000","namespace":"kube-system","uid":"0b98738c-76ab-4e2c-aa64-eadc49f8338a","resourceVersion":"573","creationTimestamp":"2024-03-18T11:28:23Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.30.129.196:2379","kubernetes.io/config.hash":"207864df90caaf3c6513f9904158da66","kubernetes.io/config.mirror":"207864df90caaf3c6513f9904158da66","kubernetes.io/config.seen":"2024-03-18T11:28:17.673580165Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-611000","uid":"5a1dbdd1-bab4-4304-98de-87f701ddf290","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T11:28:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertis
e-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/con [truncated 6222 chars]
	I0318 11:30:44.060413    4320 request.go:629] Waited for 190.2819ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.129.196:8441/api/v1/nodes/functional-611000
	I0318 11:30:44.060641    4320 round_trippers.go:463] GET https://172.30.129.196:8441/api/v1/nodes/functional-611000
	I0318 11:30:44.060641    4320 round_trippers.go:469] Request Headers:
	I0318 11:30:44.060641    4320 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:30:44.060641    4320 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:30:44.069974    4320 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0318 11:30:44.069974    4320 round_trippers.go:577] Response Headers:
	I0318 11:30:44.069974    4320 round_trippers.go:580]     Audit-Id: bc3990d4-9575-49c5-b382-313e93c2736b
	I0318 11:30:44.069974    4320 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 11:30:44.069974    4320 round_trippers.go:580]     Content-Type: application/json
	I0318 11:30:44.069974    4320 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: becf916a-7215-4b3e-9221-0fb1332b3065
	I0318 11:30:44.069974    4320 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d298c40d-3d5a-4ddc-81ba-7712fa7be4af
	I0318 11:30:44.069974    4320 round_trippers.go:580]     Date: Mon, 18 Mar 2024 11:30:44 GMT
	I0318 11:30:44.070546    4320 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-611000","uid":"5a1dbdd1-bab4-4304-98de-87f701ddf290","resourceVersion":"490","creationTimestamp":"2024-03-18T11:28:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-611000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"functional-611000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T11_28_26_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-03-18T11:28:22Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0318 11:30:44.071000    4320 pod_ready.go:92] pod "etcd-functional-611000" in "kube-system" namespace has status "Ready":"True"
	I0318 11:30:44.071000    4320 pod_ready.go:81] duration metric: took 394.1325ms for pod "etcd-functional-611000" in "kube-system" namespace to be "Ready" ...
	I0318 11:30:44.071000    4320 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-functional-611000" in "kube-system" namespace to be "Ready" ...
	I0318 11:30:44.266822    4320 request.go:629] Waited for 195.8198ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.129.196:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-611000
	I0318 11:30:44.267061    4320 round_trippers.go:463] GET https://172.30.129.196:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-611000
	I0318 11:30:44.267061    4320 round_trippers.go:469] Request Headers:
	I0318 11:30:44.267061    4320 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:30:44.267061    4320 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:30:44.268249    4320 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0318 11:30:44.271376    4320 round_trippers.go:577] Response Headers:
	I0318 11:30:44.271376    4320 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d298c40d-3d5a-4ddc-81ba-7712fa7be4af
	I0318 11:30:44.271376    4320 round_trippers.go:580]     Date: Mon, 18 Mar 2024 11:30:44 GMT
	I0318 11:30:44.271376    4320 round_trippers.go:580]     Audit-Id: 197972a0-251c-4799-9289-eb50f49d20e0
	I0318 11:30:44.271376    4320 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 11:30:44.271376    4320 round_trippers.go:580]     Content-Type: application/json
	I0318 11:30:44.271376    4320 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: becf916a-7215-4b3e-9221-0fb1332b3065
	I0318 11:30:44.271741    4320 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-611000","namespace":"kube-system","uid":"f4f6d73c-451d-4848-a7e4-0f9097554e64","resourceVersion":"562","creationTimestamp":"2024-03-18T11:28:26Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.30.129.196:8441","kubernetes.io/config.hash":"314bb12752b137cfc7117682d46b05ed","kubernetes.io/config.mirror":"314bb12752b137cfc7117682d46b05ed","kubernetes.io/config.seen":"2024-03-18T11:28:26.492942527Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-611000","uid":"5a1dbdd1-bab4-4304-98de-87f701ddf290","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T11:28:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.
kubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernet [truncated 7756 chars]
	I0318 11:30:44.471955    4320 request.go:629] Waited for 199.4519ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.129.196:8441/api/v1/nodes/functional-611000
	I0318 11:30:44.471955    4320 round_trippers.go:463] GET https://172.30.129.196:8441/api/v1/nodes/functional-611000
	I0318 11:30:44.471955    4320 round_trippers.go:469] Request Headers:
	I0318 11:30:44.471955    4320 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:30:44.471955    4320 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:30:44.472691    4320 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0318 11:30:44.472691    4320 round_trippers.go:577] Response Headers:
	I0318 11:30:44.476437    4320 round_trippers.go:580]     Audit-Id: dd43b4ef-b6e8-4ceb-9000-09d09df71bd2
	I0318 11:30:44.476437    4320 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 11:30:44.476437    4320 round_trippers.go:580]     Content-Type: application/json
	I0318 11:30:44.476437    4320 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: becf916a-7215-4b3e-9221-0fb1332b3065
	I0318 11:30:44.476437    4320 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d298c40d-3d5a-4ddc-81ba-7712fa7be4af
	I0318 11:30:44.476437    4320 round_trippers.go:580]     Date: Mon, 18 Mar 2024 11:30:44 GMT
	I0318 11:30:44.476661    4320 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-611000","uid":"5a1dbdd1-bab4-4304-98de-87f701ddf290","resourceVersion":"490","creationTimestamp":"2024-03-18T11:28:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-611000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"functional-611000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T11_28_26_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-03-18T11:28:22Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0318 11:30:44.477249    4320 pod_ready.go:92] pod "kube-apiserver-functional-611000" in "kube-system" namespace has status "Ready":"True"
	I0318 11:30:44.477329    4320 pod_ready.go:81] duration metric: took 406.2461ms for pod "kube-apiserver-functional-611000" in "kube-system" namespace to be "Ready" ...
	I0318 11:30:44.477329    4320 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-functional-611000" in "kube-system" namespace to be "Ready" ...
	I0318 11:30:44.665880    4320 request.go:629] Waited for 188.2621ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.129.196:8441/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-611000
	I0318 11:30:44.666172    4320 round_trippers.go:463] GET https://172.30.129.196:8441/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-611000
	I0318 11:30:44.666172    4320 round_trippers.go:469] Request Headers:
	I0318 11:30:44.666172    4320 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:30:44.666172    4320 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:30:44.666878    4320 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0318 11:30:44.666878    4320 round_trippers.go:577] Response Headers:
	I0318 11:30:44.670271    4320 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d298c40d-3d5a-4ddc-81ba-7712fa7be4af
	I0318 11:30:44.670271    4320 round_trippers.go:580]     Date: Mon, 18 Mar 2024 11:30:44 GMT
	I0318 11:30:44.670271    4320 round_trippers.go:580]     Audit-Id: 0ce7db06-3963-48a5-ad4f-7c1f8bd07177
	I0318 11:30:44.670271    4320 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 11:30:44.670271    4320 round_trippers.go:580]     Content-Type: application/json
	I0318 11:30:44.670271    4320 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: becf916a-7215-4b3e-9221-0fb1332b3065
	I0318 11:30:44.671034    4320 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-functional-611000","namespace":"kube-system","uid":"36a3f019-501a-46c9-9431-24e94f6c0ea9","resourceVersion":"565","creationTimestamp":"2024-03-18T11:28:26Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"ec6a462cf2733683b1a6364ff35ac701","kubernetes.io/config.mirror":"ec6a462cf2733683b1a6364ff35ac701","kubernetes.io/config.seen":"2024-03-18T11:28:26.492943927Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-611000","uid":"5a1dbdd1-bab4-4304-98de-87f701ddf290","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T11:28:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes
.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{"." [truncated 7433 chars]
	I0318 11:30:44.857767    4320 request.go:629] Waited for 185.8086ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.129.196:8441/api/v1/nodes/functional-611000
	I0318 11:30:44.857937    4320 round_trippers.go:463] GET https://172.30.129.196:8441/api/v1/nodes/functional-611000
	I0318 11:30:44.857937    4320 round_trippers.go:469] Request Headers:
	I0318 11:30:44.858021    4320 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:30:44.858043    4320 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:30:44.858734    4320 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0318 11:30:44.858734    4320 round_trippers.go:577] Response Headers:
	I0318 11:30:44.858734    4320 round_trippers.go:580]     Audit-Id: cbdf72d6-3283-4ee3-8b5a-680174f720bd
	I0318 11:30:44.858734    4320 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 11:30:44.858734    4320 round_trippers.go:580]     Content-Type: application/json
	I0318 11:30:44.862021    4320 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: becf916a-7215-4b3e-9221-0fb1332b3065
	I0318 11:30:44.862021    4320 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d298c40d-3d5a-4ddc-81ba-7712fa7be4af
	I0318 11:30:44.862021    4320 round_trippers.go:580]     Date: Mon, 18 Mar 2024 11:30:44 GMT
	I0318 11:30:44.862296    4320 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-611000","uid":"5a1dbdd1-bab4-4304-98de-87f701ddf290","resourceVersion":"490","creationTimestamp":"2024-03-18T11:28:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-611000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"functional-611000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T11_28_26_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-03-18T11:28:22Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0318 11:30:44.862766    4320 pod_ready.go:92] pod "kube-controller-manager-functional-611000" in "kube-system" namespace has status "Ready":"True"
	I0318 11:30:44.862828    4320 pod_ready.go:81] duration metric: took 385.4339ms for pod "kube-controller-manager-functional-611000" in "kube-system" namespace to be "Ready" ...
	I0318 11:30:44.862911    4320 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-sh9ps" in "kube-system" namespace to be "Ready" ...
	I0318 11:30:45.004823    4320 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:30:45.017115    4320 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:30:45.004823    4320 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:30:45.017247    4320 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:30:45.020651    4320 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 11:30:45.017671    4320 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	I0318 11:30:45.021957    4320 kapi.go:59] client config for functional-611000: &rest.Config{Host:"https://172.30.129.196:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\functional-611000\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\functional-611000\\client.key", CAFile:"C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil),
CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1e8b2e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0318 11:30:45.023206    4320 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0318 11:30:45.023206    4320 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0318 11:30:45.024012    4320 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-611000 ).state
	I0318 11:30:45.024779    4320 addons.go:234] Setting addon default-storageclass=true in "functional-611000"
	W0318 11:30:45.024779    4320 addons.go:243] addon default-storageclass should already be in state true
	I0318 11:30:45.025598    4320 host.go:66] Checking if "functional-611000" exists ...
	I0318 11:30:45.026609    4320 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-611000 ).state
	I0318 11:30:45.071395    4320 request.go:629] Waited for 208.2495ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.129.196:8441/api/v1/namespaces/kube-system/pods/kube-proxy-sh9ps
	I0318 11:30:45.071395    4320 round_trippers.go:463] GET https://172.30.129.196:8441/api/v1/namespaces/kube-system/pods/kube-proxy-sh9ps
	I0318 11:30:45.071395    4320 round_trippers.go:469] Request Headers:
	I0318 11:30:45.071395    4320 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:30:45.071395    4320 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:30:45.071798    4320 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0318 11:30:45.071798    4320 round_trippers.go:577] Response Headers:
	I0318 11:30:45.075811    4320 round_trippers.go:580]     Audit-Id: 32a9180f-154b-4a75-8a14-3bccffb3207d
	I0318 11:30:45.075811    4320 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 11:30:45.075811    4320 round_trippers.go:580]     Content-Type: application/json
	I0318 11:30:45.075811    4320 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: becf916a-7215-4b3e-9221-0fb1332b3065
	I0318 11:30:45.075811    4320 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d298c40d-3d5a-4ddc-81ba-7712fa7be4af
	I0318 11:30:45.075811    4320 round_trippers.go:580]     Date: Mon, 18 Mar 2024 11:30:45 GMT
	I0318 11:30:45.076940    4320 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-sh9ps","generateName":"kube-proxy-","namespace":"kube-system","uid":"da5ff102-6ce0-4f7c-bd28-923b0dbd135f","resourceVersion":"506","creationTimestamp":"2024-03-18T11:28:38Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"d17d45d8-ab63-45f7-87a3-522ec56c97bd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T11:28:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d17d45d8-ab63-45f7-87a3-522ec56c97bd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5884 chars]
	I0318 11:30:45.265945    4320 request.go:629] Waited for 188.4032ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.129.196:8441/api/v1/nodes/functional-611000
	I0318 11:30:45.266156    4320 round_trippers.go:463] GET https://172.30.129.196:8441/api/v1/nodes/functional-611000
	I0318 11:30:45.266156    4320 round_trippers.go:469] Request Headers:
	I0318 11:30:45.266156    4320 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:30:45.266156    4320 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:30:45.271017    4320 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 11:30:45.271191    4320 round_trippers.go:577] Response Headers:
	I0318 11:30:45.271191    4320 round_trippers.go:580]     Audit-Id: 746172b4-0432-49fd-812c-3cb6395141af
	I0318 11:30:45.271222    4320 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 11:30:45.271222    4320 round_trippers.go:580]     Content-Type: application/json
	I0318 11:30:45.271222    4320 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: becf916a-7215-4b3e-9221-0fb1332b3065
	I0318 11:30:45.271222    4320 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d298c40d-3d5a-4ddc-81ba-7712fa7be4af
	I0318 11:30:45.271222    4320 round_trippers.go:580]     Date: Mon, 18 Mar 2024 11:30:45 GMT
	I0318 11:30:45.271381    4320 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-611000","uid":"5a1dbdd1-bab4-4304-98de-87f701ddf290","resourceVersion":"490","creationTimestamp":"2024-03-18T11:28:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-611000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"functional-611000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T11_28_26_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-03-18T11:28:22Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0318 11:30:45.272055    4320 pod_ready.go:92] pod "kube-proxy-sh9ps" in "kube-system" namespace has status "Ready":"True"
	I0318 11:30:45.272117    4320 pod_ready.go:81] duration metric: took 409.1755ms for pod "kube-proxy-sh9ps" in "kube-system" namespace to be "Ready" ...
	I0318 11:30:45.272117    4320 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-functional-611000" in "kube-system" namespace to be "Ready" ...
	I0318 11:30:45.461864    4320 request.go:629] Waited for 189.4613ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.129.196:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-611000
	I0318 11:30:45.461943    4320 round_trippers.go:463] GET https://172.30.129.196:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-611000
	I0318 11:30:45.462073    4320 round_trippers.go:469] Request Headers:
	I0318 11:30:45.462073    4320 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:30:45.462073    4320 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:30:45.463021    4320 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0318 11:30:45.466846    4320 round_trippers.go:577] Response Headers:
	I0318 11:30:45.466846    4320 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: becf916a-7215-4b3e-9221-0fb1332b3065
	I0318 11:30:45.466946    4320 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d298c40d-3d5a-4ddc-81ba-7712fa7be4af
	I0318 11:30:45.466946    4320 round_trippers.go:580]     Date: Mon, 18 Mar 2024 11:30:45 GMT
	I0318 11:30:45.466946    4320 round_trippers.go:580]     Audit-Id: 389b29f3-885c-44d8-8fab-bcfe27e7c34d
	I0318 11:30:45.466946    4320 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 11:30:45.467050    4320 round_trippers.go:580]     Content-Type: application/json
	I0318 11:30:45.467363    4320 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-functional-611000","namespace":"kube-system","uid":"460cc842-8959-4a52-9ebf-79401a8fd0eb","resourceVersion":"561","creationTimestamp":"2024-03-18T11:28:26Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"e6f657f0826e03a27b1cb755d0484b8b","kubernetes.io/config.mirror":"e6f657f0826e03a27b1cb755d0484b8b","kubernetes.io/config.seen":"2024-03-18T11:28:26.492995520Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-611000","uid":"5a1dbdd1-bab4-4304-98de-87f701ddf290","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T11:28:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{
},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component": [truncated 4911 chars]
	I0318 11:30:45.657156    4320 request.go:629] Waited for 188.7645ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.129.196:8441/api/v1/nodes/functional-611000
	I0318 11:30:45.657378    4320 round_trippers.go:463] GET https://172.30.129.196:8441/api/v1/nodes/functional-611000
	I0318 11:30:45.657378    4320 round_trippers.go:469] Request Headers:
	I0318 11:30:45.657378    4320 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:30:45.657378    4320 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:30:45.658029    4320 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0318 11:30:45.661236    4320 round_trippers.go:577] Response Headers:
	I0318 11:30:45.661236    4320 round_trippers.go:580]     Audit-Id: 179fbc8e-bc77-45cf-8c13-c1a82602cbd7
	I0318 11:30:45.661236    4320 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 11:30:45.661236    4320 round_trippers.go:580]     Content-Type: application/json
	I0318 11:30:45.661236    4320 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: becf916a-7215-4b3e-9221-0fb1332b3065
	I0318 11:30:45.661236    4320 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d298c40d-3d5a-4ddc-81ba-7712fa7be4af
	I0318 11:30:45.661236    4320 round_trippers.go:580]     Date: Mon, 18 Mar 2024 11:30:45 GMT
	I0318 11:30:45.661546    4320 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-611000","uid":"5a1dbdd1-bab4-4304-98de-87f701ddf290","resourceVersion":"490","creationTimestamp":"2024-03-18T11:28:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-611000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"functional-611000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T11_28_26_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-03-18T11:28:22Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0318 11:30:45.661546    4320 pod_ready.go:92] pod "kube-scheduler-functional-611000" in "kube-system" namespace has status "Ready":"True"
	I0318 11:30:45.661546    4320 pod_ready.go:81] duration metric: took 389.4265ms for pod "kube-scheduler-functional-611000" in "kube-system" namespace to be "Ready" ...
	I0318 11:30:45.661546    4320 pod_ready.go:38] duration metric: took 2.4070423s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 11:30:45.661546    4320 api_server.go:52] waiting for apiserver process to appear ...
	I0318 11:30:45.674102    4320 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 11:30:45.696416    4320 command_runner.go:130] > 6872
	I0318 11:30:45.696509    4320 api_server.go:72] duration metric: took 2.7475863s to wait for apiserver process to appear ...
	I0318 11:30:45.696509    4320 api_server.go:88] waiting for apiserver healthz status ...
	I0318 11:30:45.696509    4320 api_server.go:253] Checking apiserver healthz at https://172.30.129.196:8441/healthz ...
	I0318 11:30:45.703461    4320 api_server.go:279] https://172.30.129.196:8441/healthz returned 200:
	ok
	I0318 11:30:45.704663    4320 round_trippers.go:463] GET https://172.30.129.196:8441/version
	I0318 11:30:45.704710    4320 round_trippers.go:469] Request Headers:
	I0318 11:30:45.704710    4320 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:30:45.704753    4320 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:30:45.705472    4320 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0318 11:30:45.707109    4320 round_trippers.go:577] Response Headers:
	I0318 11:30:45.707177    4320 round_trippers.go:580]     Content-Type: application/json
	I0318 11:30:45.707177    4320 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: becf916a-7215-4b3e-9221-0fb1332b3065
	I0318 11:30:45.707177    4320 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d298c40d-3d5a-4ddc-81ba-7712fa7be4af
	I0318 11:30:45.707177    4320 round_trippers.go:580]     Content-Length: 264
	I0318 11:30:45.707177    4320 round_trippers.go:580]     Date: Mon, 18 Mar 2024 11:30:45 GMT
	I0318 11:30:45.707177    4320 round_trippers.go:580]     Audit-Id: 66390ff8-e7ae-44ad-8673-34163914c8b6
	I0318 11:30:45.707177    4320 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 11:30:45.707177    4320 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.4",
	  "gitCommit": "bae2c62678db2b5053817bc97181fcc2e8388103",
	  "gitTreeState": "clean",
	  "buildDate": "2023-11-15T16:48:54Z",
	  "goVersion": "go1.20.11",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0318 11:30:45.707177    4320 api_server.go:141] control plane version: v1.28.4
	I0318 11:30:45.707177    4320 api_server.go:131] duration metric: took 10.6677ms to wait for apiserver health ...
	I0318 11:30:45.707177    4320 system_pods.go:43] waiting for kube-system pods to appear ...
	I0318 11:30:45.864856    4320 request.go:629] Waited for 157.531ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.129.196:8441/api/v1/namespaces/kube-system/pods
	I0318 11:30:45.864914    4320 round_trippers.go:463] GET https://172.30.129.196:8441/api/v1/namespaces/kube-system/pods
	I0318 11:30:45.864914    4320 round_trippers.go:469] Request Headers:
	I0318 11:30:45.864914    4320 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:30:45.864914    4320 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:30:45.869769    4320 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0318 11:30:45.869769    4320 round_trippers.go:577] Response Headers:
	I0318 11:30:45.869769    4320 round_trippers.go:580]     Audit-Id: 922f1d8d-e450-43bc-825e-20cd76a20320
	I0318 11:30:45.869769    4320 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 11:30:45.869769    4320 round_trippers.go:580]     Content-Type: application/json
	I0318 11:30:45.869769    4320 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: becf916a-7215-4b3e-9221-0fb1332b3065
	I0318 11:30:45.869769    4320 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d298c40d-3d5a-4ddc-81ba-7712fa7be4af
	I0318 11:30:45.869889    4320 round_trippers.go:580]     Date: Mon, 18 Mar 2024 11:30:45 GMT
	I0318 11:30:45.870839    4320 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"573"},"items":[{"metadata":{"name":"coredns-5dd5756b68-sxq4l","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"5ebb5db0-da82-4fc4-bc0e-3e680e05723f","resourceVersion":"558","creationTimestamp":"2024-03-18T11:28:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"1c32b30a-52a3-480f-9cf3-8d149b086fd4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T11:28:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1c32b30a-52a3-480f-9cf3-8d149b086fd4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 48873 chars]
	I0318 11:30:45.873198    4320 system_pods.go:59] 7 kube-system pods found
	I0318 11:30:45.873734    4320 system_pods.go:61] "coredns-5dd5756b68-sxq4l" [5ebb5db0-da82-4fc4-bc0e-3e680e05723f] Running
	I0318 11:30:45.873734    4320 system_pods.go:61] "etcd-functional-611000" [0b98738c-76ab-4e2c-aa64-eadc49f8338a] Running
	I0318 11:30:45.873734    4320 system_pods.go:61] "kube-apiserver-functional-611000" [f4f6d73c-451d-4848-a7e4-0f9097554e64] Running
	I0318 11:30:45.873804    4320 system_pods.go:61] "kube-controller-manager-functional-611000" [36a3f019-501a-46c9-9431-24e94f6c0ea9] Running
	I0318 11:30:45.873804    4320 system_pods.go:61] "kube-proxy-sh9ps" [da5ff102-6ce0-4f7c-bd28-923b0dbd135f] Running
	I0318 11:30:45.873804    4320 system_pods.go:61] "kube-scheduler-functional-611000" [460cc842-8959-4a52-9ebf-79401a8fd0eb] Running
	I0318 11:30:45.873804    4320 system_pods.go:61] "storage-provisioner" [ea9b5188-5461-42f3-af2a-1c0652841cdd] Running
	I0318 11:30:45.873804    4320 system_pods.go:74] duration metric: took 166.6255ms to wait for pod list to return data ...
	I0318 11:30:45.873804    4320 default_sa.go:34] waiting for default service account to be created ...
	I0318 11:30:46.057640    4320 request.go:629] Waited for 183.6456ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.129.196:8441/api/v1/namespaces/default/serviceaccounts
	I0318 11:30:46.057755    4320 round_trippers.go:463] GET https://172.30.129.196:8441/api/v1/namespaces/default/serviceaccounts
	I0318 11:30:46.057879    4320 round_trippers.go:469] Request Headers:
	I0318 11:30:46.057879    4320 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:30:46.057879    4320 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:30:46.058642    4320 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0318 11:30:46.062067    4320 round_trippers.go:577] Response Headers:
	I0318 11:30:46.062067    4320 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: becf916a-7215-4b3e-9221-0fb1332b3065
	I0318 11:30:46.062067    4320 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d298c40d-3d5a-4ddc-81ba-7712fa7be4af
	I0318 11:30:46.062067    4320 round_trippers.go:580]     Content-Length: 261
	I0318 11:30:46.062067    4320 round_trippers.go:580]     Date: Mon, 18 Mar 2024 11:30:46 GMT
	I0318 11:30:46.062067    4320 round_trippers.go:580]     Audit-Id: 91d625a1-0e41-4f5e-98b7-dc77d1ae3f43
	I0318 11:30:46.062067    4320 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 11:30:46.062067    4320 round_trippers.go:580]     Content-Type: application/json
	I0318 11:30:46.062067    4320 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"573"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"905e8a29-b893-460a-9543-b43a09687d77","resourceVersion":"341","creationTimestamp":"2024-03-18T11:28:38Z"}}]}
	I0318 11:30:46.062373    4320 default_sa.go:45] found service account: "default"
	I0318 11:30:46.062373    4320 default_sa.go:55] duration metric: took 188.5675ms for default service account to be created ...
	I0318 11:30:46.062373    4320 system_pods.go:116] waiting for k8s-apps to be running ...
	I0318 11:30:46.266462    4320 request.go:629] Waited for 203.7124ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.129.196:8441/api/v1/namespaces/kube-system/pods
	I0318 11:30:46.266593    4320 round_trippers.go:463] GET https://172.30.129.196:8441/api/v1/namespaces/kube-system/pods
	I0318 11:30:46.266593    4320 round_trippers.go:469] Request Headers:
	I0318 11:30:46.266593    4320 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:30:46.266593    4320 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:30:46.267268    4320 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0318 11:30:46.267268    4320 round_trippers.go:577] Response Headers:
	I0318 11:30:46.272150    4320 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 11:30:46.272150    4320 round_trippers.go:580]     Content-Type: application/json
	I0318 11:30:46.272150    4320 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: becf916a-7215-4b3e-9221-0fb1332b3065
	I0318 11:30:46.272150    4320 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d298c40d-3d5a-4ddc-81ba-7712fa7be4af
	I0318 11:30:46.272150    4320 round_trippers.go:580]     Date: Mon, 18 Mar 2024 11:30:46 GMT
	I0318 11:30:46.272150    4320 round_trippers.go:580]     Audit-Id: 1f69ca36-7926-4654-bf69-139a560ec62d
	I0318 11:30:46.272425    4320 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"573"},"items":[{"metadata":{"name":"coredns-5dd5756b68-sxq4l","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"5ebb5db0-da82-4fc4-bc0e-3e680e05723f","resourceVersion":"558","creationTimestamp":"2024-03-18T11:28:39Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"1c32b30a-52a3-480f-9cf3-8d149b086fd4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T11:28:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1c32b30a-52a3-480f-9cf3-8d149b086fd4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 48873 chars]
	I0318 11:30:46.274930    4320 system_pods.go:86] 7 kube-system pods found
	I0318 11:30:46.274930    4320 system_pods.go:89] "coredns-5dd5756b68-sxq4l" [5ebb5db0-da82-4fc4-bc0e-3e680e05723f] Running
	I0318 11:30:46.274930    4320 system_pods.go:89] "etcd-functional-611000" [0b98738c-76ab-4e2c-aa64-eadc49f8338a] Running
	I0318 11:30:46.274930    4320 system_pods.go:89] "kube-apiserver-functional-611000" [f4f6d73c-451d-4848-a7e4-0f9097554e64] Running
	I0318 11:30:46.274930    4320 system_pods.go:89] "kube-controller-manager-functional-611000" [36a3f019-501a-46c9-9431-24e94f6c0ea9] Running
	I0318 11:30:46.274930    4320 system_pods.go:89] "kube-proxy-sh9ps" [da5ff102-6ce0-4f7c-bd28-923b0dbd135f] Running
	I0318 11:30:46.274930    4320 system_pods.go:89] "kube-scheduler-functional-611000" [460cc842-8959-4a52-9ebf-79401a8fd0eb] Running
	I0318 11:30:46.274930    4320 system_pods.go:89] "storage-provisioner" [ea9b5188-5461-42f3-af2a-1c0652841cdd] Running
	I0318 11:30:46.274930    4320 system_pods.go:126] duration metric: took 212.5553ms to wait for k8s-apps to be running ...
	I0318 11:30:46.274930    4320 system_svc.go:44] waiting for kubelet service to be running ....
	I0318 11:30:46.287643    4320 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 11:30:46.311347    4320 system_svc.go:56] duration metric: took 36.4174ms WaitForService to wait for kubelet
	I0318 11:30:46.311415    4320 kubeadm.go:576] duration metric: took 3.3625126s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 11:30:46.311415    4320 node_conditions.go:102] verifying NodePressure condition ...
	I0318 11:30:46.461685    4320 request.go:629] Waited for 150.1187ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.129.196:8441/api/v1/nodes
	I0318 11:30:46.461937    4320 round_trippers.go:463] GET https://172.30.129.196:8441/api/v1/nodes
	I0318 11:30:46.461937    4320 round_trippers.go:469] Request Headers:
	I0318 11:30:46.461937    4320 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:30:46.462009    4320 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:30:46.462260    4320 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0318 11:30:46.466198    4320 round_trippers.go:577] Response Headers:
	I0318 11:30:46.466304    4320 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 11:30:46.466417    4320 round_trippers.go:580]     Content-Type: application/json
	I0318 11:30:46.466585    4320 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: becf916a-7215-4b3e-9221-0fb1332b3065
	I0318 11:30:46.466745    4320 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d298c40d-3d5a-4ddc-81ba-7712fa7be4af
	I0318 11:30:46.466862    4320 round_trippers.go:580]     Date: Mon, 18 Mar 2024 11:30:46 GMT
	I0318 11:30:46.466968    4320 round_trippers.go:580]     Audit-Id: 82b4d9b7-1d84-4879-bc0e-9d1cbcb85a00
	I0318 11:30:46.467059    4320 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"573"},"items":[{"metadata":{"name":"functional-611000","uid":"5a1dbdd1-bab4-4304-98de-87f701ddf290","resourceVersion":"490","creationTimestamp":"2024-03-18T11:28:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-611000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"functional-611000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T11_28_26_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedF
ields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","ti [truncated 4840 chars]
	I0318 11:30:46.467659    4320 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0318 11:30:46.467659    4320 node_conditions.go:123] node cpu capacity is 2
	I0318 11:30:46.467659    4320 node_conditions.go:105] duration metric: took 156.2423ms to run NodePressure ...
	I0318 11:30:46.467659    4320 start.go:240] waiting for startup goroutines ...
	I0318 11:30:47.093363    4320 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:30:47.093363    4320 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:30:47.093363    4320 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:30:47.105401    4320 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:30:47.105401    4320 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-611000 ).networkadapters[0]).ipaddresses[0]
	I0318 11:30:47.105570    4320 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0318 11:30:47.105614    4320 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0318 11:30:47.105711    4320 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-611000 ).state
	I0318 11:30:49.175606    4320 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:30:49.175817    4320 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:30:49.175893    4320 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-611000 ).networkadapters[0]).ipaddresses[0]
	I0318 11:30:49.564146    4320 main.go:141] libmachine: [stdout =====>] : 172.30.129.196
	
	I0318 11:30:49.564212    4320 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:30:49.564212    4320 sshutil.go:53] new ssh client: &{IP:172.30.129.196 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\functional-611000\id_rsa Username:docker}
	I0318 11:30:49.689285    4320 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0318 11:30:50.584647    4320 command_runner.go:130] > serviceaccount/storage-provisioner unchanged
	I0318 11:30:50.584722    4320 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner unchanged
	I0318 11:30:50.584768    4320 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner unchanged
	I0318 11:30:50.584768    4320 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner unchanged
	I0318 11:30:50.584803    4320 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath unchanged
	I0318 11:30:50.584803    4320 command_runner.go:130] > pod/storage-provisioner configured
	I0318 11:30:51.533644    4320 main.go:141] libmachine: [stdout =====>] : 172.30.129.196
	
	I0318 11:30:51.533644    4320 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:30:51.544545    4320 sshutil.go:53] new ssh client: &{IP:172.30.129.196 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\functional-611000\id_rsa Username:docker}
	I0318 11:30:51.671055    4320 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0318 11:30:51.859626    4320 command_runner.go:130] > storageclass.storage.k8s.io/standard unchanged
	I0318 11:30:51.860044    4320 round_trippers.go:463] GET https://172.30.129.196:8441/apis/storage.k8s.io/v1/storageclasses
	I0318 11:30:51.860126    4320 round_trippers.go:469] Request Headers:
	I0318 11:30:51.860157    4320 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:30:51.860220    4320 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:30:51.861003    4320 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0318 11:30:51.863564    4320 round_trippers.go:577] Response Headers:
	I0318 11:30:51.863603    4320 round_trippers.go:580]     Audit-Id: 787a1d24-5411-4a89-9c14-e3da13978fe1
	I0318 11:30:51.863603    4320 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 11:30:51.863603    4320 round_trippers.go:580]     Content-Type: application/json
	I0318 11:30:51.863603    4320 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: becf916a-7215-4b3e-9221-0fb1332b3065
	I0318 11:30:51.863603    4320 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d298c40d-3d5a-4ddc-81ba-7712fa7be4af
	I0318 11:30:51.863603    4320 round_trippers.go:580]     Content-Length: 1273
	I0318 11:30:51.863603    4320 round_trippers.go:580]     Date: Mon, 18 Mar 2024 11:30:51 GMT
	I0318 11:30:51.863676    4320 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"580"},"items":[{"metadata":{"name":"standard","uid":"768be2fd-84ba-4a97-8089-49b62e6a039f","resourceVersion":"428","creationTimestamp":"2024-03-18T11:28:48Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-03-18T11:28:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kuberne
tes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I0318 11:30:51.864370    4320 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"768be2fd-84ba-4a97-8089-49b62e6a039f","resourceVersion":"428","creationTimestamp":"2024-03-18T11:28:48Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-03-18T11:28:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclas
s.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0318 11:30:51.864370    4320 round_trippers.go:463] PUT https://172.30.129.196:8441/apis/storage.k8s.io/v1/storageclasses/standard
	I0318 11:30:51.864370    4320 round_trippers.go:469] Request Headers:
	I0318 11:30:51.864370    4320 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:30:51.864370    4320 round_trippers.go:473]     Content-Type: application/json
	I0318 11:30:51.864370    4320 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:30:51.865108    4320 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0318 11:30:51.865108    4320 round_trippers.go:577] Response Headers:
	I0318 11:30:51.865108    4320 round_trippers.go:580]     Audit-Id: 61071c90-9092-4c01-9714-1db6ff47e21e
	I0318 11:30:51.865108    4320 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 11:30:51.868667    4320 round_trippers.go:580]     Content-Type: application/json
	I0318 11:30:51.868667    4320 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: becf916a-7215-4b3e-9221-0fb1332b3065
	I0318 11:30:51.868667    4320 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d298c40d-3d5a-4ddc-81ba-7712fa7be4af
	I0318 11:30:51.868733    4320 round_trippers.go:580]     Content-Length: 1220
	I0318 11:30:51.868733    4320 round_trippers.go:580]     Date: Mon, 18 Mar 2024 11:30:51 GMT
	I0318 11:30:51.868837    4320 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"768be2fd-84ba-4a97-8089-49b62e6a039f","resourceVersion":"428","creationTimestamp":"2024-03-18T11:28:48Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-03-18T11:28:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storagecla
ss.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0318 11:30:51.873907    4320 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0318 11:30:51.877874    4320 addons.go:505] duration metric: took 8.9289979s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0318 11:30:51.878037    4320 start.go:245] waiting for cluster config update ...
	I0318 11:30:51.878037    4320 start.go:254] writing updated cluster config ...
	I0318 11:30:51.883941    4320 ssh_runner.go:195] Run: rm -f paused
	I0318 11:30:52.035158    4320 start.go:600] kubectl: 1.29.3, cluster: 1.28.4 (minor skew: 1)
	I0318 11:30:52.037916    4320 out.go:177] * Done! kubectl is now configured to use "functional-611000" cluster and "default" namespace by default
	
	
	==> Docker <==
	Mar 18 11:30:25 functional-611000 dockerd[5172]: time="2024-03-18T11:30:25.175919771Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 18 11:30:25 functional-611000 dockerd[5172]: time="2024-03-18T11:30:25.176225938Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 18 11:30:27 functional-611000 cri-dockerd[5453]: time="2024-03-18T11:30:27Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	Mar 18 11:30:29 functional-611000 dockerd[5172]: time="2024-03-18T11:30:29.281334021Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 18 11:30:29 functional-611000 dockerd[5172]: time="2024-03-18T11:30:29.281403515Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 18 11:30:29 functional-611000 dockerd[5172]: time="2024-03-18T11:30:29.281419413Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 18 11:30:29 functional-611000 dockerd[5172]: time="2024-03-18T11:30:29.281511005Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 18 11:30:29 functional-611000 dockerd[5172]: time="2024-03-18T11:30:29.426098070Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 18 11:30:29 functional-611000 dockerd[5172]: time="2024-03-18T11:30:29.426184562Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 18 11:30:29 functional-611000 dockerd[5172]: time="2024-03-18T11:30:29.426200261Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 18 11:30:29 functional-611000 dockerd[5172]: time="2024-03-18T11:30:29.426583227Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 18 11:30:29 functional-611000 dockerd[5172]: time="2024-03-18T11:30:29.458392448Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 18 11:30:29 functional-611000 dockerd[5172]: time="2024-03-18T11:30:29.458513737Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 18 11:30:29 functional-611000 dockerd[5172]: time="2024-03-18T11:30:29.458549334Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 18 11:30:29 functional-611000 dockerd[5172]: time="2024-03-18T11:30:29.458723919Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 18 11:30:29 functional-611000 cri-dockerd[5453]: time="2024-03-18T11:30:29Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/eb340e48cff83a7db9c5a2caf85bff84abe695152c810d886cda0f6d1ae89f1c/resolv.conf as [nameserver 172.30.128.1]"
	Mar 18 11:30:29 functional-611000 cri-dockerd[5453]: time="2024-03-18T11:30:29Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/8cbe1172b5715e7ff4cb1c8f4fbf292115a19037981f6ee57bf6d4d7905d5d78/resolv.conf as [nameserver 172.30.128.1]"
	Mar 18 11:30:29 functional-611000 dockerd[5172]: time="2024-03-18T11:30:29.836889664Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 18 11:30:29 functional-611000 dockerd[5172]: time="2024-03-18T11:30:29.842051355Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 18 11:30:29 functional-611000 dockerd[5172]: time="2024-03-18T11:30:29.842349325Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 18 11:30:29 functional-611000 dockerd[5172]: time="2024-03-18T11:30:29.843114750Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 18 11:30:30 functional-611000 dockerd[5172]: time="2024-03-18T11:30:30.033545540Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 18 11:30:30 functional-611000 dockerd[5172]: time="2024-03-18T11:30:30.033642438Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 18 11:30:30 functional-611000 dockerd[5172]: time="2024-03-18T11:30:30.033654637Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 18 11:30:30 functional-611000 dockerd[5172]: time="2024-03-18T11:30:30.033779634Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	dbc9108b32106       ead0a4a53df89       2 minutes ago       Running             coredns                   1                   8cbe1172b5715       coredns-5dd5756b68-sxq4l
	280f18ece2b3d       83f6cc407eed8       2 minutes ago       Running             kube-proxy                2                   eb340e48cff83       kube-proxy-sh9ps
	72fbf606179bd       6e38f40d628db       2 minutes ago       Running             storage-provisioner       2                   365d39deb4ab5       storage-provisioner
	03bfacee82a00       e3db313c6dbc0       2 minutes ago       Running             kube-scheduler            2                   c707c213dd5f1       kube-scheduler-functional-611000
	396a12c590d55       d058aa5ab969c       2 minutes ago       Running             kube-controller-manager   2                   540a834e50f08       kube-controller-manager-functional-611000
	1b9820795043f       7fe0e6f37db33       2 minutes ago       Running             kube-apiserver            2                   9ba68fa6bd696       kube-apiserver-functional-611000
	4540e8d2d00a2       73deb9a3f7025       2 minutes ago       Running             etcd                      2                   d44fbd1d0c7b5       etcd-functional-611000
	1063ea4e2f519       83f6cc407eed8       2 minutes ago       Created             kube-proxy                1                   d2f1e6a8164c2       kube-proxy-sh9ps
	4139dea3dc63e       7fe0e6f37db33       2 minutes ago       Created             kube-apiserver            1                   d932944e55e52       kube-apiserver-functional-611000
	7ed628f4ee260       73deb9a3f7025       2 minutes ago       Created             etcd                      1                   222b8eb8da229       etcd-functional-611000
	67afe95ff6bfc       6e38f40d628db       2 minutes ago       Created             storage-provisioner       1                   f21c91a0234d0       storage-provisioner
	aa52b0846b541       d058aa5ab969c       2 minutes ago       Created             kube-controller-manager   1                   7ceb9e5328a9c       kube-controller-manager-functional-611000
	09072322ba1e3       e3db313c6dbc0       2 minutes ago       Exited              kube-scheduler            1                   dbf4c4348913a       kube-scheduler-functional-611000
	8a29c7935dd00       ead0a4a53df89       3 minutes ago       Exited              coredns                   0                   4a9a1adab613b       coredns-5dd5756b68-sxq4l
	
	
	==> coredns [8a29c7935dd0] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa2b120b3007aba3ef70c07d73473d14986404bfc5683cceeb89e8950d5ace894afca7d43365324975f99d1b2da3d6473c722fe82795d39a4ee8b4b969b7aa71
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:52525 - 3914 "HINFO IN 6624130777873271513.117187156301601033. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.025301387s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [dbc9108b3210] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa2b120b3007aba3ef70c07d73473d14986404bfc5683cceeb89e8950d5ace894afca7d43365324975f99d1b2da3d6473c722fe82795d39a4ee8b4b969b7aa71
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:59286 - 39089 "HINFO IN 2869937696253900510.564037689292671587. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.034427898s
	
	
	==> describe nodes <==
	Name:               functional-611000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-611000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=16bdcbec856cf730004e5bed78d1b7625f13388a
	                    minikube.k8s.io/name=functional-611000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_18T11_28_26_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 18 Mar 2024 11:28:22 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-611000
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 18 Mar 2024 11:32:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 18 Mar 2024 11:32:29 +0000   Mon, 18 Mar 2024 11:28:20 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 18 Mar 2024 11:32:29 +0000   Mon, 18 Mar 2024 11:28:20 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 18 Mar 2024 11:32:29 +0000   Mon, 18 Mar 2024 11:28:20 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 18 Mar 2024 11:32:29 +0000   Mon, 18 Mar 2024 11:28:29 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.30.129.196
	  Hostname:    functional-611000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912876Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912876Ki
	  pods:               110
	System Info:
	  Machine ID:                 76e76ad2e5be4003b316fb24436337d9
	  System UUID:                00ff30e1-a060-ff4f-be15-94e6dd69e06c
	  Boot ID:                    4931aff4-18f3-4dd7-a9aa-4be2d4da1646
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://25.0.4
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-sxq4l                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     3m50s
	  kube-system                 etcd-functional-611000                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         4m6s
	  kube-system                 kube-apiserver-functional-611000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m3s
	  kube-system                 kube-controller-manager-functional-611000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m3s
	  kube-system                 kube-proxy-sh9ps                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m51s
	  kube-system                 kube-scheduler-functional-611000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m3s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m43s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m48s                  kube-proxy       
	  Normal  Starting                 119s                   kube-proxy       
	  Normal  Starting                 4m12s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m12s (x8 over 4m12s)  kubelet          Node functional-611000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m12s (x8 over 4m12s)  kubelet          Node functional-611000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m12s (x7 over 4m12s)  kubelet          Node functional-611000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m12s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     4m3s                   kubelet          Node functional-611000 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  4m3s                   kubelet          Node functional-611000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m3s                   kubelet          Node functional-611000 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  4m3s                   kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 4m3s                   kubelet          Starting kubelet.
	  Normal  NodeReady                4m                     kubelet          Node functional-611000 status is now: NodeReady
	  Normal  RegisteredNode           3m51s                  node-controller  Node functional-611000 event: Registered Node functional-611000 in Controller
	  Normal  Starting                 2m6s                   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m6s (x8 over 2m6s)    kubelet          Node functional-611000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m6s (x8 over 2m6s)    kubelet          Node functional-611000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m6s (x7 over 2m6s)    kubelet          Node functional-611000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m6s                   kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           109s                   node-controller  Node functional-611000 event: Registered Node functional-611000 in Controller
	
	
	==> dmesg <==
	[  +5.365856] systemd-fstab-generator[1523]: Ignoring "noauto" option for root device
	[  +0.095826] kauditd_printk_skb: 51 callbacks suppressed
	[  +5.594463] systemd-fstab-generator[1794]: Ignoring "noauto" option for root device
	[  +0.089877] kauditd_printk_skb: 12 callbacks suppressed
	[  +8.771973] systemd-fstab-generator[2813]: Ignoring "noauto" option for root device
	[  +0.137392] kauditd_printk_skb: 62 callbacks suppressed
	[  +0.777997] hrtimer: interrupt took 4101279 ns
	[ +12.219609] systemd-fstab-generator[3464]: Ignoring "noauto" option for root device
	[  +0.203052] kauditd_printk_skb: 12 callbacks suppressed
	[  +7.893385] kauditd_printk_skb: 65 callbacks suppressed
	[Mar18 11:30] systemd-fstab-generator[4694]: Ignoring "noauto" option for root device
	[  +0.594605] systemd-fstab-generator[4729]: Ignoring "noauto" option for root device
	[  +0.248416] systemd-fstab-generator[4741]: Ignoring "noauto" option for root device
	[  +0.285810] systemd-fstab-generator[4755]: Ignoring "noauto" option for root device
	[  +5.378734] kauditd_printk_skb: 89 callbacks suppressed
	[  +7.781060] systemd-fstab-generator[5347]: Ignoring "noauto" option for root device
	[  +0.163975] systemd-fstab-generator[5359]: Ignoring "noauto" option for root device
	[  +0.164680] systemd-fstab-generator[5371]: Ignoring "noauto" option for root device
	[  +0.262158] systemd-fstab-generator[5394]: Ignoring "noauto" option for root device
	[  +0.807328] systemd-fstab-generator[5597]: Ignoring "noauto" option for root device
	[  +4.849828] systemd-fstab-generator[6447]: Ignoring "noauto" option for root device
	[  +0.100605] kauditd_printk_skb: 197 callbacks suppressed
	[  +5.758566] kauditd_printk_skb: 47 callbacks suppressed
	[ +11.271941] kauditd_printk_skb: 22 callbacks suppressed
	[  +2.433253] systemd-fstab-generator[7479]: Ignoring "noauto" option for root device
	
	
	==> etcd [4540e8d2d00a] <==
	{"level":"info","ts":"2024-03-18T11:30:24.68199Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-03-18T11:30:24.682083Z","caller":"etcdserver/server.go:754","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-03-18T11:30:24.682394Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8c0e85ed7f45e28 switched to configuration voters=(630759441879555624)"}
	{"level":"info","ts":"2024-03-18T11:30:24.682443Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"876b472872815ddf","local-member-id":"8c0e85ed7f45e28","added-peer-id":"8c0e85ed7f45e28","added-peer-peer-urls":["https://172.30.129.196:2380"]}
	{"level":"info","ts":"2024-03-18T11:30:24.682542Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"876b472872815ddf","local-member-id":"8c0e85ed7f45e28","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-18T11:30:24.68257Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-18T11:30:24.684345Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-03-18T11:30:24.684565Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"8c0e85ed7f45e28","initial-advertise-peer-urls":["https://172.30.129.196:2380"],"listen-peer-urls":["https://172.30.129.196:2380"],"advertise-client-urls":["https://172.30.129.196:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.30.129.196:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-03-18T11:30:24.684654Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-03-18T11:30:24.68476Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"172.30.129.196:2380"}
	{"level":"info","ts":"2024-03-18T11:30:24.684772Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"172.30.129.196:2380"}
	{"level":"info","ts":"2024-03-18T11:30:26.033804Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8c0e85ed7f45e28 is starting a new election at term 2"}
	{"level":"info","ts":"2024-03-18T11:30:26.034023Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8c0e85ed7f45e28 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-03-18T11:30:26.034294Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8c0e85ed7f45e28 received MsgPreVoteResp from 8c0e85ed7f45e28 at term 2"}
	{"level":"info","ts":"2024-03-18T11:30:26.03447Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8c0e85ed7f45e28 became candidate at term 3"}
	{"level":"info","ts":"2024-03-18T11:30:26.034598Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8c0e85ed7f45e28 received MsgVoteResp from 8c0e85ed7f45e28 at term 3"}
	{"level":"info","ts":"2024-03-18T11:30:26.034723Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8c0e85ed7f45e28 became leader at term 3"}
	{"level":"info","ts":"2024-03-18T11:30:26.034859Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8c0e85ed7f45e28 elected leader 8c0e85ed7f45e28 at term 3"}
	{"level":"info","ts":"2024-03-18T11:30:26.060395Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"8c0e85ed7f45e28","local-member-attributes":"{Name:functional-611000 ClientURLs:[https://172.30.129.196:2379]}","request-path":"/0/members/8c0e85ed7f45e28/attributes","cluster-id":"876b472872815ddf","publish-timeout":"7s"}
	{"level":"info","ts":"2024-03-18T11:30:26.06058Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-18T11:30:26.084115Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.30.129.196:2379"}
	{"level":"info","ts":"2024-03-18T11:30:26.08416Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-18T11:30:26.085274Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-03-18T11:30:26.109742Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-03-18T11:30:26.109762Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> etcd [7ed628f4ee26] <==
	
	
	==> kernel <==
	 11:32:30 up 6 min,  0 users,  load average: 0.40, 0.53, 0.26
	Linux functional-611000 5.10.207 #1 SMP Fri Mar 15 21:13:47 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [1b9820795043] <==
	I0318 11:30:27.854768       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0318 11:30:27.855832       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0318 11:30:27.855969       1 shared_informer.go:311] Waiting for caches to sync for crd-autoregister
	I0318 11:30:27.956145       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0318 11:30:27.956750       1 aggregator.go:166] initial CRD sync complete...
	I0318 11:30:27.956960       1 autoregister_controller.go:141] Starting autoregister controller
	I0318 11:30:27.957085       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0318 11:30:27.957304       1 cache.go:39] Caches are synced for autoregister controller
	I0318 11:30:27.959770       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0318 11:30:27.963525       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0318 11:30:28.010966       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0318 11:30:28.012258       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0318 11:30:28.012942       1 shared_informer.go:318] Caches are synced for configmaps
	I0318 11:30:28.019355       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0318 11:30:28.020286       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I0318 11:30:28.020439       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	E0318 11:30:28.027358       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0318 11:30:28.824324       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0318 11:30:29.631164       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0318 11:30:29.644674       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0318 11:30:29.729334       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0318 11:30:29.786900       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0318 11:30:29.796164       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0318 11:30:40.589941       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0318 11:30:40.632751       1 controller.go:624] quota admission added evaluator for: endpoints
	
	
	==> kube-apiserver [4139dea3dc63] <==
	
	
	==> kube-controller-manager [396a12c590d5] <==
	I0318 11:30:40.531494       1 shared_informer.go:318] Caches are synced for stateful set
	I0318 11:30:40.534045       1 shared_informer.go:318] Caches are synced for ephemeral
	I0318 11:30:40.543504       1 shared_informer.go:318] Caches are synced for persistent volume
	I0318 11:30:40.544853       1 shared_informer.go:318] Caches are synced for PVC protection
	I0318 11:30:40.550623       1 shared_informer.go:318] Caches are synced for job
	I0318 11:30:40.553084       1 shared_informer.go:318] Caches are synced for resource quota
	I0318 11:30:40.554989       1 shared_informer.go:318] Caches are synced for resource quota
	I0318 11:30:40.555084       1 shared_informer.go:318] Caches are synced for taint
	I0318 11:30:40.555289       1 taint_manager.go:205] "Starting NoExecuteTaintManager"
	I0318 11:30:40.555373       1 taint_manager.go:210] "Sending events to api server"
	I0318 11:30:40.555457       1 node_lifecycle_controller.go:1225] "Initializing eviction metric for zone" zone=""
	I0318 11:30:40.558258       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="functional-611000"
	I0318 11:30:40.558337       1 node_lifecycle_controller.go:1071] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I0318 11:30:40.558409       1 shared_informer.go:318] Caches are synced for deployment
	I0318 11:30:40.556929       1 event.go:307] "Event occurred" object="functional-611000" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node functional-611000 event: Registered Node functional-611000 in Controller"
	I0318 11:30:40.564095       1 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring
	I0318 11:30:40.571008       1 shared_informer.go:318] Caches are synced for endpoint_slice
	I0318 11:30:40.571885       1 shared_informer.go:318] Caches are synced for ReplicationController
	I0318 11:30:40.578722       1 shared_informer.go:318] Caches are synced for ReplicaSet
	I0318 11:30:40.578925       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="107.897µs"
	I0318 11:30:40.599063       1 shared_informer.go:318] Caches are synced for HPA
	I0318 11:30:40.611742       1 shared_informer.go:318] Caches are synced for endpoint
	I0318 11:30:41.020040       1 shared_informer.go:318] Caches are synced for garbage collector
	I0318 11:30:41.025457       1 shared_informer.go:318] Caches are synced for garbage collector
	I0318 11:30:41.025605       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	
	
	==> kube-controller-manager [aa52b0846b54] <==
	
	
	==> kube-proxy [1063ea4e2f51] <==
	
	
	==> kube-proxy [280f18ece2b3] <==
	I0318 11:30:30.022018       1 server_others.go:69] "Using iptables proxy"
	I0318 11:30:30.042616       1 node.go:141] Successfully retrieved node IP: 172.30.129.196
	I0318 11:30:30.164357       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0318 11:30:30.164384       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0318 11:30:30.168587       1 server_others.go:152] "Using iptables Proxier"
	I0318 11:30:30.168632       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0318 11:30:30.168935       1 server.go:846] "Version info" version="v1.28.4"
	I0318 11:30:30.168945       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 11:30:30.176488       1 config.go:188] "Starting service config controller"
	I0318 11:30:30.176502       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0318 11:30:30.176535       1 config.go:97] "Starting endpoint slice config controller"
	I0318 11:30:30.176539       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0318 11:30:30.177009       1 config.go:315] "Starting node config controller"
	I0318 11:30:30.177016       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0318 11:30:30.277833       1 shared_informer.go:318] Caches are synced for node config
	I0318 11:30:30.277849       1 shared_informer.go:318] Caches are synced for service config
	I0318 11:30:30.277923       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [03bfacee82a0] <==
	I0318 11:30:26.815923       1 serving.go:348] Generated self-signed cert in-memory
	W0318 11:30:27.896176       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0318 11:30:27.896221       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0318 11:30:27.896231       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0318 11:30:27.896238       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0318 11:30:27.958129       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.4"
	I0318 11:30:27.958174       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 11:30:27.960547       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0318 11:30:27.960766       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0318 11:30:27.963166       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0318 11:30:27.963240       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0318 11:30:28.061532       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [09072322ba1e] <==
	I0318 11:30:20.865064       1 serving.go:348] Generated self-signed cert in-memory
	W0318 11:30:21.649189       1 authentication.go:368] Error looking up in-cluster authentication configuration: Get "https://172.30.129.196:8441/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp 172.30.129.196:8441: connect: connection refused
	W0318 11:30:21.649281       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0318 11:30:21.649291       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0318 11:30:21.653244       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.4"
	I0318 11:30:21.653355       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 11:30:21.655474       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0318 11:30:21.655734       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0318 11:30:21.655878       1 shared_informer.go:314] unable to sync caches for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0318 11:30:21.656027       1 configmap_cafile_content.go:210] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0318 11:30:21.656199       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	E0318 11:30:21.656272       1 server.go:214] "waiting for handlers to sync" err="context canceled"
	I0318 11:30:21.656297       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0318 11:30:21.656312       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0318 11:30:21.656383       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	E0318 11:30:21.656954       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Mar 18 11:30:27 functional-611000 kubelet[6454]: I0318 11:30:27.984895    6454 kubelet_node_status.go:108] "Node was previously registered" node="functional-611000"
	Mar 18 11:30:27 functional-611000 kubelet[6454]: I0318 11:30:27.985782    6454 kubelet_node_status.go:73] "Successfully registered node" node="functional-611000"
	Mar 18 11:30:27 functional-611000 kubelet[6454]: I0318 11:30:27.987775    6454 kuberuntime_manager.go:1528] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Mar 18 11:30:27 functional-611000 kubelet[6454]: I0318 11:30:27.988734    6454 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Mar 18 11:30:28 functional-611000 kubelet[6454]: E0318 11:30:28.145206    6454 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-apiserver-functional-611000\" already exists" pod="kube-system/kube-apiserver-functional-611000"
	Mar 18 11:30:28 functional-611000 kubelet[6454]: I0318 11:30:28.748292    6454 apiserver.go:52] "Watching apiserver"
	Mar 18 11:30:28 functional-611000 kubelet[6454]: I0318 11:30:28.751451    6454 topology_manager.go:215] "Topology Admit Handler" podUID="da5ff102-6ce0-4f7c-bd28-923b0dbd135f" podNamespace="kube-system" podName="kube-proxy-sh9ps"
	Mar 18 11:30:28 functional-611000 kubelet[6454]: I0318 11:30:28.751577    6454 topology_manager.go:215] "Topology Admit Handler" podUID="5ebb5db0-da82-4fc4-bc0e-3e680e05723f" podNamespace="kube-system" podName="coredns-5dd5756b68-sxq4l"
	Mar 18 11:30:28 functional-611000 kubelet[6454]: I0318 11:30:28.751652    6454 topology_manager.go:215] "Topology Admit Handler" podUID="ea9b5188-5461-42f3-af2a-1c0652841cdd" podNamespace="kube-system" podName="storage-provisioner"
	Mar 18 11:30:28 functional-611000 kubelet[6454]: I0318 11:30:28.770991    6454 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
	Mar 18 11:30:28 functional-611000 kubelet[6454]: I0318 11:30:28.837815    6454 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/ea9b5188-5461-42f3-af2a-1c0652841cdd-tmp\") pod \"storage-provisioner\" (UID: \"ea9b5188-5461-42f3-af2a-1c0652841cdd\") " pod="kube-system/storage-provisioner"
	Mar 18 11:30:28 functional-611000 kubelet[6454]: I0318 11:30:28.838005    6454 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/da5ff102-6ce0-4f7c-bd28-923b0dbd135f-lib-modules\") pod \"kube-proxy-sh9ps\" (UID: \"da5ff102-6ce0-4f7c-bd28-923b0dbd135f\") " pod="kube-system/kube-proxy-sh9ps"
	Mar 18 11:30:28 functional-611000 kubelet[6454]: I0318 11:30:28.838107    6454 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/da5ff102-6ce0-4f7c-bd28-923b0dbd135f-xtables-lock\") pod \"kube-proxy-sh9ps\" (UID: \"da5ff102-6ce0-4f7c-bd28-923b0dbd135f\") " pod="kube-system/kube-proxy-sh9ps"
	Mar 18 11:30:29 functional-611000 kubelet[6454]: I0318 11:30:29.053071    6454 scope.go:117] "RemoveContainer" containerID="67afe95ff6bfcc6856cb8d19053c57e05d42fc9ece8b046d93ccb9879eb06cf3"
	Mar 18 11:30:33 functional-611000 kubelet[6454]: I0318 11:30:33.520125    6454 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Mar 18 11:31:23 functional-611000 kubelet[6454]: E0318 11:31:23.820335    6454 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 18 11:31:23 functional-611000 kubelet[6454]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 18 11:31:23 functional-611000 kubelet[6454]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 18 11:31:23 functional-611000 kubelet[6454]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 18 11:31:23 functional-611000 kubelet[6454]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 18 11:32:23 functional-611000 kubelet[6454]: E0318 11:32:23.824517    6454 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 18 11:32:23 functional-611000 kubelet[6454]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 18 11:32:23 functional-611000 kubelet[6454]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 18 11:32:23 functional-611000 kubelet[6454]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 18 11:32:23 functional-611000 kubelet[6454]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	
	
	==> storage-provisioner [67afe95ff6bf] <==
	
	
	==> storage-provisioner [72fbf606179b] <==
	I0318 11:30:29.431897       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0318 11:30:29.445529       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0318 11:30:29.446083       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0318 11:30:46.860067       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0318 11:30:46.860576       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-611000_4441542e-deb5-453f-aded-f0bb06c4970e!
	I0318 11:30:46.860665       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"e7165e9c-8c02-45ea-a858-f30ddfe675b0", APIVersion:"v1", ResourceVersion:"574", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-611000_4441542e-deb5-453f-aded-f0bb06c4970e became leader
	I0318 11:30:46.962630       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-611000_4441542e-deb5-453f-aded-f0bb06c4970e!
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0318 11:32:22.736881    3536 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-611000 -n functional-611000
E0318 11:32:36.198405   13424 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-209500\client.crt: The system cannot find the path specified.
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-611000 -n functional-611000: (11.0429067s)
helpers_test.go:261: (dbg) Run:  kubectl --context functional-611000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestFunctional/serial/MinikubeKubectlCmdDirectly FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/serial/MinikubeKubectlCmdDirectly (31.51s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (2.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-611000 config unset cpus
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-611000 config unset cpus" to be -""- but got *"W0318 11:35:18.328822    7824 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube3\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified."*
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-611000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-611000 config get cpus: exit status 14 (328.8592ms)

                                                
                                                
** stderr ** 
	W0318 11:35:18.656872    7744 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-611000 config get cpus" to be -"Error: specified key could not be found in config"- but got *"W0318 11:35:18.656872    7744 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube3\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\nError: specified key could not be found in config"*
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-611000 config set cpus 2
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-611000 config set cpus 2" to be -"! These changes will take effect upon a minikube delete and then a minikube start"- but got *"W0318 11:35:18.994144   10752 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube3\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\n! These changes will take effect upon a minikube delete and then a minikube start"*
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-611000 config get cpus
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-611000 config get cpus" to be -""- but got *"W0318 11:35:19.442435   13396 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube3\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified."*
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-611000 config unset cpus
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-611000 config unset cpus" to be -""- but got *"W0318 11:35:19.946564    9928 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube3\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified."*
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-611000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-611000 config get cpus: exit status 14 (297.9085ms)

                                                
                                                
** stderr ** 
	W0318 11:35:20.325748    1280 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-611000 config get cpus" to be -"Error: specified key could not be found in config"- but got *"W0318 11:35:20.325748    1280 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube3\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\nError: specified key could not be found in config"*
--- FAIL: TestFunctional/parallel/ConfigCmd (2.31s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (15.03s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-611000 service --namespace=default --https --url hello-node
functional_test.go:1505: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-611000 service --namespace=default --https --url hello-node: exit status 1 (15.0182236s)

                                                
                                                
** stderr ** 
	W0318 11:38:01.176017    7156 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
functional_test.go:1507: failed to get service url. args "out/minikube-windows-amd64.exe -p functional-611000 service --namespace=default --https --url hello-node" : exit status 1
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (15.03s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (15.03s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-611000 service hello-node --url --format={{.IP}}
functional_test.go:1536: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-611000 service hello-node --url --format={{.IP}}: exit status 1 (15.0176119s)

                                                
                                                
** stderr ** 
	W0318 11:38:16.189494    8372 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
functional_test.go:1538: failed to get service url with custom format. args "out/minikube-windows-amd64.exe -p functional-611000 service hello-node --url --format={{.IP}}": exit status 1
functional_test.go:1544: "" is not a valid IP
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (15.03s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (15.03s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-611000 service hello-node --url
functional_test.go:1555: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-611000 service hello-node --url: exit status 1 (15.0267893s)

                                                
                                                
** stderr ** 
	W0318 11:38:31.203093    6748 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
functional_test.go:1557: failed to get service url. args: "out/minikube-windows-amd64.exe -p functional-611000 service hello-node --url": exit status 1
functional_test.go:1561: found endpoint for hello-node: 
functional_test.go:1569: expected scheme to be -"http"- got scheme: *""*
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (15.03s)

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (66.82s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-747000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-747000 -- exec busybox-5b5d89c9d6-bfx2x -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-747000 -- exec busybox-5b5d89c9d6-bfx2x -- sh -c "ping -c 1 172.30.128.1"
E0318 11:55:42.078243   13424 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-611000\client.crt: The system cannot find the path specified.
ha_test.go:218: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p ha-747000 -- exec busybox-5b5d89c9d6-bfx2x -- sh -c "ping -c 1 172.30.128.1": exit status 1 (10.5050992s)

                                                
                                                
-- stdout --
	PING 172.30.128.1 (172.30.128.1): 56 data bytes
	
	--- 172.30.128.1 ping statistics ---
	1 packets transmitted, 0 packets received, 100% packet loss

                                                
                                                
-- /stdout --
** stderr ** 
	W0318 11:55:41.256588    1272 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	command terminated with exit code 1

                                                
                                                
** /stderr **
ha_test.go:219: Failed to ping host (172.30.128.1) from pod (busybox-5b5d89c9d6-bfx2x): exit status 1
ha_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-747000 -- exec busybox-5b5d89c9d6-ln6sd -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-747000 -- exec busybox-5b5d89c9d6-ln6sd -- sh -c "ping -c 1 172.30.128.1"
ha_test.go:218: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p ha-747000 -- exec busybox-5b5d89c9d6-ln6sd -- sh -c "ping -c 1 172.30.128.1": exit status 1 (10.4567263s)

                                                
                                                
-- stdout --
	PING 172.30.128.1 (172.30.128.1): 56 data bytes
	
	--- 172.30.128.1 ping statistics ---
	1 packets transmitted, 0 packets received, 100% packet loss

                                                
                                                
-- /stdout --
** stderr ** 
	W0318 11:55:52.256611    5668 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	command terminated with exit code 1

                                                
                                                
** /stderr **
ha_test.go:219: Failed to ping host (172.30.128.1) from pod (busybox-5b5d89c9d6-ln6sd): exit status 1
ha_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-747000 -- exec busybox-5b5d89c9d6-qvfgv -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-747000 -- exec busybox-5b5d89c9d6-qvfgv -- sh -c "ping -c 1 172.30.128.1"
E0318 11:56:13.043502   13424 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-209500\client.crt: The system cannot find the path specified.
ha_test.go:218: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p ha-747000 -- exec busybox-5b5d89c9d6-qvfgv -- sh -c "ping -c 1 172.30.128.1": exit status 1 (10.4768487s)

                                                
                                                
-- stdout --
	PING 172.30.128.1 (172.30.128.1): 56 data bytes
	
	--- 172.30.128.1 ping statistics ---
	1 packets transmitted, 0 packets received, 100% packet loss

                                                
                                                
-- /stdout --
** stderr ** 
	W0318 11:56:03.214676    7112 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	command terminated with exit code 1

                                                
                                                
** /stderr **
ha_test.go:219: Failed to ping host (172.30.128.1) from pod (busybox-5b5d89c9d6-qvfgv): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-747000 -n ha-747000
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-747000 -n ha-747000: (11.656397s)
helpers_test.go:244: <<< TestMultiControlPlane/serial/PingHostFromPods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/PingHostFromPods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-747000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p ha-747000 logs -n 25: (8.2749452s)
helpers_test.go:252: TestMultiControlPlane/serial/PingHostFromPods logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| Command |                 Args                 |      Profile      |       User        | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| image   | functional-611000                    | functional-611000 | minikube3\jenkins | v1.32.0 | 18 Mar 24 11:38 UTC | 18 Mar 24 11:38 UTC |
	|         | image ls --format table              |                   |                   |         |                     |                     |
	|         | --alsologtostderr                    |                   |                   |         |                     |                     |
	| image   | functional-611000 image build -t     | functional-611000 | minikube3\jenkins | v1.32.0 | 18 Mar 24 11:38 UTC | 18 Mar 24 11:38 UTC |
	|         | localhost/my-image:functional-611000 |                   |                   |         |                     |                     |
	|         | testdata\build --alsologtostderr     |                   |                   |         |                     |                     |
	| image   | functional-611000 image ls           | functional-611000 | minikube3\jenkins | v1.32.0 | 18 Mar 24 11:38 UTC | 18 Mar 24 11:38 UTC |
	| delete  | -p functional-611000                 | functional-611000 | minikube3\jenkins | v1.32.0 | 18 Mar 24 11:43 UTC | 18 Mar 24 11:44 UTC |
	| start   | -p ha-747000 --wait=true             | ha-747000         | minikube3\jenkins | v1.32.0 | 18 Mar 24 11:44 UTC | 18 Mar 24 11:54 UTC |
	|         | --memory=2200 --ha                   |                   |                   |         |                     |                     |
	|         | -v=7 --alsologtostderr               |                   |                   |         |                     |                     |
	|         | --driver=hyperv                      |                   |                   |         |                     |                     |
	| kubectl | -p ha-747000 -- apply -f             | ha-747000         | minikube3\jenkins | v1.32.0 | 18 Mar 24 11:55 UTC | 18 Mar 24 11:55 UTC |
	|         | ./testdata/ha/ha-pod-dns-test.yaml   |                   |                   |         |                     |                     |
	| kubectl | -p ha-747000 -- rollout status       | ha-747000         | minikube3\jenkins | v1.32.0 | 18 Mar 24 11:55 UTC | 18 Mar 24 11:55 UTC |
	|         | deployment/busybox                   |                   |                   |         |                     |                     |
	| kubectl | -p ha-747000 -- get pods -o          | ha-747000         | minikube3\jenkins | v1.32.0 | 18 Mar 24 11:55 UTC | 18 Mar 24 11:55 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |                   |                   |         |                     |                     |
	| kubectl | -p ha-747000 -- get pods -o          | ha-747000         | minikube3\jenkins | v1.32.0 | 18 Mar 24 11:55 UTC | 18 Mar 24 11:55 UTC |
	|         | jsonpath='{.items[*].metadata.name}' |                   |                   |         |                     |                     |
	| kubectl | -p ha-747000 -- exec                 | ha-747000         | minikube3\jenkins | v1.32.0 | 18 Mar 24 11:55 UTC | 18 Mar 24 11:55 UTC |
	|         | busybox-5b5d89c9d6-bfx2x --          |                   |                   |         |                     |                     |
	|         | nslookup kubernetes.io               |                   |                   |         |                     |                     |
	| kubectl | -p ha-747000 -- exec                 | ha-747000         | minikube3\jenkins | v1.32.0 | 18 Mar 24 11:55 UTC | 18 Mar 24 11:55 UTC |
	|         | busybox-5b5d89c9d6-ln6sd --          |                   |                   |         |                     |                     |
	|         | nslookup kubernetes.io               |                   |                   |         |                     |                     |
	| kubectl | -p ha-747000 -- exec                 | ha-747000         | minikube3\jenkins | v1.32.0 | 18 Mar 24 11:55 UTC | 18 Mar 24 11:55 UTC |
	|         | busybox-5b5d89c9d6-qvfgv --          |                   |                   |         |                     |                     |
	|         | nslookup kubernetes.io               |                   |                   |         |                     |                     |
	| kubectl | -p ha-747000 -- exec                 | ha-747000         | minikube3\jenkins | v1.32.0 | 18 Mar 24 11:55 UTC | 18 Mar 24 11:55 UTC |
	|         | busybox-5b5d89c9d6-bfx2x --          |                   |                   |         |                     |                     |
	|         | nslookup kubernetes.default          |                   |                   |         |                     |                     |
	| kubectl | -p ha-747000 -- exec                 | ha-747000         | minikube3\jenkins | v1.32.0 | 18 Mar 24 11:55 UTC | 18 Mar 24 11:55 UTC |
	|         | busybox-5b5d89c9d6-ln6sd --          |                   |                   |         |                     |                     |
	|         | nslookup kubernetes.default          |                   |                   |         |                     |                     |
	| kubectl | -p ha-747000 -- exec                 | ha-747000         | minikube3\jenkins | v1.32.0 | 18 Mar 24 11:55 UTC | 18 Mar 24 11:55 UTC |
	|         | busybox-5b5d89c9d6-qvfgv --          |                   |                   |         |                     |                     |
	|         | nslookup kubernetes.default          |                   |                   |         |                     |                     |
	| kubectl | -p ha-747000 -- exec                 | ha-747000         | minikube3\jenkins | v1.32.0 | 18 Mar 24 11:55 UTC | 18 Mar 24 11:55 UTC |
	|         | busybox-5b5d89c9d6-bfx2x -- nslookup |                   |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |                   |                   |         |                     |                     |
	| kubectl | -p ha-747000 -- exec                 | ha-747000         | minikube3\jenkins | v1.32.0 | 18 Mar 24 11:55 UTC | 18 Mar 24 11:55 UTC |
	|         | busybox-5b5d89c9d6-ln6sd -- nslookup |                   |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |                   |                   |         |                     |                     |
	| kubectl | -p ha-747000 -- exec                 | ha-747000         | minikube3\jenkins | v1.32.0 | 18 Mar 24 11:55 UTC | 18 Mar 24 11:55 UTC |
	|         | busybox-5b5d89c9d6-qvfgv -- nslookup |                   |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |                   |                   |         |                     |                     |
	| kubectl | -p ha-747000 -- get pods -o          | ha-747000         | minikube3\jenkins | v1.32.0 | 18 Mar 24 11:55 UTC | 18 Mar 24 11:55 UTC |
	|         | jsonpath='{.items[*].metadata.name}' |                   |                   |         |                     |                     |
	| kubectl | -p ha-747000 -- exec                 | ha-747000         | minikube3\jenkins | v1.32.0 | 18 Mar 24 11:55 UTC | 18 Mar 24 11:55 UTC |
	|         | busybox-5b5d89c9d6-bfx2x             |                   |                   |         |                     |                     |
	|         | -- sh -c nslookup                    |                   |                   |         |                     |                     |
	|         | host.minikube.internal | awk         |                   |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |                   |                   |         |                     |                     |
	| kubectl | -p ha-747000 -- exec                 | ha-747000         | minikube3\jenkins | v1.32.0 | 18 Mar 24 11:55 UTC |                     |
	|         | busybox-5b5d89c9d6-bfx2x -- sh       |                   |                   |         |                     |                     |
	|         | -c ping -c 1 172.30.128.1            |                   |                   |         |                     |                     |
	| kubectl | -p ha-747000 -- exec                 | ha-747000         | minikube3\jenkins | v1.32.0 | 18 Mar 24 11:55 UTC | 18 Mar 24 11:55 UTC |
	|         | busybox-5b5d89c9d6-ln6sd             |                   |                   |         |                     |                     |
	|         | -- sh -c nslookup                    |                   |                   |         |                     |                     |
	|         | host.minikube.internal | awk         |                   |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |                   |                   |         |                     |                     |
	| kubectl | -p ha-747000 -- exec                 | ha-747000         | minikube3\jenkins | v1.32.0 | 18 Mar 24 11:55 UTC |                     |
	|         | busybox-5b5d89c9d6-ln6sd -- sh       |                   |                   |         |                     |                     |
	|         | -c ping -c 1 172.30.128.1            |                   |                   |         |                     |                     |
	| kubectl | -p ha-747000 -- exec                 | ha-747000         | minikube3\jenkins | v1.32.0 | 18 Mar 24 11:56 UTC | 18 Mar 24 11:56 UTC |
	|         | busybox-5b5d89c9d6-qvfgv             |                   |                   |         |                     |                     |
	|         | -- sh -c nslookup                    |                   |                   |         |                     |                     |
	|         | host.minikube.internal | awk         |                   |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |                   |                   |         |                     |                     |
	| kubectl | -p ha-747000 -- exec                 | ha-747000         | minikube3\jenkins | v1.32.0 | 18 Mar 24 11:56 UTC |                     |
	|         | busybox-5b5d89c9d6-qvfgv -- sh       |                   |                   |         |                     |                     |
	|         | -c ping -c 1 172.30.128.1            |                   |                   |         |                     |                     |
	|---------|--------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/18 11:44:07
	Running on machine: minikube3
	Binary: Built with gc go1.22.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0318 11:44:07.032139   13008 out.go:291] Setting OutFile to fd 800 ...
	I0318 11:44:07.032389   13008 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 11:44:07.032389   13008 out.go:304] Setting ErrFile to fd 1020...
	I0318 11:44:07.032389   13008 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 11:44:07.051810   13008 out.go:298] Setting JSON to false
	I0318 11:44:07.055460   13008 start.go:129] hostinfo: {"hostname":"minikube3","uptime":310824,"bootTime":1710451423,"procs":189,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4046 Build 19045.4046","kernelVersion":"10.0.19045.4046 Build 19045.4046","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"a0f355d5-8b6e-4346-9071-73232725d096"}
	W0318 11:44:07.055460   13008 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0318 11:44:07.065824   13008 out.go:177] * [ha-747000] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4046 Build 19045.4046
	I0318 11:44:07.070739   13008 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	I0318 11:44:07.069679   13008 notify.go:220] Checking for updates...
	I0318 11:44:07.075794   13008 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 11:44:07.076618   13008 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube3\minikube-integration\.minikube
	I0318 11:44:07.081479   13008 out.go:177]   - MINIKUBE_LOCATION=18429
	I0318 11:44:07.082508   13008 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 11:44:07.085163   13008 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 11:44:11.826968   13008 out.go:177] * Using the hyperv driver based on user configuration
	I0318 11:44:11.830802   13008 start.go:297] selected driver: hyperv
	I0318 11:44:11.830802   13008 start.go:901] validating driver "hyperv" against <nil>
	I0318 11:44:11.830802   13008 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 11:44:11.878104   13008 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0318 11:44:11.878988   13008 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 11:44:11.878988   13008 cni.go:84] Creating CNI manager for ""
	I0318 11:44:11.878988   13008 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0318 11:44:11.878988   13008 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0318 11:44:11.880273   13008 start.go:340] cluster config:
	{Name:ha-747000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-747000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 11:44:11.880548   13008 iso.go:125] acquiring lock: {Name:mkefa7a887b9de67c22e0418da52288a5b488924 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 11:44:11.884916   13008 out.go:177] * Starting "ha-747000" primary control-plane node in "ha-747000" cluster
	I0318 11:44:11.887439   13008 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0318 11:44:11.887575   13008 preload.go:147] Found local preload: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I0318 11:44:11.887575   13008 cache.go:56] Caching tarball of preloaded images
	I0318 11:44:11.887575   13008 preload.go:173] Found C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0318 11:44:11.888382   13008 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0318 11:44:11.888382   13008 profile.go:142] Saving config to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-747000\config.json ...
	I0318 11:44:11.889119   13008 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-747000\config.json: {Name:mkd01eb0d386b6895348db840d4e4956154276ec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 11:44:11.890247   13008 start.go:360] acquireMachinesLock for ha-747000: {Name:mk88ace50ad3bf72786f3a589a5328076247f3a1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 11:44:11.890247   13008 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-747000"
	I0318 11:44:11.890874   13008 start.go:93] Provisioning new machine with config: &{Name:ha-747000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-747000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 11:44:11.891055   13008 start.go:125] createHost starting for "" (driver="hyperv")
	I0318 11:44:11.893440   13008 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0318 11:44:11.893980   13008 start.go:159] libmachine.API.Create for "ha-747000" (driver="hyperv")
	I0318 11:44:11.893980   13008 client.go:168] LocalClient.Create starting
	I0318 11:44:11.894581   13008 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem
	I0318 11:44:11.894581   13008 main.go:141] libmachine: Decoding PEM data...
	I0318 11:44:11.894581   13008 main.go:141] libmachine: Parsing certificate...
	I0318 11:44:11.894581   13008 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem
	I0318 11:44:11.895261   13008 main.go:141] libmachine: Decoding PEM data...
	I0318 11:44:11.895261   13008 main.go:141] libmachine: Parsing certificate...
	I0318 11:44:11.895408   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0318 11:44:13.657992   13008 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0318 11:44:13.657992   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:44:13.658107   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0318 11:44:15.164283   13008 main.go:141] libmachine: [stdout =====>] : False
	
	I0318 11:44:15.164283   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:44:15.164283   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0318 11:44:16.522292   13008 main.go:141] libmachine: [stdout =====>] : True
	
	I0318 11:44:16.528578   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:44:16.528754   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0318 11:44:19.670864   13008 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0318 11:44:19.670953   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:44:19.673909   13008 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube3/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.32.1-1710520390-17991-amd64.iso...
	I0318 11:44:20.054081   13008 main.go:141] libmachine: Creating SSH key...
	I0318 11:44:20.145667   13008 main.go:141] libmachine: Creating VM...
	I0318 11:44:20.145667   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0318 11:44:22.708445   13008 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0318 11:44:22.719546   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:44:22.719546   13008 main.go:141] libmachine: Using switch "Default Switch"
	I0318 11:44:22.719546   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0318 11:44:24.223539   13008 main.go:141] libmachine: [stdout =====>] : True
	
	I0318 11:44:24.230902   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:44:24.230902   13008 main.go:141] libmachine: Creating VHD
	I0318 11:44:24.230902   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-747000\fixed.vhd' -SizeBytes 10MB -Fixed
	I0318 11:44:27.618811   13008 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube3
	Path                    : C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-747000\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 87226964-6452-40BB-8CCF-F54D1DA7593F
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0318 11:44:27.618811   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:44:27.630006   13008 main.go:141] libmachine: Writing magic tar header
	I0318 11:44:27.630006   13008 main.go:141] libmachine: Writing SSH key tar header
	I0318 11:44:27.638679   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-747000\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-747000\disk.vhd' -VHDType Dynamic -DeleteSource
	I0318 11:44:30.588703   13008 main.go:141] libmachine: [stdout =====>] : 
	I0318 11:44:30.589017   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:44:30.589287   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-747000\disk.vhd' -SizeBytes 20000MB
	I0318 11:44:32.866741   13008 main.go:141] libmachine: [stdout =====>] : 
	I0318 11:44:32.866741   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:44:32.877615   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-747000 -Path 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-747000' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0318 11:44:36.184208   13008 main.go:141] libmachine: [stdout =====>] : 
Name      State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----      ----- ----------- ----------------- ------   ------             -------
	ha-747000 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0318 11:44:36.184208   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:44:36.184208   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-747000 -DynamicMemoryEnabled $false
	I0318 11:44:38.135542   13008 main.go:141] libmachine: [stdout =====>] : 
	I0318 11:44:38.135542   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:44:38.146281   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-747000 -Count 2
	I0318 11:44:40.079602   13008 main.go:141] libmachine: [stdout =====>] : 
	I0318 11:44:40.079602   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:44:40.079602   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-747000 -Path 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-747000\boot2docker.iso'
	I0318 11:44:42.364784   13008 main.go:141] libmachine: [stdout =====>] : 
	I0318 11:44:42.376312   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:44:42.376312   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-747000 -Path 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-747000\disk.vhd'
	I0318 11:44:44.689877   13008 main.go:141] libmachine: [stdout =====>] : 
	I0318 11:44:44.689877   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:44:44.700815   13008 main.go:141] libmachine: Starting VM...
	I0318 11:44:44.700815   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-747000
	I0318 11:44:47.538271   13008 main.go:141] libmachine: [stdout =====>] : 
	I0318 11:44:47.541027   13008 main.go:141] libmachine: [stderr =====>] : 
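The nine PowerShell invocations above form the driver's whole VM-creation sequence. A minimal sketch (not the driver's actual Go code) that assembles the same command lines, with sizes and names taken from the log; only the strings are built here, since Hyper-V itself is Windows-only, and `DIR` is a hypothetical placeholder for the machine directory shown above:

```shell
#!/bin/sh
# Assemble the Hyper-V provisioning commands seen in the log: create a fixed
# VHD, convert it to dynamic, resize it, create the VM, pin memory and CPUs,
# attach the boot ISO and data disk, then start the VM.
PS='powershell.exe -NoProfile -NonInteractive'
VM=ha-747000
DIR='C:\path\to\.minikube\machines\ha-747000'   # placeholder path

set -- \
  "Hyper-V\\New-VHD -Path '$DIR\\fixed.vhd' -SizeBytes 10MB -Fixed" \
  "Hyper-V\\Convert-VHD -Path '$DIR\\fixed.vhd' -DestinationPath '$DIR\\disk.vhd' -VHDType Dynamic -DeleteSource" \
  "Hyper-V\\Resize-VHD -Path '$DIR\\disk.vhd' -SizeBytes 20000MB" \
  "Hyper-V\\New-VM $VM -Path '$DIR' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB" \
  "Hyper-V\\Set-VMMemory -VMName $VM -DynamicMemoryEnabled \$false" \
  "Hyper-V\\Set-VMProcessor $VM -Count 2" \
  "Hyper-V\\Set-VMDvdDrive -VMName $VM -Path '$DIR\\boot2docker.iso'" \
  "Hyper-V\\Add-VMHardDiskDrive -VMName $VM -Path '$DIR\\disk.vhd'" \
  "Hyper-V\\Start-VM $VM"

for cmd in "$@"; do printf '%s %s\n' "$PS" "$cmd"; done
```

Each string corresponds one-to-one to an `[executing ==>]` line in the log.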
	I0318 11:44:47.541027   13008 main.go:141] libmachine: Waiting for host to start...
	I0318 11:44:47.541130   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-747000 ).state
	I0318 11:44:49.572738   13008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:44:49.572738   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:44:49.579498   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-747000 ).networkadapters[0]).ipaddresses[0]
	I0318 11:44:51.850944   13008 main.go:141] libmachine: [stdout =====>] : 
	I0318 11:44:51.850944   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:44:52.859256   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-747000 ).state
	I0318 11:44:54.900759   13008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:44:54.907213   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:44:54.907213   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-747000 ).networkadapters[0]).ipaddresses[0]
	I0318 11:44:57.234254   13008 main.go:141] libmachine: [stdout =====>] : 
	I0318 11:44:57.234312   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:44:58.248759   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-747000 ).state
	I0318 11:45:00.276076   13008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:45:00.276199   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:45:00.276335   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-747000 ).networkadapters[0]).ipaddresses[0]
	I0318 11:45:02.673181   13008 main.go:141] libmachine: [stdout =====>] : 
	I0318 11:45:02.673419   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:45:03.679813   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-747000 ).state
	I0318 11:45:05.696960   13008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:45:05.701490   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:45:05.701490   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-747000 ).networkadapters[0]).ipaddresses[0]
	I0318 11:45:08.050571   13008 main.go:141] libmachine: [stdout =====>] : 
	I0318 11:45:08.050571   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:45:09.061208   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-747000 ).state
	I0318 11:45:11.088085   13008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:45:11.088162   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:45:11.088162   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-747000 ).networkadapters[0]).ipaddresses[0]
	I0318 11:45:13.349614   13008 main.go:141] libmachine: [stdout =====>] : 172.30.135.65
	
	I0318 11:45:13.349614   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:45:13.349614   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-747000 ).state
	I0318 11:45:15.192650   13008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:45:15.192737   13008 main.go:141] libmachine: [stderr =====>] : 
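The "Waiting for host to start..." stretch above is a poll loop: minikube repeatedly queries `(Get-VM).state` and the first adapter's first IP address until an address appears. A sketch under that assumption; `get_vm_ip` is a hypothetical stub that returns nothing for the first few polls, mimicking the empty stdout lines before `172.30.135.65` shows up:

```shell
#!/bin/sh
# Poll-until-IP loop: keep asking for the VM's IP, with a bounded number of
# retries, until the hypervisor reports an address.
get_vm_ip() {
  # Stub for the PowerShell query
  # (( Hyper-V\Get-VM <name> ).networkadapters[0]).ipaddresses[0]
  [ "$1" -ge 3 ] && echo "172.30.135.65"
  return 0
}

wait_for_ip() {
  i=0
  while [ "$i" -lt 60 ]; do        # bounded retries instead of forever
    ip=$(get_vm_ip "$i")
    if [ -n "$ip" ]; then
      echo "$ip"
      return 0
    fi
    i=$((i + 1))
    # sleep 1                      # the real loop pauses between polls
  done
  return 1
}

wait_for_ip
```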
	I0318 11:45:15.192737   13008 machine.go:94] provisionDockerMachine start ...
	I0318 11:45:15.192737   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-747000 ).state
	I0318 11:45:17.160963   13008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:45:17.171920   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:45:17.171920   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-747000 ).networkadapters[0]).ipaddresses[0]
	I0318 11:45:19.425519   13008 main.go:141] libmachine: [stdout =====>] : 172.30.135.65
	
	I0318 11:45:19.425519   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:45:19.442199   13008 main.go:141] libmachine: Using SSH client type: native
	I0318 11:45:19.449110   13008 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa89f80] 0xa8cb60 <nil>  [] 0s} 172.30.135.65 22 <nil> <nil>}
	I0318 11:45:19.449110   13008 main.go:141] libmachine: About to run SSH command:
	hostname
	I0318 11:45:19.566466   13008 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0318 11:45:19.566466   13008 buildroot.go:166] provisioning hostname "ha-747000"
	I0318 11:45:19.566466   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-747000 ).state
	I0318 11:45:21.478083   13008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:45:21.478083   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:45:21.478083   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-747000 ).networkadapters[0]).ipaddresses[0]
	I0318 11:45:23.705343   13008 main.go:141] libmachine: [stdout =====>] : 172.30.135.65
	
	I0318 11:45:23.716166   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:45:23.722292   13008 main.go:141] libmachine: Using SSH client type: native
	I0318 11:45:23.722292   13008 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa89f80] 0xa8cb60 <nil>  [] 0s} 172.30.135.65 22 <nil> <nil>}
	I0318 11:45:23.722292   13008 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-747000 && echo "ha-747000" | sudo tee /etc/hostname
	I0318 11:45:23.860797   13008 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-747000
	
	I0318 11:45:23.860797   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-747000 ).state
	I0318 11:45:25.744527   13008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:45:25.757345   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:45:25.757345   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-747000 ).networkadapters[0]).ipaddresses[0]
	I0318 11:45:28.017495   13008 main.go:141] libmachine: [stdout =====>] : 172.30.135.65
	
	I0318 11:45:28.027634   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:45:28.033238   13008 main.go:141] libmachine: Using SSH client type: native
	I0318 11:45:28.033442   13008 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa89f80] 0xa8cb60 <nil>  [] 0s} 172.30.135.65 22 <nil> <nil>}
	I0318 11:45:28.033442   13008 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-747000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-747000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-747000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0318 11:45:28.163753   13008 main.go:141] libmachine: SSH cmd err, output: <nil>: 
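The hostname step above is idempotent: it only touches `/etc/hosts` when no line already maps the new name, rewriting an existing `127.0.1.1` entry if present and appending one otherwise. The same logic, pointed at a temp file so it runs without root (`HOSTS` and its seed contents are illustrative):

```shell
#!/bin/sh
# Idempotent hosts-file update, as performed over SSH in the log.
HOSTS=$(mktemp)
NAME=ha-747000
printf '127.0.0.1 localhost\n127.0.1.1 old-name\n' > "$HOSTS"

if ! grep -q "[[:space:]]$NAME\$" "$HOSTS"; then        # name not mapped yet
  if grep -q '^127\.0\.1\.1[[:space:]]' "$HOSTS"; then  # rewrite existing entry
    sed -i "s/^127\.0\.1\.1[[:space:]].*/127.0.1.1 $NAME/" "$HOSTS"
  else
    echo "127.0.1.1 $NAME" >> "$HOSTS"                  # or append a new one
  fi
fi

grep '^127\.0\.1\.1' "$HOSTS"   # -> 127.0.1.1 ha-747000
```

Running it twice leaves the file unchanged, which is why the real command is safe to re-run on every provision.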
	I0318 11:45:28.163753   13008 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube3\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube3\minikube-integration\.minikube}
	I0318 11:45:28.163753   13008 buildroot.go:174] setting up certificates
	I0318 11:45:28.163753   13008 provision.go:84] configureAuth start
	I0318 11:45:28.163753   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-747000 ).state
	I0318 11:45:30.027731   13008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:45:30.027731   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:45:30.027731   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-747000 ).networkadapters[0]).ipaddresses[0]
	I0318 11:45:32.280153   13008 main.go:141] libmachine: [stdout =====>] : 172.30.135.65
	
	I0318 11:45:32.280153   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:45:32.289594   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-747000 ).state
	I0318 11:45:34.182838   13008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:45:34.193025   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:45:34.193025   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-747000 ).networkadapters[0]).ipaddresses[0]
	I0318 11:45:36.463235   13008 main.go:141] libmachine: [stdout =====>] : 172.30.135.65
	
	I0318 11:45:36.463235   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:45:36.466513   13008 provision.go:143] copyHostCerts
	I0318 11:45:36.466662   13008 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube3\minikube-integration\.minikube/ca.pem
	I0318 11:45:36.466859   13008 exec_runner.go:144] found C:\Users\jenkins.minikube3\minikube-integration\.minikube/ca.pem, removing ...
	I0318 11:45:36.466859   13008 exec_runner.go:203] rm: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.pem
	I0318 11:45:36.467402   13008 exec_runner.go:151] cp: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube3\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0318 11:45:36.468119   13008 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube3\minikube-integration\.minikube/cert.pem
	I0318 11:45:36.468886   13008 exec_runner.go:144] found C:\Users\jenkins.minikube3\minikube-integration\.minikube/cert.pem, removing ...
	I0318 11:45:36.468886   13008 exec_runner.go:203] rm: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cert.pem
	I0318 11:45:36.469132   13008 exec_runner.go:151] cp: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube3\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0318 11:45:36.470097   13008 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube3\minikube-integration\.minikube/key.pem
	I0318 11:45:36.470934   13008 exec_runner.go:144] found C:\Users\jenkins.minikube3\minikube-integration\.minikube/key.pem, removing ...
	I0318 11:45:36.470934   13008 exec_runner.go:203] rm: C:\Users\jenkins.minikube3\minikube-integration\.minikube\key.pem
	I0318 11:45:36.471344   13008 exec_runner.go:151] cp: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube3\minikube-integration\.minikube/key.pem (1679 bytes)
	I0318 11:45:36.472363   13008 provision.go:117] generating server cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-747000 san=[127.0.0.1 172.30.135.65 ha-747000 localhost minikube]
	I0318 11:45:37.046899   13008 provision.go:177] copyRemoteCerts
	I0318 11:45:37.059060   13008 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0318 11:45:37.059060   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-747000 ).state
	I0318 11:45:38.904612   13008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:45:38.914249   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:45:38.914249   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-747000 ).networkadapters[0]).ipaddresses[0]
	I0318 11:45:41.130809   13008 main.go:141] libmachine: [stdout =====>] : 172.30.135.65
	
	I0318 11:45:41.130809   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:45:41.141594   13008 sshutil.go:53] new ssh client: &{IP:172.30.135.65 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-747000\id_rsa Username:docker}
	I0318 11:45:41.242209   13008 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.1831178s)
	I0318 11:45:41.242209   13008 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0318 11:45:41.242889   13008 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0318 11:45:41.277406   13008 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0318 11:45:41.277406   13008 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1196 bytes)
	I0318 11:45:41.316259   13008 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0318 11:45:41.316511   13008 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0318 11:45:41.356624   13008 provision.go:87] duration metric: took 13.1927742s to configureAuth
	I0318 11:45:41.356811   13008 buildroot.go:189] setting minikube options for container-runtime
	I0318 11:45:41.357355   13008 config.go:182] Loaded profile config "ha-747000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 11:45:41.357484   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-747000 ).state
	I0318 11:45:43.233188   13008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:45:43.242989   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:45:43.242989   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-747000 ).networkadapters[0]).ipaddresses[0]
	I0318 11:45:45.460315   13008 main.go:141] libmachine: [stdout =====>] : 172.30.135.65
	
	I0318 11:45:45.470734   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:45:45.475858   13008 main.go:141] libmachine: Using SSH client type: native
	I0318 11:45:45.475858   13008 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa89f80] 0xa8cb60 <nil>  [] 0s} 172.30.135.65 22 <nil> <nil>}
	I0318 11:45:45.475858   13008 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0318 11:45:45.600089   13008 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0318 11:45:45.600089   13008 buildroot.go:70] root file system type: tmpfs
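The probe above relies on GNU df's `--output` flag: the command prints a header row plus one value row, and `tail -n 1` keeps just the filesystem type. On the buildroot guest this yields `tmpfs` (the rootfs lives in RAM); on an ordinary Linux host expect `ext4`, `overlay`, `btrfs`, etc.:

```shell
#!/bin/sh
# Detect the root filesystem type the same way the log does.
fstype=$(df --output=fstype / | tail -n 1)
echo "root filesystem type: $fstype"
```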
	I0318 11:45:45.600089   13008 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0318 11:45:45.600089   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-747000 ).state
	I0318 11:45:47.414679   13008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:45:47.414679   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:45:47.426567   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-747000 ).networkadapters[0]).ipaddresses[0]
	I0318 11:45:49.680235   13008 main.go:141] libmachine: [stdout =====>] : 172.30.135.65
	
	I0318 11:45:49.680421   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:45:49.685834   13008 main.go:141] libmachine: Using SSH client type: native
	I0318 11:45:49.685834   13008 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa89f80] 0xa8cb60 <nil>  [] 0s} 172.30.135.65 22 <nil> <nil>}
	I0318 11:45:49.686416   13008 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0318 11:45:49.831730   13008 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0318 11:45:49.831730   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-747000 ).state
	I0318 11:45:51.716195   13008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:45:51.716195   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:45:51.726884   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-747000 ).networkadapters[0]).ipaddresses[0]
	I0318 11:45:53.992793   13008 main.go:141] libmachine: [stdout =====>] : 172.30.135.65
	
	I0318 11:45:53.992793   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:45:54.010416   13008 main.go:141] libmachine: Using SSH client type: native
	I0318 11:45:54.010663   13008 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa89f80] 0xa8cb60 <nil>  [] 0s} 172.30.135.65 22 <nil> <nil>}
	I0318 11:45:54.010663   13008 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0318 11:45:55.996206   13008 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0318 11:45:55.996273   13008 machine.go:97] duration metric: took 40.8032344s to provisionDockerMachine
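The `diff ... || { mv ...; systemctl ...; }` step above installs the new unit only when it differs from the live one (or, as in the "can't stat" branch here, when no unit exists yet). A sketch of that pattern against temp files, so it runs without systemd or root; file names and contents are illustrative:

```shell
#!/bin/sh
# Install-if-changed: replace the target only when the staged copy differs.
unit=$(mktemp -u)            # name only, file not created: the missing-unit case
staged=$(mktemp)
printf '[Unit]\nDescription=demo\n' > "$staged"

if ! diff -u "$unit" "$staged" >/dev/null 2>&1; then
  mv "$staged" "$unit"
  # real step: sudo systemctl daemon-reload && sudo systemctl enable docker \
  #            && sudo systemctl restart docker
fi

head -n 1 "$unit"   # -> [Unit]
```

Skipping the reload when nothing changed is what keeps repeated provisioning runs from needlessly restarting Docker.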
	I0318 11:45:55.996273   13008 client.go:171] duration metric: took 1m44.1013705s to LocalClient.Create
	I0318 11:45:55.996331   13008 start.go:167] duration metric: took 1m44.1015789s to libmachine.API.Create "ha-747000"
	I0318 11:45:55.996331   13008 start.go:293] postStartSetup for "ha-747000" (driver="hyperv")
	I0318 11:45:55.996331   13008 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0318 11:45:56.007051   13008 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0318 11:45:56.007051   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-747000 ).state
	I0318 11:45:57.872522   13008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:45:57.882834   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:45:57.882834   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-747000 ).networkadapters[0]).ipaddresses[0]
	I0318 11:46:00.116571   13008 main.go:141] libmachine: [stdout =====>] : 172.30.135.65
	
	I0318 11:46:00.116571   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:46:00.122265   13008 sshutil.go:53] new ssh client: &{IP:172.30.135.65 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-747000\id_rsa Username:docker}
	I0318 11:46:00.222244   13008 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.2149519s)
	I0318 11:46:00.234801   13008 ssh_runner.go:195] Run: cat /etc/os-release
	I0318 11:46:00.241368   13008 info.go:137] Remote host: Buildroot 2023.02.9
	I0318 11:46:00.241639   13008 filesync.go:126] Scanning C:\Users\jenkins.minikube3\minikube-integration\.minikube\addons for local assets ...
	I0318 11:46:00.242182   13008 filesync.go:126] Scanning C:\Users\jenkins.minikube3\minikube-integration\.minikube\files for local assets ...
	I0318 11:46:00.243699   13008 filesync.go:149] local asset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\134242.pem -> 134242.pem in /etc/ssl/certs
	I0318 11:46:00.243804   13008 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\134242.pem -> /etc/ssl/certs/134242.pem
	I0318 11:46:00.254597   13008 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0318 11:46:00.270755   13008 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\134242.pem --> /etc/ssl/certs/134242.pem (1708 bytes)
	I0318 11:46:00.312647   13008 start.go:296] duration metric: took 4.3162839s for postStartSetup
	I0318 11:46:00.316829   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-747000 ).state
	I0318 11:46:02.184402   13008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:46:02.184611   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:46:02.184757   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-747000 ).networkadapters[0]).ipaddresses[0]
	I0318 11:46:04.442373   13008 main.go:141] libmachine: [stdout =====>] : 172.30.135.65
	
	I0318 11:46:04.442373   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:46:04.453229   13008 profile.go:142] Saving config to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-747000\config.json ...
	I0318 11:46:04.456240   13008 start.go:128] duration metric: took 1m52.5642822s to createHost
	I0318 11:46:04.456240   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-747000 ).state
	I0318 11:46:06.329235   13008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:46:06.329235   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:46:06.340563   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-747000 ).networkadapters[0]).ipaddresses[0]
	I0318 11:46:08.556116   13008 main.go:141] libmachine: [stdout =====>] : 172.30.135.65
	
	I0318 11:46:08.556177   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:46:08.563590   13008 main.go:141] libmachine: Using SSH client type: native
	I0318 11:46:08.564213   13008 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa89f80] 0xa8cb60 <nil>  [] 0s} 172.30.135.65 22 <nil> <nil>}
	I0318 11:46:08.564213   13008 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0318 11:46:08.682748   13008 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710762368.674503471
	
	I0318 11:46:08.682748   13008 fix.go:216] guest clock: 1710762368.674503471
	I0318 11:46:08.682748   13008 fix.go:229] Guest: 2024-03-18 11:46:08.674503471 +0000 UTC Remote: 2024-03-18 11:46:04.4562409 +0000 UTC m=+117.579389501 (delta=4.218262571s)
	I0318 11:46:08.682748   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-747000 ).state
	I0318 11:46:10.576168   13008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:46:10.589433   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:46:10.589586   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-747000 ).networkadapters[0]).ipaddresses[0]
	I0318 11:46:12.846986   13008 main.go:141] libmachine: [stdout =====>] : 172.30.135.65
	
	I0318 11:46:12.846986   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:46:12.853145   13008 main.go:141] libmachine: Using SSH client type: native
	I0318 11:46:12.854057   13008 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa89f80] 0xa8cb60 <nil>  [] 0s} 172.30.135.65 22 <nil> <nil>}
	I0318 11:46:12.854057   13008 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1710762368
	I0318 11:46:12.978087   13008 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Mar 18 11:46:08 UTC 2024
	
	I0318 11:46:12.978087   13008 fix.go:236] clock set: Mon Mar 18 11:46:08 UTC 2024
	 (err=<nil>)
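The clock fix above reads the guest clock with `date +%s.%N`, compares it to the host-side timestamp, and then settles the guest via `sudo date -s @<epoch>` over SSH. A minimal sketch of the delta arithmetic, using the epoch values recorded in this log (the real flow runs both commands on the guest as root):

```shell
# Guest vs. host clock skew, per fix.go above (values taken from this log).
guest_epoch=1710762368   # guest: date +%s.%N -> 1710762368.674503471
host_epoch=1710762364    # host reference: 2024-03-18 11:46:04 UTC
delta=$((guest_epoch - host_epoch))
echo "delta=${delta}s"   # the log records delta=4.218262571s
# the log then issues: sudo date -s @1710762368   (over SSH, as root)
```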
	I0318 11:46:12.978087   13008 start.go:83] releasing machines lock for "ha-747000", held for 2m1.0869423s
	I0318 11:46:12.978626   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-747000 ).state
	I0318 11:46:14.840474   13008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:46:14.851795   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:46:14.851795   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-747000 ).networkadapters[0]).ipaddresses[0]
	I0318 11:46:17.198650   13008 main.go:141] libmachine: [stdout =====>] : 172.30.135.65
	
	I0318 11:46:17.211374   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:46:17.215261   13008 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0318 11:46:17.215398   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-747000 ).state
	I0318 11:46:17.227256   13008 ssh_runner.go:195] Run: cat /version.json
	I0318 11:46:17.227256   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-747000 ).state
	I0318 11:46:19.369466   13008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:46:19.369466   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:46:19.369466   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-747000 ).networkadapters[0]).ipaddresses[0]
	I0318 11:46:19.380992   13008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:46:19.380992   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:46:19.380992   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-747000 ).networkadapters[0]).ipaddresses[0]
	I0318 11:46:21.933180   13008 main.go:141] libmachine: [stdout =====>] : 172.30.135.65
	
	I0318 11:46:21.933180   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:46:21.945198   13008 sshutil.go:53] new ssh client: &{IP:172.30.135.65 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-747000\id_rsa Username:docker}
	I0318 11:46:21.963910   13008 main.go:141] libmachine: [stdout =====>] : 172.30.135.65
	
	I0318 11:46:21.964898   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:46:21.964995   13008 sshutil.go:53] new ssh client: &{IP:172.30.135.65 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-747000\id_rsa Username:docker}
	I0318 11:46:22.123390   13008 ssh_runner.go:235] Completed: cat /version.json: (4.896097s)
	I0318 11:46:22.123390   13008 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.9080232s)
	I0318 11:46:22.140172   13008 ssh_runner.go:195] Run: systemctl --version
	I0318 11:46:22.159231   13008 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0318 11:46:22.170015   13008 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0318 11:46:22.183662   13008 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0318 11:46:22.205768   13008 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0318 11:46:22.205768   13008 start.go:494] detecting cgroup driver to use...
	I0318 11:46:22.208933   13008 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0318 11:46:22.249951   13008 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0318 11:46:22.285649   13008 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0318 11:46:22.302648   13008 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0318 11:46:22.313875   13008 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0318 11:46:22.347924   13008 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0318 11:46:22.378024   13008 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0318 11:46:22.405948   13008 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0318 11:46:22.434194   13008 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0318 11:46:22.464641   13008 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
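The run of `sed` commands above rewrites /etc/containerd/config.toml in place to force the cgroupfs driver, the pause:3.9 sandbox image, and the /etc/cni/net.d conf dir. A sketch of the same edits against a scratch copy of the file (no sudo; the starting values in the heredoc are hypothetical):

```shell
# Apply the containerd config.toml edits from the log to a scratch file.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
    SystemdCgroup = true
    sandbox_image = "registry.k8s.io/pause:3.8"
    conf_dir = "/etc/cni/net.mk"
EOF
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$cfg"
sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' "$cfg"
sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' "$cfg"
cat "$cfg"
```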
	I0318 11:46:22.491983   13008 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0318 11:46:22.520957   13008 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0318 11:46:22.551613   13008 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 11:46:22.727855   13008 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0318 11:46:22.757512   13008 start.go:494] detecting cgroup driver to use...
	I0318 11:46:22.769010   13008 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0318 11:46:22.801357   13008 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0318 11:46:22.833299   13008 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0318 11:46:22.880005   13008 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0318 11:46:22.915002   13008 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0318 11:46:22.945885   13008 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0318 11:46:23.011251   13008 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0318 11:46:23.030827   13008 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0318 11:46:23.074737   13008 ssh_runner.go:195] Run: which cri-dockerd
	I0318 11:46:23.093090   13008 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0318 11:46:23.108462   13008 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0318 11:46:23.149215   13008 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0318 11:46:23.330658   13008 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0318 11:46:23.481933   13008 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0318 11:46:23.482045   13008 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0318 11:46:23.520424   13008 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 11:46:23.671160   13008 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0318 11:46:26.123977   13008 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.4505835s)
	I0318 11:46:26.138369   13008 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0318 11:46:26.169524   13008 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0318 11:46:26.205440   13008 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0318 11:46:26.367415   13008 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0318 11:46:26.547310   13008 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 11:46:26.704491   13008 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0318 11:46:26.741226   13008 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0318 11:46:26.772981   13008 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 11:46:26.945720   13008 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0318 11:46:27.037532   13008 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0318 11:46:27.048854   13008 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0318 11:46:27.059397   13008 start.go:562] Will wait 60s for crictl version
	I0318 11:46:27.070692   13008 ssh_runner.go:195] Run: which crictl
	I0318 11:46:27.090177   13008 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0318 11:46:27.152133   13008 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  25.0.4
	RuntimeApiVersion:  v1
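`start.go:541` above waits up to 60s for /var/run/cri-dockerd.sock to appear by polling `stat`. The same poll shape with a scratch socket path and a much shorter budget (the timings and file here are illustrative, not minikube's):

```shell
# Poll for a path to appear, as the "Will wait 60s for socket path" step does.
sock=$(mktemp -u)
( sleep 0.2; : > "$sock" ) &   # stand-in for cri-dockerd creating its socket
tries=0
until [ -e "$sock" ] || [ "$tries" -ge 50 ]; do
  tries=$((tries + 1))
  sleep 0.1
done
wait
echo "path ready after $tries polls"
```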
	I0318 11:46:27.163143   13008 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0318 11:46:27.211966   13008 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0318 11:46:27.242452   13008 out.go:204] * Preparing Kubernetes v1.28.4 on Docker 25.0.4 ...
	I0318 11:46:27.243048   13008 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0318 11:46:27.247444   13008 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0318 11:46:27.247444   13008 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0318 11:46:27.247444   13008 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0318 11:46:27.247444   13008 ip.go:207] Found interface: {Index:9 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:93:df:b5 Flags:up|broadcast|multicast|running}
	I0318 11:46:27.249700   13008 ip.go:210] interface addr: fe80::572b:6b1d:9130:e88b/64
	I0318 11:46:27.250778   13008 ip.go:210] interface addr: 172.30.128.1/20
	I0318 11:46:27.262066   13008 ssh_runner.go:195] Run: grep 172.30.128.1	host.minikube.internal$ /etc/hosts
	I0318 11:46:27.262851   13008 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.30.128.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
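The `/bin/bash -c "{ grep -v ... }"` one-liner above updates /etc/hosts idempotently: it strips any stale `host.minikube.internal` line, then appends the current gateway IP. The same pattern against a scratch file (the 172.30.128.9 stale entry is invented for illustration):

```shell
# Idempotent hosts-file update, mirroring the grep -v / append pattern above.
hosts=$(mktemp)
printf '127.0.0.1\tlocalhost\n172.30.128.9\thost.minikube.internal\n' > "$hosts"
{ grep -v "$(printf '\t')host.minikube.internal$" "$hosts"
  printf '172.30.128.1\thost.minikube.internal\n'; } > "$hosts.new"
mv "$hosts.new" "$hosts"
cat "$hosts"
```

Because the stale line is removed before the append, rerunning the block leaves exactly one `host.minikube.internal` entry.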
	I0318 11:46:27.300758   13008 kubeadm.go:877] updating cluster {Name:ha-747000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4
ClusterName:ha-747000 Namespace:default APIServerHAVIP:172.30.143.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.30.135.65 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOption
s:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0318 11:46:27.300758   13008 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0318 11:46:27.306184   13008 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0318 11:46:27.333416   13008 docker.go:685] Got preloaded images: 
	I0318 11:46:27.333506   13008 docker.go:691] registry.k8s.io/kube-apiserver:v1.28.4 wasn't preloaded
	I0318 11:46:27.345750   13008 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0318 11:46:27.375431   13008 ssh_runner.go:195] Run: which lz4
	I0318 11:46:27.381639   13008 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0318 11:46:27.392862   13008 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0318 11:46:27.398844   13008 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0318 11:46:27.399052   13008 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (423165415 bytes)
	I0318 11:46:29.670065   13008 docker.go:649] duration metric: took 2.2883084s to copy over tarball
	I0318 11:46:29.689407   13008 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0318 11:46:39.977608   13008 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (10.2881249s)
	I0318 11:46:39.977608   13008 ssh_runner.go:146] rm: /preloaded.tar.lz4
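The preload step above stats the remote path first and only scps the 423 MB tarball when the stat fails, then extracts and removes it. A sketch of that existence-check pattern on a scratch path (the scp and the `tar -I lz4` extraction are stubbed out here):

```shell
# Copy-only-if-absent, as ssh_runner's existence check above does.
f=$(mktemp -d)/preloaded.tar.lz4
if ! stat -c "%s %y" "$f" >/dev/null 2>&1; then
  echo "absent: would scp preload tarball to $f"
  : > "$f"   # stand-in for the scp
fi
stat -c "%s" "$f"   # a second run would now skip the copy
```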
	I0318 11:46:40.047050   13008 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0318 11:46:40.065718   13008 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2629 bytes)
	I0318 11:46:40.105195   13008 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 11:46:40.288361   13008 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0318 11:46:43.428707   13008 ssh_runner.go:235] Completed: sudo systemctl restart docker: (3.1403226s)
	I0318 11:46:43.440165   13008 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0318 11:46:43.463294   13008 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.4
	registry.k8s.io/kube-proxy:v1.28.4
	registry.k8s.io/kube-controller-manager:v1.28.4
	registry.k8s.io/kube-scheduler:v1.28.4
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0318 11:46:43.463294   13008 cache_images.go:84] Images are preloaded, skipping loading
	I0318 11:46:43.463294   13008 kubeadm.go:928] updating node { 172.30.135.65 8443 v1.28.4 docker true true} ...
	I0318 11:46:43.463294   13008 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-747000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.30.135.65
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:ha-747000 Namespace:default APIServerHAVIP:172.30.143.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0318 11:46:43.475511   13008 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0318 11:46:43.510039   13008 cni.go:84] Creating CNI manager for ""
	I0318 11:46:43.510039   13008 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0318 11:46:43.510039   13008 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0318 11:46:43.510039   13008 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.30.135.65 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-747000 NodeName:ha-747000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.30.135.65"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.30.135.65 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes
/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0318 11:46:43.510039   13008 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.30.135.65
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-747000"
	  kubeletExtraArgs:
	    node-ip: 172.30.135.65
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.30.135.65"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0318 11:46:43.510039   13008 kube-vip.go:111] generating kube-vip config ...
	I0318 11:46:43.521535   13008 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0318 11:46:43.544280   13008 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0318 11:46:43.546837   13008 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.30.143.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0318 11:46:43.558214   13008 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0318 11:46:43.571000   13008 binaries.go:44] Found k8s binaries, skipping transfer
	I0318 11:46:43.583638   13008 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0318 11:46:43.601075   13008 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0318 11:46:43.628138   13008 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0318 11:46:43.653641   13008 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2154 bytes)
	I0318 11:46:43.678284   13008 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0318 11:46:43.716355   13008 ssh_runner.go:195] Run: grep 172.30.143.254	control-plane.minikube.internal$ /etc/hosts
	I0318 11:46:43.718897   13008 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.30.143.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 11:46:43.752058   13008 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 11:46:43.902848   13008 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 11:46:43.928902   13008 certs.go:68] Setting up C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-747000 for IP: 172.30.135.65
	I0318 11:46:43.928960   13008 certs.go:194] generating shared ca certs ...
	I0318 11:46:43.929033   13008 certs.go:226] acquiring lock for ca certs: {Name:mk09ff4ada22228900e1815c250154c7d8d76854 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 11:46:43.929587   13008 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.key
	I0318 11:46:43.930209   13008 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.key
	I0318 11:46:43.930536   13008 certs.go:256] generating profile certs ...
	I0318 11:46:43.931127   13008 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-747000\client.key
	I0318 11:46:43.931127   13008 crypto.go:68] Generating cert C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-747000\client.crt with IP's: []
	I0318 11:46:44.147957   13008 crypto.go:156] Writing cert to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-747000\client.crt ...
	I0318 11:46:44.147957   13008 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-747000\client.crt: {Name:mkf0926a62a72687e3478d66ecfd2d91c0286649 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 11:46:44.152355   13008 crypto.go:164] Writing key to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-747000\client.key ...
	I0318 11:46:44.152355   13008 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-747000\client.key: {Name:mk2825fe1c54fbd811fb2ec7fcf9ab33d1c62f84 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 11:46:44.154003   13008 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-747000\apiserver.key.8170f993
	I0318 11:46:44.155189   13008 crypto.go:68] Generating cert C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-747000\apiserver.crt.8170f993 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.30.135.65 172.30.143.254]
	I0318 11:46:44.437644   13008 crypto.go:156] Writing cert to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-747000\apiserver.crt.8170f993 ...
	I0318 11:46:44.437644   13008 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-747000\apiserver.crt.8170f993: {Name:mkcc8019f72c1787cfa5e3023e126ce259846293 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 11:46:44.439558   13008 crypto.go:164] Writing key to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-747000\apiserver.key.8170f993 ...
	I0318 11:46:44.439558   13008 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-747000\apiserver.key.8170f993: {Name:mk57b087c93ad949bc4139008701ca0f8eb1f41e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 11:46:44.440918   13008 certs.go:381] copying C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-747000\apiserver.crt.8170f993 -> C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-747000\apiserver.crt
	I0318 11:46:44.448110   13008 certs.go:385] copying C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-747000\apiserver.key.8170f993 -> C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-747000\apiserver.key
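The apiserver profile cert above is generated at `crypto.go:68` with five IP SANs (service VIP, loopback, cluster IP, node IP, and the HA VIP 172.30.143.254). A sketch of a cert carrying the same SANs via openssl, self-signed into a scratch dir (the real cert is signed by minikubeCA, not self-signed):

```shell
# Self-signed stand-in for apiserver.crt with the SANs listed in this log.
d=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout "$d/apiserver.key" -out "$d/apiserver.crt" \
  -subj "/CN=minikube" \
  -addext "subjectAltName=IP:10.96.0.1,IP:127.0.0.1,IP:10.0.0.1,IP:172.30.135.65,IP:172.30.143.254" \
  2>/dev/null
# confirm the SANs landed in the cert
openssl x509 -in "$d/apiserver.crt" -noout -text | grep 'IP Address'
```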
	I0318 11:46:44.452045   13008 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-747000\proxy-client.key
	I0318 11:46:44.452045   13008 crypto.go:68] Generating cert C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-747000\proxy-client.crt with IP's: []
	I0318 11:46:44.609396   13008 crypto.go:156] Writing cert to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-747000\proxy-client.crt ...
	I0318 11:46:44.609396   13008 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-747000\proxy-client.crt: {Name:mk4475051e95a4a9524d668d10fde55f3898dcd7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 11:46:44.612089   13008 crypto.go:164] Writing key to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-747000\proxy-client.key ...
	I0318 11:46:44.612089   13008 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-747000\proxy-client.key: {Name:mkb9511d07ddcce77bfc37ffba5d0139c7c0cdb4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 11:46:44.613465   13008 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0318 11:46:44.614480   13008 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0318 11:46:44.614656   13008 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0318 11:46:44.614656   13008 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0318 11:46:44.614656   13008 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-747000\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0318 11:46:44.614656   13008 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-747000\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0318 11:46:44.615223   13008 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-747000\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0318 11:46:44.619704   13008 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-747000\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0318 11:46:44.630171   13008 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\13424.pem (1338 bytes)
	W0318 11:46:44.630171   13008 certs.go:480] ignoring C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\13424_empty.pem, impossibly tiny 0 bytes
	I0318 11:46:44.630171   13008 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0318 11:46:44.630910   13008 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0318 11:46:44.630910   13008 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0318 11:46:44.630910   13008 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0318 11:46:44.631920   13008 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\134242.pem (1708 bytes)
	I0318 11:46:44.632269   13008 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\13424.pem -> /usr/share/ca-certificates/13424.pem
	I0318 11:46:44.632269   13008 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\134242.pem -> /usr/share/ca-certificates/134242.pem
	I0318 11:46:44.632269   13008 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0318 11:46:44.633986   13008 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0318 11:46:44.672830   13008 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0318 11:46:44.709674   13008 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0318 11:46:44.750028   13008 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0318 11:46:44.788584   13008 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-747000\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0318 11:46:44.827021   13008 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-747000\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0318 11:46:44.864525   13008 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-747000\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0318 11:46:44.900890   13008 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-747000\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0318 11:46:44.936552   13008 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\13424.pem --> /usr/share/ca-certificates/13424.pem (1338 bytes)
	I0318 11:46:44.974197   13008 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\134242.pem --> /usr/share/ca-certificates/134242.pem (1708 bytes)
	I0318 11:46:45.007962   13008 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0318 11:46:45.056567   13008 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0318 11:46:45.104189   13008 ssh_runner.go:195] Run: openssl version
	I0318 11:46:45.125917   13008 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/134242.pem && ln -fs /usr/share/ca-certificates/134242.pem /etc/ssl/certs/134242.pem"
	I0318 11:46:45.157407   13008 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/134242.pem
	I0318 11:46:45.165942   13008 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 18 11:25 /usr/share/ca-certificates/134242.pem
	I0318 11:46:45.178153   13008 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/134242.pem
	I0318 11:46:45.198094   13008 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/134242.pem /etc/ssl/certs/3ec20f2e.0"
	I0318 11:46:45.227910   13008 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0318 11:46:45.254850   13008 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0318 11:46:45.257890   13008 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 18 11:07 /usr/share/ca-certificates/minikubeCA.pem
	I0318 11:46:45.263587   13008 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0318 11:46:45.290539   13008 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0318 11:46:45.321016   13008 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13424.pem && ln -fs /usr/share/ca-certificates/13424.pem /etc/ssl/certs/13424.pem"
	I0318 11:46:45.350958   13008 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13424.pem
	I0318 11:46:45.359394   13008 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 18 11:25 /usr/share/ca-certificates/13424.pem
	I0318 11:46:45.372307   13008 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13424.pem
	I0318 11:46:45.393477   13008 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13424.pem /etc/ssl/certs/51391683.0"
	I0318 11:46:45.421369   13008 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0318 11:46:45.426798   13008 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0318 11:46:45.426798   13008 kubeadm.go:391] StartCluster: {Name:ha-747000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-747000 Namespace:default APIServerHAVIP:172.30.143.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.30.135.65 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 11:46:45.437183   13008 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0318 11:46:45.472976   13008 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0318 11:46:45.498331   13008 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0318 11:46:45.525240   13008 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 11:46:45.540517   13008 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 11:46:45.540517   13008 kubeadm.go:156] found existing configuration files:
	
	I0318 11:46:45.551001   13008 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0318 11:46:45.565933   13008 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 11:46:45.575791   13008 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 11:46:45.604680   13008 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0318 11:46:45.618892   13008 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 11:46:45.630737   13008 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 11:46:45.656630   13008 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0318 11:46:45.670944   13008 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 11:46:45.682841   13008 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 11:46:45.711131   13008 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0318 11:46:45.712984   13008 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 11:46:45.740301   13008 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0318 11:46:45.755587   13008 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0318 11:46:46.166066   13008 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0318 11:46:58.721146   13008 kubeadm.go:309] [init] Using Kubernetes version: v1.28.4
	I0318 11:46:58.721352   13008 kubeadm.go:309] [preflight] Running pre-flight checks
	I0318 11:46:58.721597   13008 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0318 11:46:58.721844   13008 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0318 11:46:58.722194   13008 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0318 11:46:58.722353   13008 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0318 11:46:58.728031   13008 out.go:204]   - Generating certificates and keys ...
	I0318 11:46:58.728587   13008 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0318 11:46:58.728721   13008 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0318 11:46:58.728721   13008 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0318 11:46:58.728721   13008 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0318 11:46:58.729264   13008 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0318 11:46:58.729376   13008 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0318 11:46:58.729376   13008 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0318 11:46:58.729376   13008 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [ha-747000 localhost] and IPs [172.30.135.65 127.0.0.1 ::1]
	I0318 11:46:58.729934   13008 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0318 11:46:58.729989   13008 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [ha-747000 localhost] and IPs [172.30.135.65 127.0.0.1 ::1]
	I0318 11:46:58.729989   13008 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0318 11:46:58.729989   13008 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0318 11:46:58.729989   13008 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0318 11:46:58.729989   13008 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0318 11:46:58.729989   13008 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0318 11:46:58.729989   13008 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0318 11:46:58.729989   13008 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0318 11:46:58.731072   13008 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0318 11:46:58.731204   13008 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0318 11:46:58.731204   13008 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0318 11:46:58.733932   13008 out.go:204]   - Booting up control plane ...
	I0318 11:46:58.733932   13008 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0318 11:46:58.733932   13008 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0318 11:46:58.733932   13008 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0318 11:46:58.733932   13008 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0318 11:46:58.733932   13008 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0318 11:46:58.735049   13008 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0318 11:46:58.735049   13008 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0318 11:46:58.735605   13008 kubeadm.go:309] [apiclient] All control plane components are healthy after 7.584491 seconds
	I0318 11:46:58.735738   13008 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0318 11:46:58.735738   13008 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0318 11:46:58.735738   13008 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0318 11:46:58.735738   13008 kubeadm.go:309] [mark-control-plane] Marking the node ha-747000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0318 11:46:58.735738   13008 kubeadm.go:309] [bootstrap-token] Using token: 9x06m0.cfr1err7b224rbom
	I0318 11:46:58.739324   13008 out.go:204]   - Configuring RBAC rules ...
	I0318 11:46:58.739324   13008 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0318 11:46:58.739324   13008 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0318 11:46:58.739324   13008 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0318 11:46:58.740942   13008 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0318 11:46:58.741015   13008 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0318 11:46:58.741015   13008 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0318 11:46:58.741015   13008 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0318 11:46:58.741015   13008 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0318 11:46:58.741015   13008 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0318 11:46:58.741015   13008 kubeadm.go:309] 
	I0318 11:46:58.741015   13008 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0318 11:46:58.741015   13008 kubeadm.go:309] 
	I0318 11:46:58.741015   13008 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0318 11:46:58.741015   13008 kubeadm.go:309] 
	I0318 11:46:58.741015   13008 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0318 11:46:58.741015   13008 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0318 11:46:58.741015   13008 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0318 11:46:58.741015   13008 kubeadm.go:309] 
	I0318 11:46:58.741015   13008 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0318 11:46:58.741015   13008 kubeadm.go:309] 
	I0318 11:46:58.741015   13008 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0318 11:46:58.741015   13008 kubeadm.go:309] 
	I0318 11:46:58.741015   13008 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0318 11:46:58.741015   13008 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0318 11:46:58.741015   13008 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0318 11:46:58.741015   13008 kubeadm.go:309] 
	I0318 11:46:58.741015   13008 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0318 11:46:58.741015   13008 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0318 11:46:58.741015   13008 kubeadm.go:309] 
	I0318 11:46:58.741015   13008 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 9x06m0.cfr1err7b224rbom \
	I0318 11:46:58.741015   13008 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:0e85bffcabbe1ecc87b47edfa4897ded335ea6d7a4ea5ae94c3523fb207c8760 \
	I0318 11:46:58.741015   13008 kubeadm.go:309] 	--control-plane 
	I0318 11:46:58.741015   13008 kubeadm.go:309] 
	I0318 11:46:58.741015   13008 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0318 11:46:58.741015   13008 kubeadm.go:309] 
	I0318 11:46:58.741015   13008 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 9x06m0.cfr1err7b224rbom \
	I0318 11:46:58.741015   13008 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:0e85bffcabbe1ecc87b47edfa4897ded335ea6d7a4ea5ae94c3523fb207c8760 
	I0318 11:46:58.741015   13008 cni.go:84] Creating CNI manager for ""
	I0318 11:46:58.741015   13008 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0318 11:46:58.747645   13008 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0318 11:46:58.765701   13008 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0318 11:46:58.777338   13008 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0318 11:46:58.777403   13008 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0318 11:46:58.839563   13008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0318 11:46:59.996446   13008 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.156875s)
	I0318 11:46:59.996677   13008 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0318 11:47:00.012090   13008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-747000 minikube.k8s.io/updated_at=2024_03_18T11_46_59_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=16bdcbec856cf730004e5bed78d1b7625f13388a minikube.k8s.io/name=ha-747000 minikube.k8s.io/primary=true
	I0318 11:47:00.013236   13008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 11:47:00.043196   13008 ops.go:34] apiserver oom_adj: -16
	I0318 11:47:00.267514   13008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 11:47:00.777611   13008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 11:47:01.267098   13008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 11:47:01.768841   13008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 11:47:02.272827   13008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 11:47:02.789245   13008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 11:47:03.278739   13008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 11:47:03.771564   13008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 11:47:04.273472   13008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 11:47:04.773277   13008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 11:47:05.265686   13008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 11:47:05.782806   13008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 11:47:06.264657   13008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 11:47:06.770106   13008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 11:47:07.279524   13008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 11:47:07.773161   13008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 11:47:08.265477   13008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 11:47:08.765239   13008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 11:47:09.274047   13008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 11:47:09.771743   13008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 11:47:10.295450   13008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 11:47:10.770812   13008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 11:47:11.278601   13008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 11:47:11.773069   13008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 11:47:11.948247   13008 kubeadm.go:1107] duration metric: took 11.9514063s to wait for elevateKubeSystemPrivileges
	W0318 11:47:11.948247   13008 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0318 11:47:11.948247   13008 kubeadm.go:393] duration metric: took 26.5212528s to StartCluster
	I0318 11:47:11.948247   13008 settings.go:142] acquiring lock: {Name:mke99fb8c09012609ce6804e7dfd4d68f5541df7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 11:47:11.948247   13008 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	I0318 11:47:11.950074   13008 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\kubeconfig: {Name:mk966a7640504e03827322930a51a762b5508893 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 11:47:11.951355   13008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0318 11:47:11.951355   13008 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0318 11:47:11.951355   13008 start.go:232] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:172.30.135.65 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 11:47:11.951355   13008 start.go:240] waiting for startup goroutines ...
	I0318 11:47:11.951355   13008 addons.go:69] Setting storage-provisioner=true in profile "ha-747000"
	I0318 11:47:11.951355   13008 addons.go:234] Setting addon storage-provisioner=true in "ha-747000"
	I0318 11:47:11.951355   13008 host.go:66] Checking if "ha-747000" exists ...
	I0318 11:47:11.951355   13008 addons.go:69] Setting default-storageclass=true in profile "ha-747000"
	I0318 11:47:11.951355   13008 config.go:182] Loaded profile config "ha-747000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 11:47:11.951355   13008 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-747000"
	I0318 11:47:11.952885   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-747000 ).state
	I0318 11:47:11.952885   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-747000 ).state
	I0318 11:47:12.100580   13008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.30.128.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0318 11:47:12.791374   13008 start.go:948] {"host.minikube.internal": 172.30.128.1} host record injected into CoreDNS's ConfigMap
	I0318 11:47:14.185805   13008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:47:14.191094   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:47:14.192015   13008 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	I0318 11:47:14.192783   13008 kapi.go:59] client config for ha-747000: &rest.Config{Host:"https://172.30.143.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\ha-747000\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\ha-747000\\client.key", CAFile:"C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1e8b2e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0318 11:47:14.194231   13008 cert_rotation.go:137] Starting client certificate rotation controller
	I0318 11:47:14.194283   13008 addons.go:234] Setting addon default-storageclass=true in "ha-747000"
	I0318 11:47:14.194283   13008 host.go:66] Checking if "ha-747000" exists ...
	I0318 11:47:14.195859   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-747000 ).state
	I0318 11:47:14.424311   13008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:47:14.431023   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:47:14.435126   13008 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 11:47:14.437925   13008 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0318 11:47:14.437925   13008 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0318 11:47:14.437925   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-747000 ).state
	I0318 11:47:16.289811   13008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:47:16.299233   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:47:16.299297   13008 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0318 11:47:16.299297   13008 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0318 11:47:16.299297   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-747000 ).state
	I0318 11:47:16.521709   13008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:47:16.521709   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:47:16.534246   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-747000 ).networkadapters[0]).ipaddresses[0]
	I0318 11:47:18.454594   13008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:47:18.454594   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:47:18.456008   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-747000 ).networkadapters[0]).ipaddresses[0]
	I0318 11:47:19.192429   13008 main.go:141] libmachine: [stdout =====>] : 172.30.135.65
	
	I0318 11:47:19.192429   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:47:19.192429   13008 sshutil.go:53] new ssh client: &{IP:172.30.135.65 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-747000\id_rsa Username:docker}
	I0318 11:47:19.322928   13008 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0318 11:47:20.988781   13008 main.go:141] libmachine: [stdout =====>] : 172.30.135.65
	
	I0318 11:47:20.999854   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:47:21.000055   13008 sshutil.go:53] new ssh client: &{IP:172.30.135.65 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-747000\id_rsa Username:docker}
	I0318 11:47:21.128921   13008 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0318 11:47:21.370845   13008 round_trippers.go:463] GET https://172.30.143.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0318 11:47:21.370931   13008 round_trippers.go:469] Request Headers:
	I0318 11:47:21.370931   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:47:21.371016   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:47:21.386557   13008 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I0318 11:47:21.388813   13008 round_trippers.go:463] PUT https://172.30.143.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0318 11:47:21.388856   13008 round_trippers.go:469] Request Headers:
	I0318 11:47:21.388903   13008 round_trippers.go:473]     Content-Type: application/json
	I0318 11:47:21.388903   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:47:21.388964   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:47:21.396207   13008 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0318 11:47:21.401037   13008 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0318 11:47:21.403388   13008 addons.go:505] duration metric: took 9.4519636s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0318 11:47:21.403966   13008 start.go:245] waiting for cluster config update ...
	I0318 11:47:21.403966   13008 start.go:254] writing updated cluster config ...
	I0318 11:47:21.406330   13008 out.go:177] 
	I0318 11:47:21.418274   13008 config.go:182] Loaded profile config "ha-747000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 11:47:21.418274   13008 profile.go:142] Saving config to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-747000\config.json ...
	I0318 11:47:21.419043   13008 out.go:177] * Starting "ha-747000-m02" control-plane node in "ha-747000" cluster
	I0318 11:47:21.424695   13008 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0318 11:47:21.424695   13008 cache.go:56] Caching tarball of preloaded images
	I0318 11:47:21.427674   13008 preload.go:173] Found C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0318 11:47:21.427937   13008 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0318 11:47:21.427937   13008 profile.go:142] Saving config to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-747000\config.json ...
	I0318 11:47:21.430331   13008 start.go:360] acquireMachinesLock for ha-747000-m02: {Name:mk88ace50ad3bf72786f3a589a5328076247f3a1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 11:47:21.432041   13008 start.go:364] duration metric: took 1.7102ms to acquireMachinesLock for "ha-747000-m02"
	I0318 11:47:21.432238   13008 start.go:93] Provisioning new machine with config: &{Name:ha-747000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-747000 Namespace:default APIServerHAVIP:172.30.143.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.30.135.65 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 11:47:21.432238   13008 start.go:125] createHost starting for "m02" (driver="hyperv")
	I0318 11:47:21.433784   13008 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0318 11:47:21.433784   13008 start.go:159] libmachine.API.Create for "ha-747000" (driver="hyperv")
	I0318 11:47:21.433784   13008 client.go:168] LocalClient.Create starting
	I0318 11:47:21.433784   13008 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem
	I0318 11:47:21.439122   13008 main.go:141] libmachine: Decoding PEM data...
	I0318 11:47:21.439122   13008 main.go:141] libmachine: Parsing certificate...
	I0318 11:47:21.439313   13008 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem
	I0318 11:47:21.439562   13008 main.go:141] libmachine: Decoding PEM data...
	I0318 11:47:21.439562   13008 main.go:141] libmachine: Parsing certificate...
	I0318 11:47:21.439794   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0318 11:47:23.241743   13008 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0318 11:47:23.241743   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:47:23.245380   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0318 11:47:24.909850   13008 main.go:141] libmachine: [stdout =====>] : False
	
	I0318 11:47:24.909850   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:47:24.917979   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0318 11:47:26.312968   13008 main.go:141] libmachine: [stdout =====>] : True
	
	I0318 11:47:26.320652   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:47:26.320652   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0318 11:47:29.610445   13008 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0318 11:47:29.621431   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:47:29.623610   13008 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube3/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.32.1-1710520390-17991-amd64.iso...
	I0318 11:47:30.021927   13008 main.go:141] libmachine: Creating SSH key...
	I0318 11:47:30.119601   13008 main.go:141] libmachine: Creating VM...
	I0318 11:47:30.119601   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0318 11:47:32.829630   13008 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0318 11:47:32.829630   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:47:32.841311   13008 main.go:141] libmachine: Using switch "Default Switch"
	I0318 11:47:32.841462   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0318 11:47:34.456936   13008 main.go:141] libmachine: [stdout =====>] : True
	
	I0318 11:47:34.456936   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:47:34.456936   13008 main.go:141] libmachine: Creating VHD
	I0318 11:47:34.465658   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-747000-m02\fixed.vhd' -SizeBytes 10MB -Fixed
	I0318 11:47:37.929435   13008 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube3
	Path                    : C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-747000-m02\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 85777A93-61BC-4A57-AF64-27F6DBAE651E
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0318 11:47:37.939825   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:47:37.939825   13008 main.go:141] libmachine: Writing magic tar header
	I0318 11:47:37.939928   13008 main.go:141] libmachine: Writing SSH key tar header
	I0318 11:47:37.947351   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-747000-m02\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-747000-m02\disk.vhd' -VHDType Dynamic -DeleteSource
	I0318 11:47:40.935326   13008 main.go:141] libmachine: [stdout =====>] : 
	I0318 11:47:40.935326   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:47:40.935326   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-747000-m02\disk.vhd' -SizeBytes 20000MB
	I0318 11:47:43.287330   13008 main.go:141] libmachine: [stdout =====>] : 
	I0318 11:47:43.297272   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:47:43.297272   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-747000-m02 -Path 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-747000-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0318 11:47:46.626949   13008 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	ha-747000-m02 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0318 11:47:46.626949   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:47:46.626949   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-747000-m02 -DynamicMemoryEnabled $false
	I0318 11:47:48.663588   13008 main.go:141] libmachine: [stdout =====>] : 
	I0318 11:47:48.673992   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:47:48.673992   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-747000-m02 -Count 2
	I0318 11:47:50.655418   13008 main.go:141] libmachine: [stdout =====>] : 
	I0318 11:47:50.655418   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:47:50.655418   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-747000-m02 -Path 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-747000-m02\boot2docker.iso'
	I0318 11:47:53.022424   13008 main.go:141] libmachine: [stdout =====>] : 
	I0318 11:47:53.032384   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:47:53.032384   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-747000-m02 -Path 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-747000-m02\disk.vhd'
	I0318 11:47:55.436382   13008 main.go:141] libmachine: [stdout =====>] : 
	I0318 11:47:55.447458   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:47:55.447458   13008 main.go:141] libmachine: Starting VM...
	I0318 11:47:55.447554   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-747000-m02
	I0318 11:47:58.279520   13008 main.go:141] libmachine: [stdout =====>] : 
	I0318 11:47:58.279520   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:47:58.279520   13008 main.go:141] libmachine: Waiting for host to start...
	I0318 11:47:58.279520   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-747000-m02 ).state
	I0318 11:48:00.418030   13008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:48:00.418030   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:48:00.427738   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-747000-m02 ).networkadapters[0]).ipaddresses[0]
	I0318 11:48:02.893786   13008 main.go:141] libmachine: [stdout =====>] : 
	I0318 11:48:02.893880   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:48:03.901418   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-747000-m02 ).state
	I0318 11:48:05.971486   13008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:48:05.972349   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:48:05.972477   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-747000-m02 ).networkadapters[0]).ipaddresses[0]
	I0318 11:48:08.337044   13008 main.go:141] libmachine: [stdout =====>] : 
	I0318 11:48:08.348388   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:48:09.357755   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-747000-m02 ).state
	I0318 11:48:11.366189   13008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:48:11.366189   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:48:11.366189   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-747000-m02 ).networkadapters[0]).ipaddresses[0]
	I0318 11:48:13.807391   13008 main.go:141] libmachine: [stdout =====>] : 
	I0318 11:48:13.815174   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:48:14.816763   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-747000-m02 ).state
	I0318 11:48:16.942187   13008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:48:16.942249   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:48:16.942317   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-747000-m02 ).networkadapters[0]).ipaddresses[0]
	I0318 11:48:19.365653   13008 main.go:141] libmachine: [stdout =====>] : 
	I0318 11:48:19.365653   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:48:20.390660   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-747000-m02 ).state
	I0318 11:48:22.498634   13008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:48:22.510431   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:48:22.510431   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-747000-m02 ).networkadapters[0]).ipaddresses[0]
	I0318 11:48:24.898503   13008 main.go:141] libmachine: [stdout =====>] : 172.30.142.66
	
	I0318 11:48:24.898646   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:48:24.898851   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-747000-m02 ).state
	I0318 11:48:26.863828   13008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:48:26.863828   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:48:26.863828   13008 machine.go:94] provisionDockerMachine start ...
	I0318 11:48:26.863828   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-747000-m02 ).state
	I0318 11:48:28.876801   13008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:48:28.887527   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:48:28.887527   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-747000-m02 ).networkadapters[0]).ipaddresses[0]
	I0318 11:48:31.170180   13008 main.go:141] libmachine: [stdout =====>] : 172.30.142.66
	
	I0318 11:48:31.170180   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:48:31.176324   13008 main.go:141] libmachine: Using SSH client type: native
	I0318 11:48:31.186067   13008 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa89f80] 0xa8cb60 <nil>  [] 0s} 172.30.142.66 22 <nil> <nil>}
	I0318 11:48:31.186067   13008 main.go:141] libmachine: About to run SSH command:
	hostname
	I0318 11:48:31.314744   13008 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0318 11:48:31.314744   13008 buildroot.go:166] provisioning hostname "ha-747000-m02"
	I0318 11:48:31.314744   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-747000-m02 ).state
	I0318 11:48:33.296740   13008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:48:33.307821   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:48:33.307821   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-747000-m02 ).networkadapters[0]).ipaddresses[0]
	I0318 11:48:35.665966   13008 main.go:141] libmachine: [stdout =====>] : 172.30.142.66
	
	I0318 11:48:35.676076   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:48:35.681418   13008 main.go:141] libmachine: Using SSH client type: native
	I0318 11:48:35.681906   13008 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa89f80] 0xa8cb60 <nil>  [] 0s} 172.30.142.66 22 <nil> <nil>}
	I0318 11:48:35.682036   13008 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-747000-m02 && echo "ha-747000-m02" | sudo tee /etc/hostname
	I0318 11:48:35.834894   13008 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-747000-m02
	
	I0318 11:48:35.834969   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-747000-m02 ).state
	I0318 11:48:37.742111   13008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:48:37.753300   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:48:37.753300   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-747000-m02 ).networkadapters[0]).ipaddresses[0]
	I0318 11:48:40.051262   13008 main.go:141] libmachine: [stdout =====>] : 172.30.142.66
	
	I0318 11:48:40.051262   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:48:40.057320   13008 main.go:141] libmachine: Using SSH client type: native
	I0318 11:48:40.057320   13008 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa89f80] 0xa8cb60 <nil>  [] 0s} 172.30.142.66 22 <nil> <nil>}
	I0318 11:48:40.057320   13008 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-747000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-747000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-747000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0318 11:48:40.210357   13008 main.go:141] libmachine: SSH cmd err, output: <nil>: 
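The SSH command above rewrites the VM's `127.0.1.1` entry so `/etc/hosts` resolves the new hostname. The same logic can be exercised locally against a throwaway copy of the file, so no `sudo` or VM is needed (the `/tmp` path and seed contents are illustrative; GNU grep/sed `\s` support is assumed):

```shell
# Seed a stand-in hosts file with a stale 127.0.1.1 entry (illustrative).
HOSTS=/tmp/hosts.test
printf '127.0.0.1 localhost\n127.0.1.1 minikube\n' > "$HOSTS"

NAME=ha-747000-m02
# Mirror of the remote snippet: if the hostname is absent, either rewrite an
# existing 127.0.1.1 line or append a fresh one.
if ! grep -q "\s$NAME\$" "$HOSTS"; then
        if grep -q '^127.0.1.1\s' "$HOSTS"; then
                sed -i "s/^127.0.1.1\s.*/127.0.1.1 $NAME/g" "$HOSTS"
        else
                echo "127.0.1.1 $NAME" >> "$HOSTS"
        fi
fi
cat "$HOSTS"
```

Running it turns the `127.0.1.1 minikube` line into `127.0.1.1 ha-747000-m02` while leaving the loopback entry untouched, which is exactly the state the provisioner needs before `configureAuth` starts.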
	I0318 11:48:40.210357   13008 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube3\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube3\minikube-integration\.minikube}
	I0318 11:48:40.210357   13008 buildroot.go:174] setting up certificates
	I0318 11:48:40.210357   13008 provision.go:84] configureAuth start
	I0318 11:48:40.210357   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-747000-m02 ).state
	I0318 11:48:42.143728   13008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:48:42.143728   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:48:42.153928   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-747000-m02 ).networkadapters[0]).ipaddresses[0]
	I0318 11:48:44.466152   13008 main.go:141] libmachine: [stdout =====>] : 172.30.142.66
	
	I0318 11:48:44.466217   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:48:44.466273   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-747000-m02 ).state
	I0318 11:48:46.400086   13008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:48:46.400086   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:48:46.400086   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-747000-m02 ).networkadapters[0]).ipaddresses[0]
	I0318 11:48:48.741017   13008 main.go:141] libmachine: [stdout =====>] : 172.30.142.66
	
	I0318 11:48:48.741017   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:48:48.741017   13008 provision.go:143] copyHostCerts
	I0318 11:48:48.741017   13008 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube3\minikube-integration\.minikube/key.pem
	I0318 11:48:48.741017   13008 exec_runner.go:144] found C:\Users\jenkins.minikube3\minikube-integration\.minikube/key.pem, removing ...
	I0318 11:48:48.741017   13008 exec_runner.go:203] rm: C:\Users\jenkins.minikube3\minikube-integration\.minikube\key.pem
	I0318 11:48:48.741857   13008 exec_runner.go:151] cp: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube3\minikube-integration\.minikube/key.pem (1679 bytes)
	I0318 11:48:48.742578   13008 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube3\minikube-integration\.minikube/ca.pem
	I0318 11:48:48.743194   13008 exec_runner.go:144] found C:\Users\jenkins.minikube3\minikube-integration\.minikube/ca.pem, removing ...
	I0318 11:48:48.743194   13008 exec_runner.go:203] rm: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.pem
	I0318 11:48:48.743194   13008 exec_runner.go:151] cp: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube3\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0318 11:48:48.744449   13008 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube3\minikube-integration\.minikube/cert.pem
	I0318 11:48:48.744449   13008 exec_runner.go:144] found C:\Users\jenkins.minikube3\minikube-integration\.minikube/cert.pem, removing ...
	I0318 11:48:48.744449   13008 exec_runner.go:203] rm: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cert.pem
	I0318 11:48:48.745111   13008 exec_runner.go:151] cp: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube3\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0318 11:48:48.745979   13008 provision.go:117] generating server cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-747000-m02 san=[127.0.0.1 172.30.142.66 ha-747000-m02 localhost minikube]
	I0318 11:48:48.933220   13008 provision.go:177] copyRemoteCerts
	I0318 11:48:48.949629   13008 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0318 11:48:48.950236   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-747000-m02 ).state
	I0318 11:48:50.876492   13008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:48:50.876492   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:48:50.876492   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-747000-m02 ).networkadapters[0]).ipaddresses[0]
	I0318 11:48:53.206803   13008 main.go:141] libmachine: [stdout =====>] : 172.30.142.66
	
	I0318 11:48:53.217963   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:48:53.217963   13008 sshutil.go:53] new ssh client: &{IP:172.30.142.66 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-747000-m02\id_rsa Username:docker}
	I0318 11:48:53.320939   13008 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.3711978s)
	I0318 11:48:53.320939   13008 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0318 11:48:53.321210   13008 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0318 11:48:53.361596   13008 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0318 11:48:53.361596   13008 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0318 11:48:53.402212   13008 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0318 11:48:53.402212   13008 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0318 11:48:53.443056   13008 provision.go:87] duration metric: took 13.2326002s to configureAuth
	I0318 11:48:53.443056   13008 buildroot.go:189] setting minikube options for container-runtime
	I0318 11:48:53.443602   13008 config.go:182] Loaded profile config "ha-747000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 11:48:53.443602   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-747000-m02 ).state
	I0318 11:48:55.383567   13008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:48:55.383567   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:48:55.383567   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-747000-m02 ).networkadapters[0]).ipaddresses[0]
	I0318 11:48:57.688551   13008 main.go:141] libmachine: [stdout =====>] : 172.30.142.66
	
	I0318 11:48:57.698478   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:48:57.704346   13008 main.go:141] libmachine: Using SSH client type: native
	I0318 11:48:57.704922   13008 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa89f80] 0xa8cb60 <nil>  [] 0s} 172.30.142.66 22 <nil> <nil>}
	I0318 11:48:57.704922   13008 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0318 11:48:57.835932   13008 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0318 11:48:57.835932   13008 buildroot.go:70] root file system type: tmpfs
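The `df --output=fstype / | tail -n 1` probe above is how the provisioner classifies the guest's root filesystem (the Buildroot guest reports `tmpfs`). Run locally, the same one-liner reports whatever the local root is:

```shell
# Detect the root filesystem type, as the provisioner does over SSH.
# On the minikube Buildroot guest this prints "tmpfs"; locally it will
# print the host's own root fs type (ext4, overlay, ...).
fstype="$(df --output=fstype / | tail -n 1)"
echo "root fstype: $fstype"
```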
	I0318 11:48:57.835932   13008 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0318 11:48:57.836468   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-747000-m02 ).state
	I0318 11:48:59.788249   13008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:48:59.799303   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:48:59.799303   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-747000-m02 ).networkadapters[0]).ipaddresses[0]
	I0318 11:49:02.095164   13008 main.go:141] libmachine: [stdout =====>] : 172.30.142.66
	
	I0318 11:49:02.106334   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:49:02.111961   13008 main.go:141] libmachine: Using SSH client type: native
	I0318 11:49:02.112585   13008 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa89f80] 0xa8cb60 <nil>  [] 0s} 172.30.142.66 22 <nil> <nil>}
	I0318 11:49:02.112585   13008 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.30.135.65"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0318 11:49:02.267694   13008 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.30.135.65
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0318 11:49:02.267799   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-747000-m02 ).state
	I0318 11:49:04.200510   13008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:49:04.200510   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:49:04.200510   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-747000-m02 ).networkadapters[0]).ipaddresses[0]
	I0318 11:49:06.507794   13008 main.go:141] libmachine: [stdout =====>] : 172.30.142.66
	
	I0318 11:49:06.518671   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:49:06.523731   13008 main.go:141] libmachine: Using SSH client type: native
	I0318 11:49:06.524338   13008 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa89f80] 0xa8cb60 <nil>  [] 0s} 172.30.142.66 22 <nil> <nil>}
	I0318 11:49:06.524338   13008 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0318 11:49:08.574979   13008 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0318 11:49:08.574979   13008 machine.go:97] duration metric: took 41.7108407s to provisionDockerMachine
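The `diff ... || { mv ...; systemctl ...; }` command above is an update-if-changed idiom: `diff` fails both when the current unit differs and when it does not exist yet (hence the "can't stat" message on a fresh guest), and only then is the new unit installed and the daemon restarted. A minimal replay on temp files, with no systemd involved:

```shell
# Update-if-changed idiom: install the rendered file only when it differs
# from (or is missing at) the target path. `diff` exits non-zero in both
# cases, which triggers the || branch; identical files skip it entirely.
cur=$(mktemp -u)   # deliberately nonexistent, like docker.service on a fresh guest
new=$(mktemp)
printf '[Unit]\nDescription=demo\n' > "$new"
diff -u "$cur" "$new" 2>/dev/null || { mv "$new" "$cur"; echo "unit installed"; }
cat "$cur"
```

Running it a second time against the now-identical file would make `diff` succeed and skip the install branch, which is what keeps the real restart from happening on every provision pass.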
	I0318 11:49:08.574979   13008 client.go:171] duration metric: took 1m47.1404004s to LocalClient.Create
	I0318 11:49:08.574979   13008 start.go:167] duration metric: took 1m47.1404004s to libmachine.API.Create "ha-747000"
	I0318 11:49:08.574979   13008 start.go:293] postStartSetup for "ha-747000-m02" (driver="hyperv")
	I0318 11:49:08.574979   13008 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0318 11:49:08.587005   13008 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0318 11:49:08.587005   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-747000-m02 ).state
	I0318 11:49:10.545043   13008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:49:10.556151   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:49:10.556225   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-747000-m02 ).networkadapters[0]).ipaddresses[0]
	I0318 11:49:12.868871   13008 main.go:141] libmachine: [stdout =====>] : 172.30.142.66
	
	I0318 11:49:12.878662   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:49:12.878662   13008 sshutil.go:53] new ssh client: &{IP:172.30.142.66 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-747000-m02\id_rsa Username:docker}
	I0318 11:49:12.985497   13008 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.3984601s)
	I0318 11:49:12.998104   13008 ssh_runner.go:195] Run: cat /etc/os-release
	I0318 11:49:13.003847   13008 info.go:137] Remote host: Buildroot 2023.02.9
	I0318 11:49:13.003918   13008 filesync.go:126] Scanning C:\Users\jenkins.minikube3\minikube-integration\.minikube\addons for local assets ...
	I0318 11:49:13.004344   13008 filesync.go:126] Scanning C:\Users\jenkins.minikube3\minikube-integration\.minikube\files for local assets ...
	I0318 11:49:13.005467   13008 filesync.go:149] local asset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\134242.pem -> 134242.pem in /etc/ssl/certs
	I0318 11:49:13.005467   13008 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\134242.pem -> /etc/ssl/certs/134242.pem
	I0318 11:49:13.015496   13008 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0318 11:49:13.034173   13008 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\134242.pem --> /etc/ssl/certs/134242.pem (1708 bytes)
	I0318 11:49:13.072875   13008 start.go:296] duration metric: took 4.4978625s for postStartSetup
	I0318 11:49:13.075108   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-747000-m02 ).state
	I0318 11:49:14.963268   13008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:49:14.963268   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:49:14.973813   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-747000-m02 ).networkadapters[0]).ipaddresses[0]
	I0318 11:49:17.304030   13008 main.go:141] libmachine: [stdout =====>] : 172.30.142.66
	
	I0318 11:49:17.304030   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:49:17.304030   13008 profile.go:142] Saving config to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-747000\config.json ...
	I0318 11:49:17.306870   13008 start.go:128] duration metric: took 1m55.8737728s to createHost
	I0318 11:49:17.306939   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-747000-m02 ).state
	I0318 11:49:19.237158   13008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:49:19.248098   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:49:19.248098   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-747000-m02 ).networkadapters[0]).ipaddresses[0]
	I0318 11:49:21.621731   13008 main.go:141] libmachine: [stdout =====>] : 172.30.142.66
	
	I0318 11:49:21.621731   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:49:21.627521   13008 main.go:141] libmachine: Using SSH client type: native
	I0318 11:49:21.628029   13008 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa89f80] 0xa8cb60 <nil>  [] 0s} 172.30.142.66 22 <nil> <nil>}
	I0318 11:49:21.628078   13008 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0318 11:49:21.758303   13008 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710762561.749231924
	
	I0318 11:49:21.758303   13008 fix.go:216] guest clock: 1710762561.749231924
	I0318 11:49:21.758303   13008 fix.go:229] Guest: 2024-03-18 11:49:21.749231924 +0000 UTC Remote: 2024-03-18 11:49:17.306939 +0000 UTC m=+310.428658701 (delta=4.442292924s)
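The fix.go lines above compare the guest's `date +%s.%N` reading against the controller's clock and, on drift, push a `sudo date -s @<epoch>` over SSH. The delta arithmetic, replayed with the whole-second values from this log:

```shell
# Clock-drift check as in fix.go, using epoch seconds from the log.
# guest: parsed from `date +%s.%N` on the VM; host: the controller's
# reading at the same moment (truncated to seconds here).
guest=1710762561
host=1710762557
drift=$(( guest - host ))
echo "drift=${drift}s"
# When the drift is significant, minikube runs `sudo date -s @<epoch>`
# on the guest, which is the SSH command seen a few lines below.
```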
	I0318 11:49:21.758303   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-747000-m02 ).state
	I0318 11:49:23.680216   13008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:49:23.680216   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:49:23.680216   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-747000-m02 ).networkadapters[0]).ipaddresses[0]
	I0318 11:49:26.054466   13008 main.go:141] libmachine: [stdout =====>] : 172.30.142.66
	
	I0318 11:49:26.054466   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:49:26.059768   13008 main.go:141] libmachine: Using SSH client type: native
	I0318 11:49:26.059768   13008 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa89f80] 0xa8cb60 <nil>  [] 0s} 172.30.142.66 22 <nil> <nil>}
	I0318 11:49:26.059768   13008 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1710762561
	I0318 11:49:26.199954   13008 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Mar 18 11:49:21 UTC 2024
	
	I0318 11:49:26.199954   13008 fix.go:236] clock set: Mon Mar 18 11:49:21 UTC 2024
	 (err=<nil>)
	I0318 11:49:26.199954   13008 start.go:83] releasing machines lock for "ha-747000-m02", held for 2m4.7669278s
	I0318 11:49:26.200494   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-747000-m02 ).state
	I0318 11:49:28.185140   13008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:49:28.195956   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:49:28.195956   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-747000-m02 ).networkadapters[0]).ipaddresses[0]
	I0318 11:49:30.534919   13008 main.go:141] libmachine: [stdout =====>] : 172.30.142.66
	
	I0318 11:49:30.534919   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:49:30.539671   13008 out.go:177] * Found network options:
	I0318 11:49:30.542325   13008 out.go:177]   - NO_PROXY=172.30.135.65
	W0318 11:49:30.545859   13008 proxy.go:119] fail to check proxy env: Error ip not in block
	I0318 11:49:30.548115   13008 out.go:177]   - NO_PROXY=172.30.135.65
	W0318 11:49:30.550771   13008 proxy.go:119] fail to check proxy env: Error ip not in block
	W0318 11:49:30.552346   13008 proxy.go:119] fail to check proxy env: Error ip not in block
	I0318 11:49:30.554165   13008 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0318 11:49:30.554165   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-747000-m02 ).state
	I0318 11:49:30.558893   13008 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0318 11:49:30.558893   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-747000-m02 ).state
	I0318 11:49:32.640676   13008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:49:32.640774   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:49:32.640877   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-747000-m02 ).networkadapters[0]).ipaddresses[0]
	I0318 11:49:32.655434   13008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:49:32.655434   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:49:32.666019   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-747000-m02 ).networkadapters[0]).ipaddresses[0]
	I0318 11:49:35.152622   13008 main.go:141] libmachine: [stdout =====>] : 172.30.142.66
	
	I0318 11:49:35.152688   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:49:35.152688   13008 sshutil.go:53] new ssh client: &{IP:172.30.142.66 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-747000-m02\id_rsa Username:docker}
	I0318 11:49:35.175099   13008 main.go:141] libmachine: [stdout =====>] : 172.30.142.66
	
	I0318 11:49:35.175970   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:49:35.176131   13008 sshutil.go:53] new ssh client: &{IP:172.30.142.66 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-747000-m02\id_rsa Username:docker}
	I0318 11:49:35.254529   13008 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (4.6956007s)
	W0318 11:49:35.254529   13008 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0318 11:49:35.264883   13008 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0318 11:49:35.340182   13008 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0318 11:49:35.340260   13008 start.go:494] detecting cgroup driver to use...
	I0318 11:49:35.340260   13008 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.7860594s)
	I0318 11:49:35.340317   13008 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0318 11:49:35.383616   13008 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0318 11:49:35.410552   13008 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0318 11:49:35.430523   13008 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0318 11:49:35.442039   13008 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0318 11:49:35.475005   13008 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0318 11:49:35.503043   13008 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0318 11:49:35.530578   13008 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0318 11:49:35.558167   13008 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0318 11:49:35.587243   13008 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
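The burst of `sed` runs above rewrites `/etc/containerd/config.toml` in place to force the cgroupfs driver. The key rewrite, applied to a throwaway copy instead of the guest's config, shows the indentation-preserving capture group at work:

```shell
# The SystemdCgroup toggle from the log, on a temp copy of the config.
# The ( *) capture keeps the TOML indentation intact.
cfg=$(mktemp)
printf '            SystemdCgroup = true\n' > "$cfg"
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$cfg"
cat "$cfg"
```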
	I0318 11:49:35.614117   13008 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0318 11:49:35.642838   13008 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0318 11:49:35.668763   13008 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 11:49:35.846712   13008 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0318 11:49:35.868283   13008 start.go:494] detecting cgroup driver to use...
	I0318 11:49:35.886281   13008 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0318 11:49:35.922058   13008 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0318 11:49:35.957657   13008 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0318 11:49:35.991279   13008 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0318 11:49:36.020899   13008 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0318 11:49:36.052135   13008 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0318 11:49:36.109553   13008 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0318 11:49:36.129370   13008 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0318 11:49:36.172907   13008 ssh_runner.go:195] Run: which cri-dockerd
	I0318 11:49:36.192183   13008 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0318 11:49:36.209421   13008 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0318 11:49:36.248821   13008 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0318 11:49:36.432899   13008 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0318 11:49:36.594198   13008 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0318 11:49:36.594315   13008 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0318 11:49:36.634355   13008 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 11:49:36.812595   13008 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0318 11:49:39.267567   13008 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.4549536s)
	I0318 11:49:39.278897   13008 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0318 11:49:39.312980   13008 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0318 11:49:39.344395   13008 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0318 11:49:39.525145   13008 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0318 11:49:39.704346   13008 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 11:49:39.880091   13008 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0318 11:49:39.919931   13008 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0318 11:49:39.952216   13008 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 11:49:40.132668   13008 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0318 11:49:40.222207   13008 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0318 11:49:40.234538   13008 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0318 11:49:40.242482   13008 start.go:562] Will wait 60s for crictl version
	I0318 11:49:40.255640   13008 ssh_runner.go:195] Run: which crictl
	I0318 11:49:40.272215   13008 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0318 11:49:40.334895   13008 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  25.0.4
	RuntimeApiVersion:  v1
	I0318 11:49:40.344173   13008 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0318 11:49:40.384018   13008 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0318 11:49:40.414791   13008 out.go:204] * Preparing Kubernetes v1.28.4 on Docker 25.0.4 ...
	I0318 11:49:40.418697   13008 out.go:177]   - env NO_PROXY=172.30.135.65
	I0318 11:49:40.423153   13008 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0318 11:49:40.423981   13008 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0318 11:49:40.423981   13008 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0318 11:49:40.423981   13008 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0318 11:49:40.423981   13008 ip.go:207] Found interface: {Index:9 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:93:df:b5 Flags:up|broadcast|multicast|running}
	I0318 11:49:40.428649   13008 ip.go:210] interface addr: fe80::572b:6b1d:9130:e88b/64
	I0318 11:49:40.430755   13008 ip.go:210] interface addr: 172.30.128.1/20
	I0318 11:49:40.441310   13008 ssh_runner.go:195] Run: grep 172.30.128.1	host.minikube.internal$ /etc/hosts
	I0318 11:49:40.447624   13008 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.30.128.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
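The `/etc/hosts` command above is a replace-line idiom: filter out any stale `host.minikube.internal` entry, append the current gateway IP, and copy the whole file back in one step. A simplified replay on a temp copy (the `172.30.0.9` stale entry is made up for illustration; the real command anchors the pattern on a literal tab):

```shell
# hosts-file rewrite idiom: drop the old host.minikube.internal line,
# append the fresh one, then copy the rebuilt file back over the original.
hosts=$(mktemp)
printf '127.0.0.1\tlocalhost\n172.30.0.9\thost.minikube.internal\n' > "$hosts"
{ grep -v 'host.minikube.internal$' "$hosts"; \
  printf '172.30.128.1\thost.minikube.internal\n'; } > "$hosts.new"
cp "$hosts.new" "$hosts"
cat "$hosts"
```

Writing to a temp file and `cp`-ing back (rather than redirecting into the file being read) is what makes the one-liner safe against truncating `/etc/hosts` mid-read.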
	I0318 11:49:40.467778   13008 mustload.go:65] Loading cluster: ha-747000
	I0318 11:49:40.467963   13008 config.go:182] Loaded profile config "ha-747000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 11:49:40.469296   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-747000 ).state
	I0318 11:49:42.394146   13008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:49:42.394146   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:49:42.404346   13008 host.go:66] Checking if "ha-747000" exists ...
	I0318 11:49:42.405092   13008 certs.go:68] Setting up C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-747000 for IP: 172.30.142.66
	I0318 11:49:42.405176   13008 certs.go:194] generating shared ca certs ...
	I0318 11:49:42.405212   13008 certs.go:226] acquiring lock for ca certs: {Name:mk09ff4ada22228900e1815c250154c7d8d76854 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 11:49:42.405897   13008 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.key
	I0318 11:49:42.405962   13008 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.key
	I0318 11:49:42.405962   13008 certs.go:256] generating profile certs ...
	I0318 11:49:42.406648   13008 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-747000\client.key
	I0318 11:49:42.406648   13008 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-747000\apiserver.key.844f64ff
	I0318 11:49:42.407216   13008 crypto.go:68] Generating cert C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-747000\apiserver.crt.844f64ff with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.30.135.65 172.30.142.66 172.30.143.254]
	I0318 11:49:42.591721   13008 crypto.go:156] Writing cert to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-747000\apiserver.crt.844f64ff ...
	I0318 11:49:42.591721   13008 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-747000\apiserver.crt.844f64ff: {Name:mka0c3023f55cd67808646a89c803406b9c3a603 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 11:49:42.593983   13008 crypto.go:164] Writing key to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-747000\apiserver.key.844f64ff ...
	I0318 11:49:42.593983   13008 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-747000\apiserver.key.844f64ff: {Name:mk5c488359d1da51964d69987f1d7686b43c00ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 11:49:42.595244   13008 certs.go:381] copying C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-747000\apiserver.crt.844f64ff -> C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-747000\apiserver.crt
	I0318 11:49:42.601951   13008 certs.go:385] copying C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-747000\apiserver.key.844f64ff -> C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-747000\apiserver.key
	I0318 11:49:42.607583   13008 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-747000\proxy-client.key
	I0318 11:49:42.607583   13008 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0318 11:49:42.607583   13008 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0318 11:49:42.608906   13008 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0318 11:49:42.609073   13008 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0318 11:49:42.609280   13008 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-747000\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0318 11:49:42.609407   13008 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-747000\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0318 11:49:42.609547   13008 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-747000\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0318 11:49:42.609739   13008 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-747000\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0318 11:49:42.609895   13008 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\13424.pem (1338 bytes)
	W0318 11:49:42.609895   13008 certs.go:480] ignoring C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\13424_empty.pem, impossibly tiny 0 bytes
	I0318 11:49:42.610492   13008 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0318 11:49:42.610694   13008 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0318 11:49:42.610890   13008 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0318 11:49:42.611135   13008 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0318 11:49:42.611344   13008 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\134242.pem (1708 bytes)
	I0318 11:49:42.611344   13008 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\134242.pem -> /usr/share/ca-certificates/134242.pem
	I0318 11:49:42.611344   13008 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0318 11:49:42.611344   13008 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\13424.pem -> /usr/share/ca-certificates/13424.pem
	I0318 11:49:42.611344   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-747000 ).state
	I0318 11:49:44.564835   13008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:49:44.564835   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:49:44.564990   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-747000 ).networkadapters[0]).ipaddresses[0]
	I0318 11:49:46.873791   13008 main.go:141] libmachine: [stdout =====>] : 172.30.135.65
	
	I0318 11:49:46.873860   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:49:46.873860   13008 sshutil.go:53] new ssh client: &{IP:172.30.135.65 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-747000\id_rsa Username:docker}
	I0318 11:49:46.967932   13008 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0318 11:49:46.977081   13008 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0318 11:49:47.005206   13008 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0318 11:49:47.012342   13008 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0318 11:49:47.039854   13008 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0318 11:49:47.047533   13008 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0318 11:49:47.077538   13008 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0318 11:49:47.087307   13008 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0318 11:49:47.118861   13008 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0318 11:49:47.124469   13008 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0318 11:49:47.151958   13008 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0318 11:49:47.158010   13008 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0318 11:49:47.175620   13008 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0318 11:49:47.217430   13008 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0318 11:49:47.250496   13008 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0318 11:49:47.293863   13008 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0318 11:49:47.332484   13008 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-747000\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0318 11:49:47.369699   13008 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-747000\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0318 11:49:47.414479   13008 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-747000\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0318 11:49:47.453031   13008 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-747000\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0318 11:49:47.483941   13008 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\134242.pem --> /usr/share/ca-certificates/134242.pem (1708 bytes)
	I0318 11:49:47.530332   13008 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0318 11:49:47.568018   13008 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\13424.pem --> /usr/share/ca-certificates/13424.pem (1338 bytes)
	I0318 11:49:47.608147   13008 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0318 11:49:47.634776   13008 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0318 11:49:47.661028   13008 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0318 11:49:47.686896   13008 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0318 11:49:47.714765   13008 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0318 11:49:47.745317   13008 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0318 11:49:47.771286   13008 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0318 11:49:47.809263   13008 ssh_runner.go:195] Run: openssl version
	I0318 11:49:47.828992   13008 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0318 11:49:47.856917   13008 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0318 11:49:47.862807   13008 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 18 11:07 /usr/share/ca-certificates/minikubeCA.pem
	I0318 11:49:47.873148   13008 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0318 11:49:47.892479   13008 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0318 11:49:47.922923   13008 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13424.pem && ln -fs /usr/share/ca-certificates/13424.pem /etc/ssl/certs/13424.pem"
	I0318 11:49:47.949932   13008 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13424.pem
	I0318 11:49:47.956579   13008 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 18 11:25 /usr/share/ca-certificates/13424.pem
	I0318 11:49:47.967745   13008 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13424.pem
	I0318 11:49:47.989110   13008 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13424.pem /etc/ssl/certs/51391683.0"
	I0318 11:49:48.018909   13008 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/134242.pem && ln -fs /usr/share/ca-certificates/134242.pem /etc/ssl/certs/134242.pem"
	I0318 11:49:48.046501   13008 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/134242.pem
	I0318 11:49:48.053603   13008 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 18 11:25 /usr/share/ca-certificates/134242.pem
	I0318 11:49:48.065518   13008 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/134242.pem
	I0318 11:49:48.082933   13008 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/134242.pem /etc/ssl/certs/3ec20f2e.0"
	I0318 11:49:48.114121   13008 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0318 11:49:48.121611   13008 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0318 11:49:48.121907   13008 kubeadm.go:928] updating node {m02 172.30.142.66 8443 v1.28.4 docker true true} ...
	I0318 11:49:48.122037   13008 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-747000-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.30.142.66
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:ha-747000 Namespace:default APIServerHAVIP:172.30.143.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0318 11:49:48.122148   13008 kube-vip.go:111] generating kube-vip config ...
	I0318 11:49:48.133390   13008 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0318 11:49:48.155414   13008 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0318 11:49:48.155538   13008 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.30.143.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0318 11:49:48.167324   13008 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0318 11:49:48.180547   13008 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.28.4: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.28.4': No such file or directory
	
	Initiating transfer...
	I0318 11:49:48.193361   13008 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.28.4
	I0318 11:49:48.212708   13008 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl.sha256 -> C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubectl
	I0318 11:49:48.212781   13008 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubeadm.sha256 -> C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubeadm
	I0318 11:49:48.212781   13008 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubelet.sha256 -> C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubelet
	I0318 11:49:49.324334   13008 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubectl -> /var/lib/minikube/binaries/v1.28.4/kubectl
	I0318 11:49:49.330426   13008 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubectl
	I0318 11:49:49.340889   13008 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubectl': No such file or directory
	I0318 11:49:49.344319   13008 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubectl --> /var/lib/minikube/binaries/v1.28.4/kubectl (49885184 bytes)
	I0318 11:49:49.393348   13008 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubeadm -> /var/lib/minikube/binaries/v1.28.4/kubeadm
	I0318 11:49:49.403652   13008 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubeadm
	I0318 11:49:49.452787   13008 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubeadm': No such file or directory
	I0318 11:49:49.452903   13008 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubeadm --> /var/lib/minikube/binaries/v1.28.4/kubeadm (49102848 bytes)
	I0318 11:49:50.014334   13008 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 11:49:50.086806   13008 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubelet -> /var/lib/minikube/binaries/v1.28.4/kubelet
	I0318 11:49:50.107599   13008 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubelet
	I0318 11:49:50.118607   13008 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubelet': No such file or directory
	I0318 11:49:50.118607   13008 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubelet --> /var/lib/minikube/binaries/v1.28.4/kubelet (110850048 bytes)
	I0318 11:49:50.895984   13008 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0318 11:49:50.911163   13008 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0318 11:49:50.944031   13008 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0318 11:49:50.971126   13008 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0318 11:49:51.013861   13008 ssh_runner.go:195] Run: grep 172.30.143.254	control-plane.minikube.internal$ /etc/hosts
	I0318 11:49:51.021766   13008 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.30.143.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 11:49:51.051001   13008 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 11:49:51.226569   13008 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 11:49:51.254937   13008 host.go:66] Checking if "ha-747000" exists ...
	I0318 11:49:51.255822   13008 start.go:316] joinCluster: &{Name:ha-747000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 Clust
erName:ha-747000 Namespace:default APIServerHAVIP:172.30.143.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.30.135.65 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.30.142.66 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExp
iration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 11:49:51.256096   13008 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0318 11:49:51.256193   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-747000 ).state
	I0318 11:49:53.202549   13008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:49:53.202549   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:49:53.202721   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-747000 ).networkadapters[0]).ipaddresses[0]
	I0318 11:49:55.537453   13008 main.go:141] libmachine: [stdout =====>] : 172.30.135.65
	
	I0318 11:49:55.537453   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:49:55.549046   13008 sshutil.go:53] new ssh client: &{IP:172.30.135.65 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-747000\id_rsa Username:docker}
	I0318 11:49:55.723791   13008 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0": (4.4674449s)
	I0318 11:49:55.723791   13008 start.go:342] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:172.30.142.66 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 11:49:55.723791   13008 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token rv7b18.lo2thrhz9e2j77m6 --discovery-token-ca-cert-hash sha256:0e85bffcabbe1ecc87b47edfa4897ded335ea6d7a4ea5ae94c3523fb207c8760 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-747000-m02 --control-plane --apiserver-advertise-address=172.30.142.66 --apiserver-bind-port=8443"
	I0318 11:50:52.332075   13008 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token rv7b18.lo2thrhz9e2j77m6 --discovery-token-ca-cert-hash sha256:0e85bffcabbe1ecc87b47edfa4897ded335ea6d7a4ea5ae94c3523fb207c8760 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-747000-m02 --control-plane --apiserver-advertise-address=172.30.142.66 --apiserver-bind-port=8443": (56.6078596s)
	I0318 11:50:52.332075   13008 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0318 11:50:52.913432   13008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-747000-m02 minikube.k8s.io/updated_at=2024_03_18T11_50_52_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=16bdcbec856cf730004e5bed78d1b7625f13388a minikube.k8s.io/name=ha-747000 minikube.k8s.io/primary=false
	I0318 11:50:53.099383   13008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-747000-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0318 11:50:53.254858   13008 start.go:318] duration metric: took 1m1.9985711s to joinCluster
	I0318 11:50:53.254858   13008 start.go:234] Will wait 6m0s for node &{Name:m02 IP:172.30.142.66 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 11:50:53.259065   13008 out.go:177] * Verifying Kubernetes components...
	I0318 11:50:53.256355   13008 config.go:182] Loaded profile config "ha-747000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 11:50:53.273143   13008 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 11:50:53.632930   13008 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 11:50:53.681348   13008 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	I0318 11:50:53.681348   13008 kapi.go:59] client config for ha-747000: &rest.Config{Host:"https://172.30.143.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\ha-747000\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\ha-747000\\client.key", CAFile:"C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), Ne
xtProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1e8b2e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0318 11:50:53.681348   13008 kubeadm.go:477] Overriding stale ClientConfig host https://172.30.143.254:8443 with https://172.30.135.65:8443
	I0318 11:50:53.683006   13008 node_ready.go:35] waiting up to 6m0s for node "ha-747000-m02" to be "Ready" ...
	I0318 11:50:53.683006   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/nodes/ha-747000-m02
	I0318 11:50:53.683006   13008 round_trippers.go:469] Request Headers:
	I0318 11:50:53.683006   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:50:53.683006   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:50:53.694802   13008 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0318 11:50:54.198151   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/nodes/ha-747000-m02
	I0318 11:50:54.198151   13008 round_trippers.go:469] Request Headers:
	I0318 11:50:54.198151   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:50:54.198151   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:50:54.205120   13008 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0318 11:50:54.686086   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/nodes/ha-747000-m02
	I0318 11:50:54.686086   13008 round_trippers.go:469] Request Headers:
	I0318 11:50:54.686086   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:50:54.686086   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:50:54.691369   13008 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 11:50:55.196398   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/nodes/ha-747000-m02
	I0318 11:50:55.196485   13008 round_trippers.go:469] Request Headers:
	I0318 11:50:55.196485   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:50:55.196558   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:50:55.197317   13008 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0318 11:50:55.713007   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/nodes/ha-747000-m02
	I0318 11:50:55.713007   13008 round_trippers.go:469] Request Headers:
	I0318 11:50:55.713007   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:50:55.713007   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:50:55.714601   13008 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0318 11:50:55.729171   13008 node_ready.go:53] node "ha-747000-m02" has status "Ready":"False"
	I0318 11:50:56.192332   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/nodes/ha-747000-m02
	I0318 11:50:56.192332   13008 round_trippers.go:469] Request Headers:
	I0318 11:50:56.192332   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:50:56.192332   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:50:56.192899   13008 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0318 11:50:56.700495   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/nodes/ha-747000-m02
	I0318 11:50:56.700549   13008 round_trippers.go:469] Request Headers:
	I0318 11:50:56.700549   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:50:56.700549   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:50:56.703766   13008 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 11:50:57.185702   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/nodes/ha-747000-m02
	I0318 11:50:57.185702   13008 round_trippers.go:469] Request Headers:
	I0318 11:50:57.185702   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:50:57.185702   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:50:57.186285   13008 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0318 11:50:57.701100   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/nodes/ha-747000-m02
	I0318 11:50:57.701100   13008 round_trippers.go:469] Request Headers:
	I0318 11:50:57.701100   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:50:57.701100   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:50:57.701668   13008 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0318 11:50:58.195940   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/nodes/ha-747000-m02
	I0318 11:50:58.195940   13008 round_trippers.go:469] Request Headers:
	I0318 11:50:58.195940   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:50:58.195940   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:50:58.198123   13008 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 11:50:58.201194   13008 node_ready.go:53] node "ha-747000-m02" has status "Ready":"False"
	I0318 11:50:58.695938   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/nodes/ha-747000-m02
	I0318 11:50:58.695938   13008 round_trippers.go:469] Request Headers:
	I0318 11:50:58.695938   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:50:58.695938   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:50:58.697529   13008 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0318 11:50:59.209025   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/nodes/ha-747000-m02
	I0318 11:50:59.209025   13008 round_trippers.go:469] Request Headers:
	I0318 11:50:59.209025   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:50:59.209025   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:50:59.211263   13008 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 11:50:59.685270   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/nodes/ha-747000-m02
	I0318 11:50:59.685270   13008 round_trippers.go:469] Request Headers:
	I0318 11:50:59.685270   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:50:59.685270   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:50:59.711426   13008 round_trippers.go:574] Response Status: 200 OK in 26 milliseconds
	I0318 11:51:00.194976   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/nodes/ha-747000-m02
	I0318 11:51:00.195089   13008 round_trippers.go:469] Request Headers:
	I0318 11:51:00.195089   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:51:00.195089   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:51:00.200028   13008 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 11:51:00.699861   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/nodes/ha-747000-m02
	I0318 11:51:00.699861   13008 round_trippers.go:469] Request Headers:
	I0318 11:51:00.699942   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:51:00.699942   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:51:00.708296   13008 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0318 11:51:00.708956   13008 node_ready.go:53] node "ha-747000-m02" has status "Ready":"False"
	I0318 11:51:01.191823   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/nodes/ha-747000-m02
	I0318 11:51:01.191823   13008 round_trippers.go:469] Request Headers:
	I0318 11:51:01.191823   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:51:01.191823   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:51:01.196736   13008 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 11:51:01.691082   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/nodes/ha-747000-m02
	I0318 11:51:01.691138   13008 round_trippers.go:469] Request Headers:
	I0318 11:51:01.691138   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:51:01.691138   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:51:01.694486   13008 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 11:51:02.195380   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/nodes/ha-747000-m02
	I0318 11:51:02.195380   13008 round_trippers.go:469] Request Headers:
	I0318 11:51:02.195380   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:51:02.195380   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:51:02.196588   13008 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0318 11:51:02.690325   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/nodes/ha-747000-m02
	I0318 11:51:02.690494   13008 round_trippers.go:469] Request Headers:
	I0318 11:51:02.690494   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:51:02.690494   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:51:02.697627   13008 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0318 11:51:03.195227   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/nodes/ha-747000-m02
	I0318 11:51:03.195227   13008 round_trippers.go:469] Request Headers:
	I0318 11:51:03.195227   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:51:03.195227   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:51:03.202502   13008 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0318 11:51:03.202856   13008 node_ready.go:53] node "ha-747000-m02" has status "Ready":"False"
	I0318 11:51:03.692931   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/nodes/ha-747000-m02
	I0318 11:51:03.692931   13008 round_trippers.go:469] Request Headers:
	I0318 11:51:03.693021   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:51:03.693021   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:51:03.700007   13008 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0318 11:51:04.185529   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/nodes/ha-747000-m02
	I0318 11:51:04.185529   13008 round_trippers.go:469] Request Headers:
	I0318 11:51:04.185529   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:51:04.185529   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:51:04.186229   13008 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0318 11:51:04.685057   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/nodes/ha-747000-m02
	I0318 11:51:04.685089   13008 round_trippers.go:469] Request Headers:
	I0318 11:51:04.685089   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:51:04.685089   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:51:04.687520   13008 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 11:51:05.195824   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/nodes/ha-747000-m02
	I0318 11:51:05.196232   13008 round_trippers.go:469] Request Headers:
	I0318 11:51:05.196232   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:51:05.196232   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:51:05.196774   13008 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0318 11:51:05.696209   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/nodes/ha-747000-m02
	I0318 11:51:05.696209   13008 round_trippers.go:469] Request Headers:
	I0318 11:51:05.696209   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:51:05.696209   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:51:05.696745   13008 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0318 11:51:05.702953   13008 node_ready.go:49] node "ha-747000-m02" has status "Ready":"True"
	I0318 11:51:05.703060   13008 node_ready.go:38] duration metric: took 12.0199638s for node "ha-747000-m02" to be "Ready" ...
	I0318 11:51:05.703060   13008 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 11:51:05.703250   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/namespaces/kube-system/pods
	I0318 11:51:05.703250   13008 round_trippers.go:469] Request Headers:
	I0318 11:51:05.703250   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:51:05.703250   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:51:05.714566   13008 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0318 11:51:05.725230   13008 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-dhl7r" in "kube-system" namespace to be "Ready" ...
	I0318 11:51:05.725230   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-dhl7r
	I0318 11:51:05.725230   13008 round_trippers.go:469] Request Headers:
	I0318 11:51:05.725230   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:51:05.725230   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:51:05.729700   13008 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 11:51:05.730602   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/nodes/ha-747000
	I0318 11:51:05.730602   13008 round_trippers.go:469] Request Headers:
	I0318 11:51:05.730602   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:51:05.730602   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:51:05.735962   13008 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 11:51:05.736633   13008 pod_ready.go:92] pod "coredns-5dd5756b68-dhl7r" in "kube-system" namespace has status "Ready":"True"
	I0318 11:51:05.736633   13008 pod_ready.go:81] duration metric: took 11.4025ms for pod "coredns-5dd5756b68-dhl7r" in "kube-system" namespace to be "Ready" ...
	I0318 11:51:05.736633   13008 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-gm84s" in "kube-system" namespace to be "Ready" ...
	I0318 11:51:05.736633   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-gm84s
	I0318 11:51:05.736633   13008 round_trippers.go:469] Request Headers:
	I0318 11:51:05.736633   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:51:05.736633   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:51:05.741664   13008 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 11:51:05.742312   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/nodes/ha-747000
	I0318 11:51:05.742312   13008 round_trippers.go:469] Request Headers:
	I0318 11:51:05.742312   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:51:05.742312   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:51:05.744529   13008 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 11:51:05.746990   13008 pod_ready.go:92] pod "coredns-5dd5756b68-gm84s" in "kube-system" namespace has status "Ready":"True"
	I0318 11:51:05.746990   13008 pod_ready.go:81] duration metric: took 10.3577ms for pod "coredns-5dd5756b68-gm84s" in "kube-system" namespace to be "Ready" ...
	I0318 11:51:05.746990   13008 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-747000" in "kube-system" namespace to be "Ready" ...
	I0318 11:51:05.746990   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/namespaces/kube-system/pods/etcd-ha-747000
	I0318 11:51:05.746990   13008 round_trippers.go:469] Request Headers:
	I0318 11:51:05.746990   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:51:05.746990   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:51:05.748262   13008 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0318 11:51:05.752773   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/nodes/ha-747000
	I0318 11:51:05.752773   13008 round_trippers.go:469] Request Headers:
	I0318 11:51:05.752773   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:51:05.752773   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:51:05.753049   13008 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0318 11:51:05.758101   13008 pod_ready.go:92] pod "etcd-ha-747000" in "kube-system" namespace has status "Ready":"True"
	I0318 11:51:05.758101   13008 pod_ready.go:81] duration metric: took 11.1105ms for pod "etcd-ha-747000" in "kube-system" namespace to be "Ready" ...
	I0318 11:51:05.758101   13008 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-747000-m02" in "kube-system" namespace to be "Ready" ...
	I0318 11:51:05.758246   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/namespaces/kube-system/pods/etcd-ha-747000-m02
	I0318 11:51:05.758246   13008 round_trippers.go:469] Request Headers:
	I0318 11:51:05.758300   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:51:05.758300   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:51:05.762686   13008 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 11:51:05.763353   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/nodes/ha-747000-m02
	I0318 11:51:05.763379   13008 round_trippers.go:469] Request Headers:
	I0318 11:51:05.763379   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:51:05.763379   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:51:05.763943   13008 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0318 11:51:05.766985   13008 pod_ready.go:92] pod "etcd-ha-747000-m02" in "kube-system" namespace has status "Ready":"True"
	I0318 11:51:05.766985   13008 pod_ready.go:81] duration metric: took 8.8842ms for pod "etcd-ha-747000-m02" in "kube-system" namespace to be "Ready" ...
	I0318 11:51:05.766985   13008 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-747000" in "kube-system" namespace to be "Ready" ...
	I0318 11:51:05.903616   13008 request.go:629] Waited for 135.739ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.135.65:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-747000
	I0318 11:51:05.903728   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-747000
	I0318 11:51:05.903728   13008 round_trippers.go:469] Request Headers:
	I0318 11:51:05.903728   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:51:05.903728   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:51:05.910468   13008 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0318 11:51:06.102371   13008 request.go:629] Waited for 190.9299ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.135.65:8443/api/v1/nodes/ha-747000
	I0318 11:51:06.102371   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/nodes/ha-747000
	I0318 11:51:06.102371   13008 round_trippers.go:469] Request Headers:
	I0318 11:51:06.102371   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:51:06.102371   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:51:06.107240   13008 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 11:51:06.107569   13008 pod_ready.go:92] pod "kube-apiserver-ha-747000" in "kube-system" namespace has status "Ready":"True"
	I0318 11:51:06.108092   13008 pod_ready.go:81] duration metric: took 341.1041ms for pod "kube-apiserver-ha-747000" in "kube-system" namespace to be "Ready" ...
	I0318 11:51:06.108092   13008 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-747000-m02" in "kube-system" namespace to be "Ready" ...
	I0318 11:51:06.310953   13008 request.go:629] Waited for 202.3999ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.135.65:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-747000-m02
	I0318 11:51:06.311182   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-747000-m02
	I0318 11:51:06.311255   13008 round_trippers.go:469] Request Headers:
	I0318 11:51:06.311255   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:51:06.311255   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:51:06.312021   13008 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0318 11:51:06.509881   13008 request.go:629] Waited for 193.2234ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.135.65:8443/api/v1/nodes/ha-747000-m02
	I0318 11:51:06.510119   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/nodes/ha-747000-m02
	I0318 11:51:06.510148   13008 round_trippers.go:469] Request Headers:
	I0318 11:51:06.510148   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:51:06.510148   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:51:06.512520   13008 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 11:51:06.516142   13008 pod_ready.go:92] pod "kube-apiserver-ha-747000-m02" in "kube-system" namespace has status "Ready":"True"
	I0318 11:51:06.516240   13008 pod_ready.go:81] duration metric: took 408.145ms for pod "kube-apiserver-ha-747000-m02" in "kube-system" namespace to be "Ready" ...
	I0318 11:51:06.516240   13008 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-747000" in "kube-system" namespace to be "Ready" ...
	I0318 11:51:06.700909   13008 request.go:629] Waited for 184.332ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.135.65:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-747000
	I0318 11:51:06.701018   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-747000
	I0318 11:51:06.701018   13008 round_trippers.go:469] Request Headers:
	I0318 11:51:06.701018   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:51:06.701018   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:51:06.701970   13008 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0318 11:51:06.907496   13008 request.go:629] Waited for 200.1456ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.135.65:8443/api/v1/nodes/ha-747000
	I0318 11:51:06.907496   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/nodes/ha-747000
	I0318 11:51:06.907496   13008 round_trippers.go:469] Request Headers:
	I0318 11:51:06.907496   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:51:06.907496   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:51:06.908276   13008 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0318 11:51:06.913728   13008 pod_ready.go:92] pod "kube-controller-manager-ha-747000" in "kube-system" namespace has status "Ready":"True"
	I0318 11:51:06.913728   13008 pod_ready.go:81] duration metric: took 397.4852ms for pod "kube-controller-manager-ha-747000" in "kube-system" namespace to be "Ready" ...
	I0318 11:51:06.913728   13008 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-747000-m02" in "kube-system" namespace to be "Ready" ...
	I0318 11:51:07.106352   13008 request.go:629] Waited for 191.8377ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.135.65:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-747000-m02
	I0318 11:51:07.106504   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-747000-m02
	I0318 11:51:07.106504   13008 round_trippers.go:469] Request Headers:
	I0318 11:51:07.106504   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:51:07.106504   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:51:07.107219   13008 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0318 11:51:07.302972   13008 request.go:629] Waited for 190.3631ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.135.65:8443/api/v1/nodes/ha-747000-m02
	I0318 11:51:07.303046   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/nodes/ha-747000-m02
	I0318 11:51:07.303046   13008 round_trippers.go:469] Request Headers:
	I0318 11:51:07.303046   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:51:07.303111   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:51:07.303835   13008 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0318 11:51:07.308905   13008 pod_ready.go:92] pod "kube-controller-manager-ha-747000-m02" in "kube-system" namespace has status "Ready":"True"
	I0318 11:51:07.308970   13008 pod_ready.go:81] duration metric: took 395.2384ms for pod "kube-controller-manager-ha-747000-m02" in "kube-system" namespace to be "Ready" ...
	I0318 11:51:07.308970   13008 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-lp986" in "kube-system" namespace to be "Ready" ...
	I0318 11:51:07.503281   13008 request.go:629] Waited for 194.088ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.135.65:8443/api/v1/namespaces/kube-system/pods/kube-proxy-lp986
	I0318 11:51:07.503374   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/namespaces/kube-system/pods/kube-proxy-lp986
	I0318 11:51:07.503483   13008 round_trippers.go:469] Request Headers:
	I0318 11:51:07.503541   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:51:07.503541   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:51:07.503734   13008 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0318 11:51:07.698195   13008 request.go:629] Waited for 188.1492ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.135.65:8443/api/v1/nodes/ha-747000
	I0318 11:51:07.698437   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/nodes/ha-747000
	I0318 11:51:07.698539   13008 round_trippers.go:469] Request Headers:
	I0318 11:51:07.698539   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:51:07.698565   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:51:07.699192   13008 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0318 11:51:07.705153   13008 pod_ready.go:92] pod "kube-proxy-lp986" in "kube-system" namespace has status "Ready":"True"
	I0318 11:51:07.705237   13008 pod_ready.go:81] duration metric: took 396.1805ms for pod "kube-proxy-lp986" in "kube-system" namespace to be "Ready" ...
	I0318 11:51:07.705237   13008 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-zzg5q" in "kube-system" namespace to be "Ready" ...
	I0318 11:51:07.906523   13008 request.go:629] Waited for 200.9409ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.135.65:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zzg5q
	I0318 11:51:07.906712   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zzg5q
	I0318 11:51:07.906712   13008 round_trippers.go:469] Request Headers:
	I0318 11:51:07.906712   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:51:07.906712   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:51:07.906980   13008 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0318 11:51:08.100532   13008 request.go:629] Waited for 187.745ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.135.65:8443/api/v1/nodes/ha-747000-m02
	I0318 11:51:08.100884   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/nodes/ha-747000-m02
	I0318 11:51:08.100938   13008 round_trippers.go:469] Request Headers:
	I0318 11:51:08.100978   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:51:08.100978   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:51:08.101667   13008 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0318 11:51:08.106641   13008 pod_ready.go:92] pod "kube-proxy-zzg5q" in "kube-system" namespace has status "Ready":"True"
	I0318 11:51:08.107270   13008 pod_ready.go:81] duration metric: took 402.0303ms for pod "kube-proxy-zzg5q" in "kube-system" namespace to be "Ready" ...
	I0318 11:51:08.107270   13008 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-747000" in "kube-system" namespace to be "Ready" ...
	I0318 11:51:08.300333   13008 request.go:629] Waited for 192.8372ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.135.65:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-747000
	I0318 11:51:08.300575   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-747000
	I0318 11:51:08.300575   13008 round_trippers.go:469] Request Headers:
	I0318 11:51:08.300701   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:51:08.300701   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:51:08.300963   13008 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0318 11:51:08.508266   13008 request.go:629] Waited for 201.778ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.135.65:8443/api/v1/nodes/ha-747000
	I0318 11:51:08.508357   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/nodes/ha-747000
	I0318 11:51:08.508357   13008 round_trippers.go:469] Request Headers:
	I0318 11:51:08.508357   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:51:08.508357   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:51:08.508579   13008 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0318 11:51:08.513170   13008 pod_ready.go:92] pod "kube-scheduler-ha-747000" in "kube-system" namespace has status "Ready":"True"
	I0318 11:51:08.513170   13008 pod_ready.go:81] duration metric: took 405.8968ms for pod "kube-scheduler-ha-747000" in "kube-system" namespace to be "Ready" ...
	I0318 11:51:08.513170   13008 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-747000-m02" in "kube-system" namespace to be "Ready" ...
	I0318 11:51:08.704091   13008 request.go:629] Waited for 190.7091ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.135.65:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-747000-m02
	I0318 11:51:08.704363   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-747000-m02
	I0318 11:51:08.704460   13008 round_trippers.go:469] Request Headers:
	I0318 11:51:08.704460   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:51:08.704524   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:51:08.709791   13008 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0318 11:51:08.910823   13008 request.go:629] Waited for 200.4155ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.135.65:8443/api/v1/nodes/ha-747000-m02
	I0318 11:51:08.910950   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/nodes/ha-747000-m02
	I0318 11:51:08.911151   13008 round_trippers.go:469] Request Headers:
	I0318 11:51:08.911151   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:51:08.911203   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:51:08.911527   13008 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0318 11:51:08.917011   13008 pod_ready.go:92] pod "kube-scheduler-ha-747000-m02" in "kube-system" namespace has status "Ready":"True"
	I0318 11:51:08.917011   13008 pod_ready.go:81] duration metric: took 403.8382ms for pod "kube-scheduler-ha-747000-m02" in "kube-system" namespace to be "Ready" ...
	I0318 11:51:08.917011   13008 pod_ready.go:38] duration metric: took 3.2139273s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 11:51:08.917011   13008 api_server.go:52] waiting for apiserver process to appear ...
	I0318 11:51:08.932315   13008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 11:51:08.955570   13008 api_server.go:72] duration metric: took 15.700594s to wait for apiserver process to appear ...
	I0318 11:51:08.955570   13008 api_server.go:88] waiting for apiserver healthz status ...
	I0318 11:51:08.955570   13008 api_server.go:253] Checking apiserver healthz at https://172.30.135.65:8443/healthz ...
	I0318 11:51:08.963864   13008 api_server.go:279] https://172.30.135.65:8443/healthz returned 200:
	ok
	I0318 11:51:08.963918   13008 round_trippers.go:463] GET https://172.30.135.65:8443/version
	I0318 11:51:08.963918   13008 round_trippers.go:469] Request Headers:
	I0318 11:51:08.963918   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:51:08.963918   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:51:08.964606   13008 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0318 11:51:08.965945   13008 api_server.go:141] control plane version: v1.28.4
	I0318 11:51:08.966098   13008 api_server.go:131] duration metric: took 10.5281ms to wait for apiserver health ...
	I0318 11:51:08.966098   13008 system_pods.go:43] waiting for kube-system pods to appear ...
	I0318 11:51:09.108095   13008 request.go:629] Waited for 141.7019ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.135.65:8443/api/v1/namespaces/kube-system/pods
	I0318 11:51:09.108281   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/namespaces/kube-system/pods
	I0318 11:51:09.108281   13008 round_trippers.go:469] Request Headers:
	I0318 11:51:09.108281   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:51:09.108381   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:51:09.118382   13008 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0318 11:51:09.125194   13008 system_pods.go:59] 17 kube-system pods found
	I0318 11:51:09.125279   13008 system_pods.go:61] "coredns-5dd5756b68-dhl7r" [26fc5ceb-a1e3-46f5-b148-40f5312b8628] Running
	I0318 11:51:09.125279   13008 system_pods.go:61] "coredns-5dd5756b68-gm84s" [227e2616-d98e-4ca1-aa66-7e2ed6096d33] Running
	I0318 11:51:09.125279   13008 system_pods.go:61] "etcd-ha-747000" [ac7df0ab-4112-40ce-9646-08b806025178] Running
	I0318 11:51:09.125279   13008 system_pods.go:61] "etcd-ha-747000-m02" [0fb29c85-0d84-4aa9-8f40-041b128f3b62] Running
	I0318 11:51:09.125279   13008 system_pods.go:61] "kindnet-czdhw" [b8b10f3a-4cc3-43d0-a192-438de10116b1] Running
	I0318 11:51:09.125279   13008 system_pods.go:61] "kindnet-zt7pd" [76ca473c-8611-410d-9f13-52d1bfcec7f6] Running
	I0318 11:51:09.125279   13008 system_pods.go:61] "kube-apiserver-ha-747000" [acfdbb0b-15ce-407b-a8c9-bab5477bb1ae] Running
	I0318 11:51:09.125279   13008 system_pods.go:61] "kube-apiserver-ha-747000-m02" [f3b62c80-17e7-435a-9ffd-a2b727a702cd] Running
	I0318 11:51:09.125380   13008 system_pods.go:61] "kube-controller-manager-ha-747000" [7a40710c-8acc-4457-93ba-a31a4073be21] Running
	I0318 11:51:09.125380   13008 system_pods.go:61] "kube-controller-manager-ha-747000-m02" [e191ed3e-7566-4f60-baa0-957f4d7522cc] Running
	I0318 11:51:09.125380   13008 system_pods.go:61] "kube-proxy-lp986" [8567319d-191a-47b4-a5b6-f615907650d3] Running
	I0318 11:51:09.125380   13008 system_pods.go:61] "kube-proxy-zzg5q" [c9403c4f-57d4-4a1c-b629-b8c3f53db9c9] Running
	I0318 11:51:09.125380   13008 system_pods.go:61] "kube-scheduler-ha-747000" [fbfffede-3da2-45f0-abbb-28a58072068b] Running
	I0318 11:51:09.125380   13008 system_pods.go:61] "kube-scheduler-ha-747000-m02" [cdc9ff95-abdf-4913-9e30-dbc8f2785f60] Running
	I0318 11:51:09.125449   13008 system_pods.go:61] "kube-vip-ha-747000" [3e440bfd-5dc2-4981-a26d-c0ae17c250d5] Running
	I0318 11:51:09.125449   13008 system_pods.go:61] "kube-vip-ha-747000-m02" [992c820a-4b58-4baa-b6e4-f0c19a65ad85] Running
	I0318 11:51:09.125449   13008 system_pods.go:61] "storage-provisioner" [de0ac48e-e3cb-430d-8dc6-6e52d350a38e] Running
	I0318 11:51:09.125449   13008 system_pods.go:74] duration metric: took 159.3495ms to wait for pod list to return data ...
	I0318 11:51:09.125449   13008 default_sa.go:34] waiting for default service account to be created ...
	I0318 11:51:09.318977   13008 request.go:629] Waited for 193.2357ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.135.65:8443/api/v1/namespaces/default/serviceaccounts
	I0318 11:51:09.319056   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/namespaces/default/serviceaccounts
	I0318 11:51:09.319056   13008 round_trippers.go:469] Request Headers:
	I0318 11:51:09.319056   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:51:09.319125   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:51:09.320086   13008 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0318 11:51:09.324309   13008 default_sa.go:45] found service account: "default"
	I0318 11:51:09.324309   13008 default_sa.go:55] duration metric: took 198.8593ms for default service account to be created ...
	I0318 11:51:09.324309   13008 system_pods.go:116] waiting for k8s-apps to be running ...
	I0318 11:51:09.496705   13008 request.go:629] Waited for 172.3163ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.135.65:8443/api/v1/namespaces/kube-system/pods
	I0318 11:51:09.497058   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/namespaces/kube-system/pods
	I0318 11:51:09.497157   13008 round_trippers.go:469] Request Headers:
	I0318 11:51:09.497157   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:51:09.497157   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:51:09.497921   13008 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0318 11:51:09.513250   13008 system_pods.go:86] 17 kube-system pods found
	I0318 11:51:09.513250   13008 system_pods.go:89] "coredns-5dd5756b68-dhl7r" [26fc5ceb-a1e3-46f5-b148-40f5312b8628] Running
	I0318 11:51:09.513250   13008 system_pods.go:89] "coredns-5dd5756b68-gm84s" [227e2616-d98e-4ca1-aa66-7e2ed6096d33] Running
	I0318 11:51:09.513250   13008 system_pods.go:89] "etcd-ha-747000" [ac7df0ab-4112-40ce-9646-08b806025178] Running
	I0318 11:51:09.513250   13008 system_pods.go:89] "etcd-ha-747000-m02" [0fb29c85-0d84-4aa9-8f40-041b128f3b62] Running
	I0318 11:51:09.513250   13008 system_pods.go:89] "kindnet-czdhw" [b8b10f3a-4cc3-43d0-a192-438de10116b1] Running
	I0318 11:51:09.513250   13008 system_pods.go:89] "kindnet-zt7pd" [76ca473c-8611-410d-9f13-52d1bfcec7f6] Running
	I0318 11:51:09.513250   13008 system_pods.go:89] "kube-apiserver-ha-747000" [acfdbb0b-15ce-407b-a8c9-bab5477bb1ae] Running
	I0318 11:51:09.513250   13008 system_pods.go:89] "kube-apiserver-ha-747000-m02" [f3b62c80-17e7-435a-9ffd-a2b727a702cd] Running
	I0318 11:51:09.513250   13008 system_pods.go:89] "kube-controller-manager-ha-747000" [7a40710c-8acc-4457-93ba-a31a4073be21] Running
	I0318 11:51:09.513250   13008 system_pods.go:89] "kube-controller-manager-ha-747000-m02" [e191ed3e-7566-4f60-baa0-957f4d7522cc] Running
	I0318 11:51:09.513250   13008 system_pods.go:89] "kube-proxy-lp986" [8567319d-191a-47b4-a5b6-f615907650d3] Running
	I0318 11:51:09.513250   13008 system_pods.go:89] "kube-proxy-zzg5q" [c9403c4f-57d4-4a1c-b629-b8c3f53db9c9] Running
	I0318 11:51:09.513250   13008 system_pods.go:89] "kube-scheduler-ha-747000" [fbfffede-3da2-45f0-abbb-28a58072068b] Running
	I0318 11:51:09.513250   13008 system_pods.go:89] "kube-scheduler-ha-747000-m02" [cdc9ff95-abdf-4913-9e30-dbc8f2785f60] Running
	I0318 11:51:09.513250   13008 system_pods.go:89] "kube-vip-ha-747000" [3e440bfd-5dc2-4981-a26d-c0ae17c250d5] Running
	I0318 11:51:09.513250   13008 system_pods.go:89] "kube-vip-ha-747000-m02" [992c820a-4b58-4baa-b6e4-f0c19a65ad85] Running
	I0318 11:51:09.513250   13008 system_pods.go:89] "storage-provisioner" [de0ac48e-e3cb-430d-8dc6-6e52d350a38e] Running
	I0318 11:51:09.513250   13008 system_pods.go:126] duration metric: took 188.9394ms to wait for k8s-apps to be running ...
	I0318 11:51:09.513250   13008 system_svc.go:44] waiting for kubelet service to be running ....
	I0318 11:51:09.524281   13008 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 11:51:09.550062   13008 system_svc.go:56] duration metric: took 36.8113ms WaitForService to wait for kubelet
	I0318 11:51:09.550062   13008 kubeadm.go:576] duration metric: took 16.2950816s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 11:51:09.550062   13008 node_conditions.go:102] verifying NodePressure condition ...
	I0318 11:51:09.707434   13008 request.go:629] Waited for 157.3708ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.135.65:8443/api/v1/nodes
	I0318 11:51:09.707807   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/nodes
	I0318 11:51:09.707807   13008 round_trippers.go:469] Request Headers:
	I0318 11:51:09.707807   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:51:09.707807   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:51:09.708522   13008 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0318 11:51:09.714542   13008 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0318 11:51:09.714604   13008 node_conditions.go:123] node cpu capacity is 2
	I0318 11:51:09.714604   13008 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0318 11:51:09.714694   13008 node_conditions.go:123] node cpu capacity is 2
	I0318 11:51:09.714694   13008 node_conditions.go:105] duration metric: took 164.631ms to run NodePressure ...
	I0318 11:51:09.714694   13008 start.go:240] waiting for startup goroutines ...
	I0318 11:51:09.714740   13008 start.go:254] writing updated cluster config ...
	I0318 11:51:09.719533   13008 out.go:177] 
	I0318 11:51:09.729311   13008 config.go:182] Loaded profile config "ha-747000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 11:51:09.729311   13008 profile.go:142] Saving config to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-747000\config.json ...
	I0318 11:51:09.730176   13008 out.go:177] * Starting "ha-747000-m03" control-plane node in "ha-747000" cluster
	I0318 11:51:09.735466   13008 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0318 11:51:09.735466   13008 cache.go:56] Caching tarball of preloaded images
	I0318 11:51:09.737723   13008 preload.go:173] Found C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0318 11:51:09.738000   13008 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0318 11:51:09.738173   13008 profile.go:142] Saving config to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-747000\config.json ...
	I0318 11:51:09.741236   13008 start.go:360] acquireMachinesLock for ha-747000-m03: {Name:mk88ace50ad3bf72786f3a589a5328076247f3a1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 11:51:09.741236   13008 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-747000-m03"
	I0318 11:51:09.741765   13008 start.go:93] Provisioning new machine with config: &{Name:ha-747000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-747000 Namespace:default APIServerHAVIP:172.30.143.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.30.135.65 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.30.142.66 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 11:51:09.741949   13008 start.go:125] createHost starting for "m03" (driver="hyperv")
	I0318 11:51:09.745525   13008 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0318 11:51:09.745525   13008 start.go:159] libmachine.API.Create for "ha-747000" (driver="hyperv")
	I0318 11:51:09.745525   13008 client.go:168] LocalClient.Create starting
	I0318 11:51:09.746208   13008 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem
	I0318 11:51:09.746208   13008 main.go:141] libmachine: Decoding PEM data...
	I0318 11:51:09.746208   13008 main.go:141] libmachine: Parsing certificate...
	I0318 11:51:09.746208   13008 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem
	I0318 11:51:09.747002   13008 main.go:141] libmachine: Decoding PEM data...
	I0318 11:51:09.747002   13008 main.go:141] libmachine: Parsing certificate...
	I0318 11:51:09.747002   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0318 11:51:11.565058   13008 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0318 11:51:11.565406   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:51:11.565514   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0318 11:51:13.265330   13008 main.go:141] libmachine: [stdout =====>] : False
	
	I0318 11:51:13.265583   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:51:13.265719   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0318 11:51:14.728659   13008 main.go:141] libmachine: [stdout =====>] : True
	
	I0318 11:51:14.728768   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:51:14.728768   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0318 11:51:18.216772   13008 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0318 11:51:18.228236   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:51:18.230116   13008 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube3/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.32.1-1710520390-17991-amd64.iso...
	I0318 11:51:18.632818   13008 main.go:141] libmachine: Creating SSH key...
	I0318 11:51:19.295001   13008 main.go:141] libmachine: Creating VM...
	I0318 11:51:19.295001   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0318 11:51:22.065289   13008 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0318 11:51:22.076345   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:51:22.076459   13008 main.go:141] libmachine: Using switch "Default Switch"
	I0318 11:51:22.076520   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0318 11:51:23.797013   13008 main.go:141] libmachine: [stdout =====>] : True
	
	I0318 11:51:23.804574   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:51:23.804574   13008 main.go:141] libmachine: Creating VHD
	I0318 11:51:23.804684   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-747000-m03\fixed.vhd' -SizeBytes 10MB -Fixed
	I0318 11:51:27.377191   13008 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube3
	Path                    : C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-747000-m03\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : F1A1A529-D500-4573-A5EA-EF5B4F8ED67E
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0318 11:51:27.388348   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:51:27.388348   13008 main.go:141] libmachine: Writing magic tar header
	I0318 11:51:27.388442   13008 main.go:141] libmachine: Writing SSH key tar header
	I0318 11:51:27.399364   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-747000-m03\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-747000-m03\disk.vhd' -VHDType Dynamic -DeleteSource
	I0318 11:51:30.454995   13008 main.go:141] libmachine: [stdout =====>] : 
	I0318 11:51:30.466256   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:51:30.466375   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-747000-m03\disk.vhd' -SizeBytes 20000MB
	I0318 11:51:32.946085   13008 main.go:141] libmachine: [stdout =====>] : 
	I0318 11:51:32.946085   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:51:32.956244   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-747000-m03 -Path 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-747000-m03' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0318 11:51:36.339197   13008 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	ha-747000-m03 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0318 11:51:36.349762   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:51:36.349762   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-747000-m03 -DynamicMemoryEnabled $false
	I0318 11:51:38.443462   13008 main.go:141] libmachine: [stdout =====>] : 
	I0318 11:51:38.453384   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:51:38.453384   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-747000-m03 -Count 2
	I0318 11:51:40.473649   13008 main.go:141] libmachine: [stdout =====>] : 
	I0318 11:51:40.484119   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:51:40.484119   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-747000-m03 -Path 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-747000-m03\boot2docker.iso'
	I0318 11:51:42.944520   13008 main.go:141] libmachine: [stdout =====>] : 
	I0318 11:51:42.956734   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:51:42.956734   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-747000-m03 -Path 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-747000-m03\disk.vhd'
	I0318 11:51:45.530757   13008 main.go:141] libmachine: [stdout =====>] : 
	I0318 11:51:45.540000   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:51:45.540000   13008 main.go:141] libmachine: Starting VM...
	I0318 11:51:45.540000   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-747000-m03
	I0318 11:51:48.459313   13008 main.go:141] libmachine: [stdout =====>] : 
	I0318 11:51:48.459313   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:51:48.459313   13008 main.go:141] libmachine: Waiting for host to start...
	I0318 11:51:48.459420   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-747000-m03 ).state
	I0318 11:51:50.651297   13008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:51:50.651531   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:51:50.651626   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-747000-m03 ).networkadapters[0]).ipaddresses[0]
	I0318 11:51:53.116543   13008 main.go:141] libmachine: [stdout =====>] : 
	I0318 11:51:53.116543   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:51:54.132151   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-747000-m03 ).state
	I0318 11:51:56.209332   13008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:51:56.209332   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:51:56.221722   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-747000-m03 ).networkadapters[0]).ipaddresses[0]
	I0318 11:51:58.676967   13008 main.go:141] libmachine: [stdout =====>] : 
	I0318 11:51:58.676967   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:51:59.677640   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-747000-m03 ).state
	I0318 11:52:01.795433   13008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:52:01.795433   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:52:01.795637   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-747000-m03 ).networkadapters[0]).ipaddresses[0]
	I0318 11:52:04.196376   13008 main.go:141] libmachine: [stdout =====>] : 
	I0318 11:52:04.196376   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:52:05.203896   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-747000-m03 ).state
	I0318 11:52:07.293353   13008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:52:07.293353   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:52:07.293507   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-747000-m03 ).networkadapters[0]).ipaddresses[0]
	I0318 11:52:09.762097   13008 main.go:141] libmachine: [stdout =====>] : 
	I0318 11:52:09.762097   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:52:10.771756   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-747000-m03 ).state
	I0318 11:52:12.944306   13008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:52:12.944554   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:52:12.944554   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-747000-m03 ).networkadapters[0]).ipaddresses[0]
	I0318 11:52:15.372484   13008 main.go:141] libmachine: [stdout =====>] : 172.30.129.111
	
	I0318 11:52:15.372542   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:52:15.372673   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-747000-m03 ).state
	I0318 11:52:17.380617   13008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:52:17.380617   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:52:17.380617   13008 machine.go:94] provisionDockerMachine start ...
	I0318 11:52:17.390578   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-747000-m03 ).state
	I0318 11:52:19.405757   13008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:52:19.405757   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:52:19.405757   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-747000-m03 ).networkadapters[0]).ipaddresses[0]
	I0318 11:52:21.811228   13008 main.go:141] libmachine: [stdout =====>] : 172.30.129.111
	
	I0318 11:52:21.822287   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:52:21.828154   13008 main.go:141] libmachine: Using SSH client type: native
	I0318 11:52:21.835255   13008 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa89f80] 0xa8cb60 <nil>  [] 0s} 172.30.129.111 22 <nil> <nil>}
	I0318 11:52:21.835255   13008 main.go:141] libmachine: About to run SSH command:
	hostname
	I0318 11:52:21.980718   13008 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0318 11:52:21.980718   13008 buildroot.go:166] provisioning hostname "ha-747000-m03"
	I0318 11:52:21.980718   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-747000-m03 ).state
	I0318 11:52:23.980745   13008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:52:23.991985   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:52:23.991985   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-747000-m03 ).networkadapters[0]).ipaddresses[0]
	I0318 11:52:26.389031   13008 main.go:141] libmachine: [stdout =====>] : 172.30.129.111
	
	I0318 11:52:26.389031   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:52:26.404880   13008 main.go:141] libmachine: Using SSH client type: native
	I0318 11:52:26.405086   13008 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa89f80] 0xa8cb60 <nil>  [] 0s} 172.30.129.111 22 <nil> <nil>}
	I0318 11:52:26.405086   13008 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-747000-m03 && echo "ha-747000-m03" | sudo tee /etc/hostname
	I0318 11:52:26.561884   13008 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-747000-m03
	
	I0318 11:52:26.561884   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-747000-m03 ).state
	I0318 11:52:28.562982   13008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:52:28.562982   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:52:28.563127   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-747000-m03 ).networkadapters[0]).ipaddresses[0]
	I0318 11:52:30.946179   13008 main.go:141] libmachine: [stdout =====>] : 172.30.129.111
	
	I0318 11:52:30.946179   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:52:30.951244   13008 main.go:141] libmachine: Using SSH client type: native
	I0318 11:52:30.952049   13008 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa89f80] 0xa8cb60 <nil>  [] 0s} 172.30.129.111 22 <nil> <nil>}
	I0318 11:52:30.952049   13008 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-747000-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-747000-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-747000-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0318 11:52:31.098440   13008 main.go:141] libmachine: SSH cmd err, output: <nil>: 
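The SSH command above is minikube's idempotent `/etc/hosts` fixup: if no line already ends in the node name, it either rewrites the existing `127.0.1.1` entry or appends one. A minimal local sketch of the same logic, using a temp file and an illustrative stale hostname in place of the guest's real `/etc/hosts`:

```shell
# Sketch of the /etc/hosts update logged above (temp file and "old-name"
# are illustrative; the real script runs under sudo on the guest VM).
hosts_file=$(mktemp)
printf '127.0.0.1 localhost\n127.0.1.1 old-name\n' > "$hosts_file"
name=ha-747000-m03

# Only touch the file if no entry for $name exists yet (idempotent).
if ! grep -q "\s${name}$" "$hosts_file"; then
  if grep -q '^127.0.1.1\s' "$hosts_file"; then
    # Rewrite the existing 127.0.1.1 line in place.
    sed -i "s/^127.0.1.1\s.*/127.0.1.1 ${name}/" "$hosts_file"
  else
    # No 127.0.1.1 line yet: append one.
    echo "127.0.1.1 ${name}" >> "$hosts_file"
  fi
fi
```

Because the guard runs first, re-running the provisioner leaves the file unchanged, which is why the logged SSH command produces no output on success.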
	I0318 11:52:31.098440   13008 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube3\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube3\minikube-integration\.minikube}
	I0318 11:52:31.098977   13008 buildroot.go:174] setting up certificates
	I0318 11:52:31.098977   13008 provision.go:84] configureAuth start
	I0318 11:52:31.099050   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-747000-m03 ).state
	I0318 11:52:33.113600   13008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:52:33.113706   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:52:33.113706   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-747000-m03 ).networkadapters[0]).ipaddresses[0]
	I0318 11:52:35.498593   13008 main.go:141] libmachine: [stdout =====>] : 172.30.129.111
	
	I0318 11:52:35.509200   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:52:35.509261   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-747000-m03 ).state
	I0318 11:52:37.486399   13008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:52:37.489591   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:52:37.489648   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-747000-m03 ).networkadapters[0]).ipaddresses[0]
	I0318 11:52:39.915103   13008 main.go:141] libmachine: [stdout =====>] : 172.30.129.111
	
	I0318 11:52:39.926035   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:52:39.926035   13008 provision.go:143] copyHostCerts
	I0318 11:52:39.926134   13008 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube3\minikube-integration\.minikube/ca.pem
	I0318 11:52:39.926134   13008 exec_runner.go:144] found C:\Users\jenkins.minikube3\minikube-integration\.minikube/ca.pem, removing ...
	I0318 11:52:39.926134   13008 exec_runner.go:203] rm: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.pem
	I0318 11:52:39.926930   13008 exec_runner.go:151] cp: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube3\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0318 11:52:39.928096   13008 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube3\minikube-integration\.minikube/cert.pem
	I0318 11:52:39.928311   13008 exec_runner.go:144] found C:\Users\jenkins.minikube3\minikube-integration\.minikube/cert.pem, removing ...
	I0318 11:52:39.928311   13008 exec_runner.go:203] rm: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cert.pem
	I0318 11:52:39.928311   13008 exec_runner.go:151] cp: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube3\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0318 11:52:39.929707   13008 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube3\minikube-integration\.minikube/key.pem
	I0318 11:52:39.929707   13008 exec_runner.go:144] found C:\Users\jenkins.minikube3\minikube-integration\.minikube/key.pem, removing ...
	I0318 11:52:39.929707   13008 exec_runner.go:203] rm: C:\Users\jenkins.minikube3\minikube-integration\.minikube\key.pem
	I0318 11:52:39.930568   13008 exec_runner.go:151] cp: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube3\minikube-integration\.minikube/key.pem (1679 bytes)
	I0318 11:52:39.931808   13008 provision.go:117] generating server cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-747000-m03 san=[127.0.0.1 172.30.129.111 ha-747000-m03 localhost minikube]
	I0318 11:52:40.032655   13008 provision.go:177] copyRemoteCerts
	I0318 11:52:40.052272   13008 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0318 11:52:40.052422   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-747000-m03 ).state
	I0318 11:52:42.068198   13008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:52:42.068198   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:52:42.068198   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-747000-m03 ).networkadapters[0]).ipaddresses[0]
	I0318 11:52:44.443769   13008 main.go:141] libmachine: [stdout =====>] : 172.30.129.111
	
	I0318 11:52:44.454482   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:52:44.454482   13008 sshutil.go:53] new ssh client: &{IP:172.30.129.111 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-747000-m03\id_rsa Username:docker}
	I0318 11:52:44.563353   13008 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.5107569s)
	I0318 11:52:44.563565   13008 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0318 11:52:44.563671   13008 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0318 11:52:44.607181   13008 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0318 11:52:44.607181   13008 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0318 11:52:44.650967   13008 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0318 11:52:44.651395   13008 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0318 11:52:44.692104   13008 provision.go:87] duration metric: took 13.5930257s to configureAuth
	I0318 11:52:44.692104   13008 buildroot.go:189] setting minikube options for container-runtime
	I0318 11:52:44.692860   13008 config.go:182] Loaded profile config "ha-747000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 11:52:44.693003   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-747000-m03 ).state
	I0318 11:52:46.668910   13008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:52:46.669045   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:52:46.669045   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-747000-m03 ).networkadapters[0]).ipaddresses[0]
	I0318 11:52:49.057457   13008 main.go:141] libmachine: [stdout =====>] : 172.30.129.111
	
	I0318 11:52:49.068991   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:52:49.075400   13008 main.go:141] libmachine: Using SSH client type: native
	I0318 11:52:49.075604   13008 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa89f80] 0xa8cb60 <nil>  [] 0s} 172.30.129.111 22 <nil> <nil>}
	I0318 11:52:49.075604   13008 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0318 11:52:49.213300   13008 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0318 11:52:49.213300   13008 buildroot.go:70] root file system type: tmpfs
	I0318 11:52:49.213610   13008 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0318 11:52:49.213806   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-747000-m03 ).state
	I0318 11:52:51.196271   13008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:52:51.207107   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:52:51.207163   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-747000-m03 ).networkadapters[0]).ipaddresses[0]
	I0318 11:52:53.623795   13008 main.go:141] libmachine: [stdout =====>] : 172.30.129.111
	
	I0318 11:52:53.627461   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:52:53.633314   13008 main.go:141] libmachine: Using SSH client type: native
	I0318 11:52:53.634079   13008 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa89f80] 0xa8cb60 <nil>  [] 0s} 172.30.129.111 22 <nil> <nil>}
	I0318 11:52:53.634203   13008 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.30.135.65"
	Environment="NO_PROXY=172.30.135.65,172.30.142.66"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0318 11:52:53.795227   13008 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.30.135.65
	Environment=NO_PROXY=172.30.135.65,172.30.142.66
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0318 11:52:53.795227   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-747000-m03 ).state
	I0318 11:52:55.773077   13008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:52:55.773077   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:52:55.783852   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-747000-m03 ).networkadapters[0]).ipaddresses[0]
	I0318 11:52:58.157321   13008 main.go:141] libmachine: [stdout =====>] : 172.30.129.111
	
	I0318 11:52:58.157403   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:52:58.163442   13008 main.go:141] libmachine: Using SSH client type: native
	I0318 11:52:58.163442   13008 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa89f80] 0xa8cb60 <nil>  [] 0s} 172.30.129.111 22 <nil> <nil>}
	I0318 11:52:58.163442   13008 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0318 11:53:00.239286   13008 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0318 11:53:00.239340   13008 machine.go:97] duration metric: took 42.8584029s to provisionDockerMachine
	I0318 11:53:00.239367   13008 client.go:171] duration metric: took 1m50.4930149s to LocalClient.Create
	I0318 11:53:00.239446   13008 start.go:167] duration metric: took 1m50.4930939s to libmachine.API.Create "ha-747000"
	I0318 11:53:00.239616   13008 start.go:293] postStartSetup for "ha-747000-m03" (driver="hyperv")
	I0318 11:53:00.239663   13008 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0318 11:53:00.251434   13008 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0318 11:53:00.251434   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-747000-m03 ).state
	I0318 11:53:02.207240   13008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:53:02.217885   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:53:02.217885   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-747000-m03 ).networkadapters[0]).ipaddresses[0]
	I0318 11:53:04.658769   13008 main.go:141] libmachine: [stdout =====>] : 172.30.129.111
	
	I0318 11:53:04.658962   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:53:04.659127   13008 sshutil.go:53] new ssh client: &{IP:172.30.129.111 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-747000-m03\id_rsa Username:docker}
	I0318 11:53:04.761438   13008 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.5099705s)
	I0318 11:53:04.774826   13008 ssh_runner.go:195] Run: cat /etc/os-release
	I0318 11:53:04.781473   13008 info.go:137] Remote host: Buildroot 2023.02.9
	I0318 11:53:04.781473   13008 filesync.go:126] Scanning C:\Users\jenkins.minikube3\minikube-integration\.minikube\addons for local assets ...
	I0318 11:53:04.782280   13008 filesync.go:126] Scanning C:\Users\jenkins.minikube3\minikube-integration\.minikube\files for local assets ...
	I0318 11:53:04.783736   13008 filesync.go:149] local asset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\134242.pem -> 134242.pem in /etc/ssl/certs
	I0318 11:53:04.783736   13008 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\134242.pem -> /etc/ssl/certs/134242.pem
	I0318 11:53:04.798308   13008 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0318 11:53:04.816394   13008 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\134242.pem --> /etc/ssl/certs/134242.pem (1708 bytes)
	I0318 11:53:04.862044   13008 start.go:296] duration metric: took 4.6223931s for postStartSetup
	I0318 11:53:04.865306   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-747000-m03 ).state
	I0318 11:53:06.907424   13008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:53:06.907477   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:53:06.907477   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-747000-m03 ).networkadapters[0]).ipaddresses[0]
	I0318 11:53:09.298023   13008 main.go:141] libmachine: [stdout =====>] : 172.30.129.111
	
	I0318 11:53:09.298073   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:53:09.298073   13008 profile.go:142] Saving config to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-747000\config.json ...
	I0318 11:53:09.300712   13008 start.go:128] duration metric: took 1m59.5578674s to createHost
	I0318 11:53:09.300712   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-747000-m03 ).state
	I0318 11:53:11.266169   13008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:53:11.266169   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:53:11.276483   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-747000-m03 ).networkadapters[0]).ipaddresses[0]
	I0318 11:53:13.668227   13008 main.go:141] libmachine: [stdout =====>] : 172.30.129.111
	
	I0318 11:53:13.668375   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:53:13.673619   13008 main.go:141] libmachine: Using SSH client type: native
	I0318 11:53:13.674101   13008 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa89f80] 0xa8cb60 <nil>  [] 0s} 172.30.129.111 22 <nil> <nil>}
	I0318 11:53:13.674176   13008 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0318 11:53:13.816856   13008 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710762793.819586720
	
	I0318 11:53:13.816856   13008 fix.go:216] guest clock: 1710762793.819586720
	I0318 11:53:13.816856   13008 fix.go:229] Guest: 2024-03-18 11:53:13.81958672 +0000 UTC Remote: 2024-03-18 11:53:09.3007123 +0000 UTC m=+542.420693601 (delta=4.51887442s)
	I0318 11:53:13.816856   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-747000-m03 ).state
	I0318 11:53:15.840509   13008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:53:15.840589   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:53:15.840660   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-747000-m03 ).networkadapters[0]).ipaddresses[0]
	I0318 11:53:18.240907   13008 main.go:141] libmachine: [stdout =====>] : 172.30.129.111
	
	I0318 11:53:18.240907   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:53:18.245816   13008 main.go:141] libmachine: Using SSH client type: native
	I0318 11:53:18.246741   13008 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa89f80] 0xa8cb60 <nil>  [] 0s} 172.30.129.111 22 <nil> <nil>}
	I0318 11:53:18.246741   13008 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1710762793
	I0318 11:53:18.388282   13008 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Mar 18 11:53:13 UTC 2024
	
	I0318 11:53:18.388282   13008 fix.go:236] clock set: Mon Mar 18 11:53:13 UTC 2024
	 (err=<nil>)
	I0318 11:53:18.388282   13008 start.go:83] releasing machines lock for "ha-747000-m03", held for 2m8.646083s
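The clock-fix sequence above reads the guest's epoch via `date +%s.%N`, compares it with the host-side timestamp, and on drift (here a ~4.5s delta) sets the guest clock with `sudo date -s @<epoch>`. A small sketch of the delta computation, using the epoch parsed from this log and an illustrative host value:

```shell
# Sketch of the guest-vs-host clock comparison logged above.
# guest_epoch comes from the log's `date` output; host_epoch is illustrative.
guest_epoch=1710762793
host_epoch=1710762789
delta=$((guest_epoch - host_epoch))
echo "delta=${delta}s"   # minikube would run `date -s @<epoch>` on drift
```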
	I0318 11:53:18.388282   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-747000-m03 ).state
	I0318 11:53:20.387927   13008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:53:20.399958   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:53:20.400016   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-747000-m03 ).networkadapters[0]).ipaddresses[0]
	I0318 11:53:22.837049   13008 main.go:141] libmachine: [stdout =====>] : 172.30.129.111
	
	I0318 11:53:22.837049   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:53:22.840901   13008 out.go:177] * Found network options:
	I0318 11:53:22.843618   13008 out.go:177]   - NO_PROXY=172.30.135.65,172.30.142.66
	W0318 11:53:22.846083   13008 proxy.go:119] fail to check proxy env: Error ip not in block
	W0318 11:53:22.846148   13008 proxy.go:119] fail to check proxy env: Error ip not in block
	I0318 11:53:22.848854   13008 out.go:177]   - NO_PROXY=172.30.135.65,172.30.142.66
	W0318 11:53:22.851436   13008 proxy.go:119] fail to check proxy env: Error ip not in block
	W0318 11:53:22.851548   13008 proxy.go:119] fail to check proxy env: Error ip not in block
	W0318 11:53:22.852727   13008 proxy.go:119] fail to check proxy env: Error ip not in block
	W0318 11:53:22.852727   13008 proxy.go:119] fail to check proxy env: Error ip not in block
	I0318 11:53:22.854816   13008 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0318 11:53:22.854816   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-747000-m03 ).state
	I0318 11:53:22.864544   13008 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0318 11:53:22.864544   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-747000-m03 ).state
	I0318 11:53:24.997017   13008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:53:24.997017   13008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:53:24.997278   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:53:24.997017   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:53:24.997443   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-747000-m03 ).networkadapters[0]).ipaddresses[0]
	I0318 11:53:24.997443   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-747000-m03 ).networkadapters[0]).ipaddresses[0]
	I0318 11:53:27.540134   13008 main.go:141] libmachine: [stdout =====>] : 172.30.129.111
	
	I0318 11:53:27.551154   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:53:27.551376   13008 sshutil.go:53] new ssh client: &{IP:172.30.129.111 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-747000-m03\id_rsa Username:docker}
	I0318 11:53:27.578099   13008 main.go:141] libmachine: [stdout =====>] : 172.30.129.111
	
	I0318 11:53:27.579462   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:53:27.579493   13008 sshutil.go:53] new ssh client: &{IP:172.30.129.111 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-747000-m03\id_rsa Username:docker}
	I0318 11:53:27.658528   13008 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (4.7939477s)
	W0318 11:53:27.658528   13008 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0318 11:53:27.670007   13008 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0318 11:53:27.804780   13008 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0318 11:53:27.804780   13008 start.go:494] detecting cgroup driver to use...
	I0318 11:53:27.804780   13008 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.949843s)
	I0318 11:53:27.804844   13008 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0318 11:53:27.847143   13008 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0318 11:53:27.879191   13008 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0318 11:53:27.897171   13008 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0318 11:53:27.911209   13008 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0318 11:53:27.941754   13008 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0318 11:53:27.972601   13008 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0318 11:53:28.005444   13008 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0318 11:53:28.036166   13008 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0318 11:53:28.069132   13008 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0318 11:53:28.101302   13008 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0318 11:53:28.132522   13008 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0318 11:53:28.162446   13008 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 11:53:28.350841   13008 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0318 11:53:28.382442   13008 start.go:494] detecting cgroup driver to use...
	I0318 11:53:28.399314   13008 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0318 11:53:28.434511   13008 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0318 11:53:28.468209   13008 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0318 11:53:28.514760   13008 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0318 11:53:28.549601   13008 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0318 11:53:28.584048   13008 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0318 11:53:28.642122   13008 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0318 11:53:28.666311   13008 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0318 11:53:28.715407   13008 ssh_runner.go:195] Run: which cri-dockerd
	I0318 11:53:28.731981   13008 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0318 11:53:28.748154   13008 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0318 11:53:28.791106   13008 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0318 11:53:28.983435   13008 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0318 11:53:29.149435   13008 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0318 11:53:29.149435   13008 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
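docker.go reports configuring docker to use the "cgroupfs" cgroup driver via a 130-byte `/etc/docker/daemon.json`. The exact JSON is minikube's; an assumed minimal equivalent (the real file may carry additional keys such as log options), written to a scratch directory:

```shell
# Assumed shape of the daemon.json that forces the cgroupfs cgroup driver.
dir=$(mktemp -d)
cat > "$dir/daemon.json" <<'EOF'
{
  "exec-opts": ["native.cgroupdriver=cgroupfs"]
}
EOF
cat "$dir/daemon.json"
```

After writing this file, the log's `systemctl daemon-reload` / `systemctl restart docker` pair is what makes the setting take effect.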
	I0318 11:53:29.191365   13008 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 11:53:29.387451   13008 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0318 11:53:31.878051   13008 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.4905816s)
	I0318 11:53:31.889810   13008 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0318 11:53:31.924945   13008 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0318 11:53:31.959564   13008 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0318 11:53:32.143097   13008 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0318 11:53:32.324791   13008 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 11:53:32.502844   13008 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0318 11:53:32.541915   13008 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0318 11:53:32.577694   13008 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 11:53:32.770383   13008 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0318 11:53:32.869973   13008 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0318 11:53:32.882191   13008 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0318 11:53:32.889954   13008 start.go:562] Will wait 60s for crictl version
	I0318 11:53:32.901493   13008 ssh_runner.go:195] Run: which crictl
	I0318 11:53:32.919237   13008 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0318 11:53:32.990422   13008 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  25.0.4
	RuntimeApiVersion:  v1
	I0318 11:53:33.001399   13008 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0318 11:53:33.040941   13008 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0318 11:53:33.078865   13008 out.go:204] * Preparing Kubernetes v1.28.4 on Docker 25.0.4 ...
	I0318 11:53:33.085280   13008 out.go:177]   - env NO_PROXY=172.30.135.65
	I0318 11:53:33.088523   13008 out.go:177]   - env NO_PROXY=172.30.135.65,172.30.142.66
	I0318 11:53:33.090716   13008 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0318 11:53:33.095911   13008 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0318 11:53:33.095911   13008 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0318 11:53:33.095911   13008 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0318 11:53:33.095911   13008 ip.go:207] Found interface: {Index:9 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:93:df:b5 Flags:up|broadcast|multicast|running}
	I0318 11:53:33.099560   13008 ip.go:210] interface addr: fe80::572b:6b1d:9130:e88b/64
	I0318 11:53:33.099560   13008 ip.go:210] interface addr: 172.30.128.1/20
	I0318 11:53:33.110573   13008 ssh_runner.go:195] Run: grep 172.30.128.1	host.minikube.internal$ /etc/hosts
	I0318 11:53:33.117142   13008 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.30.128.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
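The one-liner above pins `host.minikube.internal` idempotently: strip any prior entry with `grep -v`, append the fresh one, and copy the result back. The same pattern demonstrated on a scratch hosts file (no sudo, no real `/etc/hosts` touched):

```shell
# Demonstrate the grep -v / echo / cp idempotent-hosts-entry pattern.
hosts=$(mktemp)
printf '127.0.0.1\tlocalhost\n10.0.0.9\thost.minikube.internal\n' > "$hosts"
{ grep -v $'\thost.minikube.internal$' "$hosts"; \
  printf '172.30.128.1\thost.minikube.internal\n'; } > "$hosts.new"
cp "$hosts.new" "$hosts"
cat "$hosts"
```

Running it again leaves exactly one entry, which is why minikube can re-run it on every start.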
	I0318 11:53:33.135644   13008 mustload.go:65] Loading cluster: ha-747000
	I0318 11:53:33.136250   13008 config.go:182] Loaded profile config "ha-747000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 11:53:33.136589   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-747000 ).state
	I0318 11:53:35.123341   13008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:53:35.123341   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:53:35.123341   13008 host.go:66] Checking if "ha-747000" exists ...
	I0318 11:53:35.134481   13008 certs.go:68] Setting up C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-747000 for IP: 172.30.129.111
	I0318 11:53:35.134481   13008 certs.go:194] generating shared ca certs ...
	I0318 11:53:35.134481   13008 certs.go:226] acquiring lock for ca certs: {Name:mk09ff4ada22228900e1815c250154c7d8d76854 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 11:53:35.135633   13008 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.key
	I0318 11:53:35.135731   13008 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.key
	I0318 11:53:35.135731   13008 certs.go:256] generating profile certs ...
	I0318 11:53:35.136790   13008 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-747000\client.key
	I0318 11:53:35.137043   13008 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-747000\apiserver.key.ebeb1e18
	I0318 11:53:35.137201   13008 crypto.go:68] Generating cert C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-747000\apiserver.crt.ebeb1e18 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.30.135.65 172.30.142.66 172.30.129.111 172.30.143.254]
	I0318 11:53:35.258911   13008 crypto.go:156] Writing cert to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-747000\apiserver.crt.ebeb1e18 ...
	I0318 11:53:35.258911   13008 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-747000\apiserver.crt.ebeb1e18: {Name:mk52d5b7a21083f1d5e9cc742d102dd7f89d63c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 11:53:35.263882   13008 crypto.go:164] Writing key to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-747000\apiserver.key.ebeb1e18 ...
	I0318 11:53:35.263882   13008 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-747000\apiserver.key.ebeb1e18: {Name:mk807ffcb99004c8a1197c4bcac2e074e702d374 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 11:53:35.265155   13008 certs.go:381] copying C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-747000\apiserver.crt.ebeb1e18 -> C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-747000\apiserver.crt
	I0318 11:53:35.269044   13008 certs.go:385] copying C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-747000\apiserver.key.ebeb1e18 -> C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-747000\apiserver.key
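The apiserver cert generated above carries the full HA SAN set: the service VIP 10.96.0.1, loopback, all three control-plane node IPs, and the kube-vip address 172.30.143.254. minikube does this with its own Go crypto helpers; an openssl equivalent for the same SAN list (assumes OpenSSL 1.1.1+ for `-addext`):

```shell
# Hedged sketch: apiserver-style cert with the SAN set from the log.
dir=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout "$dir/apiserver.key" -out "$dir/apiserver.crt" \
  -subj "/CN=minikube" \
  -addext "subjectAltName=IP:10.96.0.1,IP:127.0.0.1,IP:10.0.0.1,IP:172.30.135.65,IP:172.30.142.66,IP:172.30.129.111,IP:172.30.143.254" \
  2>/dev/null
openssl x509 -in "$dir/apiserver.crt" -noout -text | grep 'IP Address'
```

Because the VIP 172.30.143.254 is in the SANs, clients can reach any control-plane node through kube-vip without TLS hostname errors.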
	I0318 11:53:35.278381   13008 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-747000\proxy-client.key
	I0318 11:53:35.278381   13008 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0318 11:53:35.279542   13008 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0318 11:53:35.279759   13008 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0318 11:53:35.279759   13008 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0318 11:53:35.279759   13008 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-747000\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0318 11:53:35.280287   13008 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-747000\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0318 11:53:35.280603   13008 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-747000\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0318 11:53:35.280696   13008 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-747000\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0318 11:53:35.281432   13008 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\13424.pem (1338 bytes)
	W0318 11:53:35.281704   13008 certs.go:480] ignoring C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\13424_empty.pem, impossibly tiny 0 bytes
	I0318 11:53:35.281928   13008 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0318 11:53:35.282213   13008 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0318 11:53:35.282453   13008 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0318 11:53:35.282587   13008 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0318 11:53:35.282587   13008 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\134242.pem (1708 bytes)
	I0318 11:53:35.283271   13008 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\134242.pem -> /usr/share/ca-certificates/134242.pem
	I0318 11:53:35.283408   13008 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0318 11:53:35.283491   13008 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\13424.pem -> /usr/share/ca-certificates/13424.pem
	I0318 11:53:35.283802   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-747000 ).state
	I0318 11:53:37.281582   13008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:53:37.281582   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:53:37.281695   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-747000 ).networkadapters[0]).ipaddresses[0]
	I0318 11:53:39.700207   13008 main.go:141] libmachine: [stdout =====>] : 172.30.135.65
	
	I0318 11:53:39.700207   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:53:39.711717   13008 sshutil.go:53] new ssh client: &{IP:172.30.135.65 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-747000\id_rsa Username:docker}
	I0318 11:53:39.811773   13008 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0318 11:53:39.818854   13008 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0318 11:53:39.852019   13008 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0318 11:53:39.859467   13008 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0318 11:53:39.888892   13008 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0318 11:53:39.895195   13008 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0318 11:53:39.923719   13008 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0318 11:53:39.931098   13008 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0318 11:53:39.965720   13008 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0318 11:53:39.973166   13008 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0318 11:53:40.004032   13008 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0318 11:53:40.010159   13008 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0318 11:53:40.029033   13008 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0318 11:53:40.074090   13008 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0318 11:53:40.119652   13008 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0318 11:53:40.167186   13008 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0318 11:53:40.212478   13008 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-747000\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0318 11:53:40.254743   13008 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-747000\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0318 11:53:40.296024   13008 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-747000\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0318 11:53:40.338317   13008 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-747000\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0318 11:53:40.382558   13008 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\134242.pem --> /usr/share/ca-certificates/134242.pem (1708 bytes)
	I0318 11:53:40.423049   13008 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0318 11:53:40.462810   13008 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\13424.pem --> /usr/share/ca-certificates/13424.pem (1338 bytes)
	I0318 11:53:40.503040   13008 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0318 11:53:40.531746   13008 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0318 11:53:40.556686   13008 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0318 11:53:40.595126   13008 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0318 11:53:40.624635   13008 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0318 11:53:40.652628   13008 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0318 11:53:40.683323   13008 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0318 11:53:40.723736   13008 ssh_runner.go:195] Run: openssl version
	I0318 11:53:40.743604   13008 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0318 11:53:40.773281   13008 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0318 11:53:40.779866   13008 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 18 11:07 /usr/share/ca-certificates/minikubeCA.pem
	I0318 11:53:40.792465   13008 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0318 11:53:40.812306   13008 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0318 11:53:40.841284   13008 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13424.pem && ln -fs /usr/share/ca-certificates/13424.pem /etc/ssl/certs/13424.pem"
	I0318 11:53:40.871415   13008 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13424.pem
	I0318 11:53:40.878613   13008 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 18 11:25 /usr/share/ca-certificates/13424.pem
	I0318 11:53:40.892455   13008 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13424.pem
	I0318 11:53:40.911639   13008 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13424.pem /etc/ssl/certs/51391683.0"
	I0318 11:53:40.942757   13008 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/134242.pem && ln -fs /usr/share/ca-certificates/134242.pem /etc/ssl/certs/134242.pem"
	I0318 11:53:40.973725   13008 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/134242.pem
	I0318 11:53:40.976524   13008 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 18 11:25 /usr/share/ca-certificates/134242.pem
	I0318 11:53:40.991869   13008 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/134242.pem
	I0318 11:53:41.011462   13008 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/134242.pem /etc/ssl/certs/3ec20f2e.0"
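The `openssl x509 -hash -noout` / `ln -fs ... /etc/ssl/certs/<hash>.0` pairs above implement OpenSSL's subject-hash lookup convention (the same thing `c_rehash` does), which is how the guest's TLS stack finds the minikube CA. Reproduced in a scratch directory:

```shell
# Build a <subject-hash>.0 symlink the way the logged ln -fs commands do.
dir=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout "$dir/ca.key" -out "$dir/ca.pem" -subj "/CN=minikubeCA" 2>/dev/null
hash=$(openssl x509 -hash -noout -in "$dir/ca.pem")
ln -fs "$dir/ca.pem" "$dir/$hash.0"
openssl x509 -noout -subject -in "$dir/$hash.0"
```

The `.0` suffix disambiguates hash collisions; a second cert with the same subject hash would get `.1`.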
	I0318 11:53:41.043154   13008 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0318 11:53:41.048678   13008 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0318 11:53:41.049092   13008 kubeadm.go:928] updating node {m03 172.30.129.111 8443 v1.28.4 docker true true} ...
	I0318 11:53:41.049324   13008 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-747000-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.30.129.111
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:ha-747000 Namespace:default APIServerHAVIP:172.30.143.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
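The kubelet unit above relies on systemd's override semantics: the empty `ExecStart=` first clears the packaged unit's command list, then the second `ExecStart=` installs the per-node command with `--hostname-override` and `--node-ip` for m03. An assumed reconstruction of the 314-byte `10-kubeadm.conf` drop-in the log scp's, written to a scratch directory:

```shell
# Assumed shape of the 10-kubeadm.conf drop-in carrying the override above.
dir=$(mktemp -d)
cat > "$dir/10-kubeadm.conf" <<'EOF'
[Unit]
Wants=docker.socket

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-747000-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.30.129.111
EOF
grep -c '^ExecStart=' "$dir/10-kubeadm.conf"
```

Without the blank `ExecStart=` reset line, systemd would reject a second `ExecStart=` in a non-oneshot service.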
	I0318 11:53:41.049415   13008 kube-vip.go:111] generating kube-vip config ...
	I0318 11:53:41.059575   13008 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0318 11:53:41.083984   13008 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0318 11:53:41.084717   13008 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.30.143.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
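The manifest above is later scp'd to `/etc/kubernetes/manifests/kube-vip.yaml` as a static pod; its behavior is driven entirely by the `env` list (VIP address, leader election, load-balancer port). A quick grep/awk sanity check that pulls the advertised VIP out of a manifest fragment like the one above (the fragment and helper are illustrative):

```shell
# Extract the configured VIP (the "address" env var) from a kube-vip manifest.
dir=$(mktemp -d)
cat > "$dir/kube-vip.yaml" <<'EOF'
    env:
    - name: address
      value: 172.30.143.254
    - name: lb_port
      value: "8443"
EOF
vip=$(awk '/name: address/{getline; sub(/.*value: /,""); print}' "$dir/kube-vip.yaml")
echo "$vip"
```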
	I0318 11:53:41.097118   13008 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0318 11:53:41.112801   13008 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.28.4: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.28.4': No such file or directory
	
	Initiating transfer...
	I0318 11:53:41.125170   13008 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.28.4
	I0318 11:53:41.141666   13008 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubelet.sha256
	I0318 11:53:41.141666   13008 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubeadm.sha256
	I0318 11:53:41.141666   13008 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubeadm -> /var/lib/minikube/binaries/v1.28.4/kubeadm
	I0318 11:53:41.141666   13008 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl.sha256
	I0318 11:53:41.142372   13008 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubectl -> /var/lib/minikube/binaries/v1.28.4/kubectl
	I0318 11:53:41.158484   13008 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 11:53:41.159097   13008 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubectl
	I0318 11:53:41.159097   13008 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubeadm
	I0318 11:53:41.177832   13008 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubectl': No such file or directory
	I0318 11:53:41.177832   13008 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubeadm': No such file or directory
	I0318 11:53:41.177832   13008 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubelet -> /var/lib/minikube/binaries/v1.28.4/kubelet
	I0318 11:53:41.179459   13008 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubeadm --> /var/lib/minikube/binaries/v1.28.4/kubeadm (49102848 bytes)
	I0318 11:53:41.179575   13008 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubectl --> /var/lib/minikube/binaries/v1.28.4/kubectl (49885184 bytes)
	I0318 11:53:41.191996   13008 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubelet
	I0318 11:53:41.245087   13008 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubelet': No such file or directory
	I0318 11:53:41.245147   13008 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubelet --> /var/lib/minikube/binaries/v1.28.4/kubelet (110850048 bytes)
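The binary.go lines above describe fetching kubelet/kubeadm/kubectl from `dl.k8s.io` with a `checksum=file:...sha256` verifier rather than serving them from cache. The underlying pattern — download a file plus its detached SHA-256 and verify before install — sketched locally so no network is needed:

```shell
# Sketch of the download-with-checksum pattern, simulated with a local file.
dir=$(mktemp -d)
printf 'fake-kubelet-binary' > "$dir/kubelet"
# In the real flow this .sha256 comes from dl.k8s.io alongside the binary:
sha256sum "$dir/kubelet" | awk '{print $1}' > "$dir/kubelet.sha256"
[ "$(sha256sum "$dir/kubelet" | awk '{print $1}')" = "$(cat "$dir/kubelet.sha256")" ] \
  && echo "checksum ok"
```

Only after this check passes does the ~110 MB kubelet get scp'd into `/var/lib/minikube/binaries/`, as the final transfer line above shows.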
	I0318 11:53:42.524669   13008 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0318 11:53:42.541555   13008 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (314 bytes)
	I0318 11:53:42.572244   13008 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0318 11:53:42.600545   13008 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0318 11:53:42.640394   13008 ssh_runner.go:195] Run: grep 172.30.143.254	control-plane.minikube.internal$ /etc/hosts
	I0318 11:53:42.648279   13008 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.30.143.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 11:53:42.679961   13008 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 11:53:42.876197   13008 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 11:53:42.908285   13008 host.go:66] Checking if "ha-747000" exists ...
	I0318 11:53:42.908446   13008 start.go:316] joinCluster: &{Name:ha-747000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 Clust
erName:ha-747000 Namespace:default APIServerHAVIP:172.30.143.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.30.135.65 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.30.142.66 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:172.30.129.111 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dn
s:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: D
isableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 11:53:42.909143   13008 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0318 11:53:42.909255   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-747000 ).state
	I0318 11:53:44.918478   13008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:53:44.918658   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:53:44.918749   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-747000 ).networkadapters[0]).ipaddresses[0]
	I0318 11:53:47.325480   13008 main.go:141] libmachine: [stdout =====>] : 172.30.135.65
	
	I0318 11:53:47.325480   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:53:47.335787   13008 sshutil.go:53] new ssh client: &{IP:172.30.135.65 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-747000\id_rsa Username:docker}
	I0318 11:53:47.530228   13008 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0": (4.6209417s)
	I0318 11:53:47.530341   13008 start.go:342] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:172.30.129.111 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 11:53:47.530403   13008 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token aen08i.8flpo1ytk7tnbiuh --discovery-token-ca-cert-hash sha256:0e85bffcabbe1ecc87b47edfa4897ded335ea6d7a4ea5ae94c3523fb207c8760 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-747000-m03 --control-plane --apiserver-advertise-address=172.30.129.111 --apiserver-bind-port=8443"
	I0318 11:54:31.007118   13008 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token aen08i.8flpo1ytk7tnbiuh --discovery-token-ca-cert-hash sha256:0e85bffcabbe1ecc87b47edfa4897ded335ea6d7a4ea5ae94c3523fb207c8760 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-747000-m03 --control-plane --apiserver-advertise-address=172.30.129.111 --apiserver-bind-port=8443": (43.4763885s)
	I0318 11:54:31.007118   13008 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0318 11:54:31.765063   13008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-747000-m03 minikube.k8s.io/updated_at=2024_03_18T11_54_31_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=16bdcbec856cf730004e5bed78d1b7625f13388a minikube.k8s.io/name=ha-747000 minikube.k8s.io/primary=false
	I0318 11:54:31.930820   13008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-747000-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0318 11:54:32.123067   13008 start.go:318] duration metric: took 49.2142517s to joinCluster
	I0318 11:54:32.123130   13008 start.go:234] Will wait 6m0s for node &{Name:m03 IP:172.30.129.111 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 11:54:32.125981   13008 out.go:177] * Verifying Kubernetes components...
	I0318 11:54:32.124017   13008 config.go:182] Loaded profile config "ha-747000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 11:54:32.141976   13008 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 11:54:32.473595   13008 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 11:54:32.497210   13008 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	I0318 11:54:32.497889   13008 kapi.go:59] client config for ha-747000: &rest.Config{Host:"https://172.30.143.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\ha-747000\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\ha-747000\\client.key", CAFile:"C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1e8b2e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0318 11:54:32.497997   13008 kubeadm.go:477] Overriding stale ClientConfig host https://172.30.143.254:8443 with https://172.30.135.65:8443
	I0318 11:54:32.498642   13008 node_ready.go:35] waiting up to 6m0s for node "ha-747000-m03" to be "Ready" ...
	I0318 11:54:32.498869   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/nodes/ha-747000-m03
	I0318 11:54:32.498869   13008 round_trippers.go:469] Request Headers:
	I0318 11:54:32.498927   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:54:32.498927   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:54:32.514135   13008 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0318 11:54:33.011606   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/nodes/ha-747000-m03
	I0318 11:54:33.011606   13008 round_trippers.go:469] Request Headers:
	I0318 11:54:33.011606   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:54:33.011606   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:54:33.017139   13008 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 11:54:33.503217   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/nodes/ha-747000-m03
	I0318 11:54:33.503217   13008 round_trippers.go:469] Request Headers:
	I0318 11:54:33.503217   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:54:33.503217   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:54:33.509209   13008 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 11:54:34.007071   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/nodes/ha-747000-m03
	I0318 11:54:34.007071   13008 round_trippers.go:469] Request Headers:
	I0318 11:54:34.007071   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:54:34.007331   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:54:34.015287   13008 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0318 11:54:34.513214   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/nodes/ha-747000-m03
	I0318 11:54:34.513214   13008 round_trippers.go:469] Request Headers:
	I0318 11:54:34.513214   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:54:34.513214   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:54:34.523353   13008 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0318 11:54:34.523931   13008 node_ready.go:53] node "ha-747000-m03" has status "Ready":"False"
	I0318 11:54:35.003240   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/nodes/ha-747000-m03
	I0318 11:54:35.003270   13008 round_trippers.go:469] Request Headers:
	I0318 11:54:35.003270   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:54:35.003270   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:54:35.007225   13008 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 11:54:35.511195   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/nodes/ha-747000-m03
	I0318 11:54:35.511195   13008 round_trippers.go:469] Request Headers:
	I0318 11:54:35.511195   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:54:35.511195   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:54:35.513976   13008 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 11:54:36.011131   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/nodes/ha-747000-m03
	I0318 11:54:36.011131   13008 round_trippers.go:469] Request Headers:
	I0318 11:54:36.011131   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:54:36.011220   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:54:36.016886   13008 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0318 11:54:36.505219   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/nodes/ha-747000-m03
	I0318 11:54:36.505219   13008 round_trippers.go:469] Request Headers:
	I0318 11:54:36.505219   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:54:36.505219   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:54:36.505808   13008 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0318 11:54:37.013822   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/nodes/ha-747000-m03
	I0318 11:54:37.013822   13008 round_trippers.go:469] Request Headers:
	I0318 11:54:37.013822   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:54:37.013822   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:54:37.016052   13008 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 11:54:37.023175   13008 node_ready.go:53] node "ha-747000-m03" has status "Ready":"False"
	I0318 11:54:37.514949   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/nodes/ha-747000-m03
	I0318 11:54:37.514949   13008 round_trippers.go:469] Request Headers:
	I0318 11:54:37.514949   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:54:37.514949   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:54:37.532465   13008 round_trippers.go:574] Response Status: 200 OK in 17 milliseconds
	I0318 11:54:38.018225   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/nodes/ha-747000-m03
	I0318 11:54:38.018290   13008 round_trippers.go:469] Request Headers:
	I0318 11:54:38.018322   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:54:38.018322   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:54:38.023731   13008 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0318 11:54:38.509434   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/nodes/ha-747000-m03
	I0318 11:54:38.509434   13008 round_trippers.go:469] Request Headers:
	I0318 11:54:38.509434   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:54:38.509434   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:54:38.516601   13008 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0318 11:54:39.013096   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/nodes/ha-747000-m03
	I0318 11:54:39.013096   13008 round_trippers.go:469] Request Headers:
	I0318 11:54:39.013096   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:54:39.013096   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:54:39.020046   13008 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0318 11:54:39.499191   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/nodes/ha-747000-m03
	I0318 11:54:39.499191   13008 round_trippers.go:469] Request Headers:
	I0318 11:54:39.499191   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:54:39.499191   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:54:39.502380   13008 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 11:54:39.504621   13008 node_ready.go:53] node "ha-747000-m03" has status "Ready":"False"
	I0318 11:54:40.003665   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/nodes/ha-747000-m03
	I0318 11:54:40.003665   13008 round_trippers.go:469] Request Headers:
	I0318 11:54:40.003665   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:54:40.003665   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:54:40.009064   13008 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 11:54:40.500415   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/nodes/ha-747000-m03
	I0318 11:54:40.500591   13008 round_trippers.go:469] Request Headers:
	I0318 11:54:40.500591   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:54:40.500643   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:54:40.510756   13008 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0318 11:54:41.003216   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/nodes/ha-747000-m03
	I0318 11:54:41.003216   13008 round_trippers.go:469] Request Headers:
	I0318 11:54:41.003435   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:54:41.003435   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:54:41.004143   13008 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0318 11:54:41.522919   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/nodes/ha-747000-m03
	I0318 11:54:41.522919   13008 round_trippers.go:469] Request Headers:
	I0318 11:54:41.522919   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:54:41.522919   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:54:41.523519   13008 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0318 11:54:41.528435   13008 node_ready.go:53] node "ha-747000-m03" has status "Ready":"False"
	I0318 11:54:42.017612   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/nodes/ha-747000-m03
	I0318 11:54:42.017869   13008 round_trippers.go:469] Request Headers:
	I0318 11:54:42.017869   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:54:42.017869   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:54:42.022079   13008 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 11:54:42.515495   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/nodes/ha-747000-m03
	I0318 11:54:42.515495   13008 round_trippers.go:469] Request Headers:
	I0318 11:54:42.515495   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:54:42.515495   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:54:42.552471   13008 round_trippers.go:574] Response Status: 200 OK in 36 milliseconds
	I0318 11:54:43.020710   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/nodes/ha-747000-m03
	I0318 11:54:43.020710   13008 round_trippers.go:469] Request Headers:
	I0318 11:54:43.020868   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:54:43.021018   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:54:43.026753   13008 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 11:54:43.027757   13008 node_ready.go:49] node "ha-747000-m03" has status "Ready":"True"
	I0318 11:54:43.027757   13008 node_ready.go:38] duration metric: took 10.528975s for node "ha-747000-m03" to be "Ready" ...
	I0318 11:54:43.027757   13008 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 11:54:43.028606   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/namespaces/kube-system/pods
	I0318 11:54:43.028606   13008 round_trippers.go:469] Request Headers:
	I0318 11:54:43.028606   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:54:43.028606   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:54:43.033857   13008 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 11:54:43.046839   13008 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-dhl7r" in "kube-system" namespace to be "Ready" ...
	I0318 11:54:43.046839   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-dhl7r
	I0318 11:54:43.046839   13008 round_trippers.go:469] Request Headers:
	I0318 11:54:43.046839   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:54:43.046839   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:54:43.051163   13008 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 11:54:43.052204   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/nodes/ha-747000
	I0318 11:54:43.052204   13008 round_trippers.go:469] Request Headers:
	I0318 11:54:43.052204   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:54:43.052204   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:54:43.052480   13008 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0318 11:54:43.056603   13008 pod_ready.go:92] pod "coredns-5dd5756b68-dhl7r" in "kube-system" namespace has status "Ready":"True"
	I0318 11:54:43.056603   13008 pod_ready.go:81] duration metric: took 9.7633ms for pod "coredns-5dd5756b68-dhl7r" in "kube-system" namespace to be "Ready" ...
	I0318 11:54:43.056603   13008 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-gm84s" in "kube-system" namespace to be "Ready" ...
	I0318 11:54:43.057138   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-gm84s
	I0318 11:54:43.057138   13008 round_trippers.go:469] Request Headers:
	I0318 11:54:43.057138   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:54:43.057138   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:54:43.057420   13008 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0318 11:54:43.062335   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/nodes/ha-747000
	I0318 11:54:43.062335   13008 round_trippers.go:469] Request Headers:
	I0318 11:54:43.062335   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:54:43.062335   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:54:43.066952   13008 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 11:54:43.067939   13008 pod_ready.go:92] pod "coredns-5dd5756b68-gm84s" in "kube-system" namespace has status "Ready":"True"
	I0318 11:54:43.067939   13008 pod_ready.go:81] duration metric: took 11.3361ms for pod "coredns-5dd5756b68-gm84s" in "kube-system" namespace to be "Ready" ...
	I0318 11:54:43.067939   13008 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-747000" in "kube-system" namespace to be "Ready" ...
	I0318 11:54:43.067939   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/namespaces/kube-system/pods/etcd-ha-747000
	I0318 11:54:43.067939   13008 round_trippers.go:469] Request Headers:
	I0318 11:54:43.067939   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:54:43.067939   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:54:43.071885   13008 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 11:54:43.072699   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/nodes/ha-747000
	I0318 11:54:43.072784   13008 round_trippers.go:469] Request Headers:
	I0318 11:54:43.072784   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:54:43.072784   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:54:43.090799   13008 round_trippers.go:574] Response Status: 200 OK in 18 milliseconds
	I0318 11:54:43.091913   13008 pod_ready.go:92] pod "etcd-ha-747000" in "kube-system" namespace has status "Ready":"True"
	I0318 11:54:43.091913   13008 pod_ready.go:81] duration metric: took 23.9742ms for pod "etcd-ha-747000" in "kube-system" namespace to be "Ready" ...
	I0318 11:54:43.091913   13008 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-747000-m02" in "kube-system" namespace to be "Ready" ...
	I0318 11:54:43.091913   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/namespaces/kube-system/pods/etcd-ha-747000-m02
	I0318 11:54:43.091913   13008 round_trippers.go:469] Request Headers:
	I0318 11:54:43.091913   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:54:43.091913   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:54:43.092554   13008 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0318 11:54:43.097183   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/nodes/ha-747000-m02
	I0318 11:54:43.097247   13008 round_trippers.go:469] Request Headers:
	I0318 11:54:43.097247   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:54:43.097247   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:54:43.097501   13008 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0318 11:54:43.102040   13008 pod_ready.go:92] pod "etcd-ha-747000-m02" in "kube-system" namespace has status "Ready":"True"
	I0318 11:54:43.102149   13008 pod_ready.go:81] duration metric: took 10.1749ms for pod "etcd-ha-747000-m02" in "kube-system" namespace to be "Ready" ...
	I0318 11:54:43.102149   13008 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-747000-m03" in "kube-system" namespace to be "Ready" ...
	I0318 11:54:43.231312   13008 request.go:629] Waited for 129.1625ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.135.65:8443/api/v1/namespaces/kube-system/pods/etcd-ha-747000-m03
	I0318 11:54:43.231566   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/namespaces/kube-system/pods/etcd-ha-747000-m03
	I0318 11:54:43.231654   13008 round_trippers.go:469] Request Headers:
	I0318 11:54:43.231654   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:54:43.231691   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:54:43.232416   13008 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0318 11:54:43.431865   13008 request.go:629] Waited for 194.4979ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.135.65:8443/api/v1/nodes/ha-747000-m03
	I0318 11:54:43.432173   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/nodes/ha-747000-m03
	I0318 11:54:43.432173   13008 round_trippers.go:469] Request Headers:
	I0318 11:54:43.432173   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:54:43.432173   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:54:43.432913   13008 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0318 11:54:43.624336   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/namespaces/kube-system/pods/etcd-ha-747000-m03
	I0318 11:54:43.624510   13008 round_trippers.go:469] Request Headers:
	I0318 11:54:43.624543   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:54:43.624573   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:54:43.636373   13008 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0318 11:54:43.830872   13008 request.go:629] Waited for 193.5891ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.135.65:8443/api/v1/nodes/ha-747000-m03
	I0318 11:54:43.831130   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/nodes/ha-747000-m03
	I0318 11:54:43.831130   13008 round_trippers.go:469] Request Headers:
	I0318 11:54:43.831130   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:54:43.831130   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:54:43.831870   13008 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0318 11:54:44.103679   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/namespaces/kube-system/pods/etcd-ha-747000-m03
	I0318 11:54:44.103679   13008 round_trippers.go:469] Request Headers:
	I0318 11:54:44.103679   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:54:44.103679   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:54:44.103998   13008 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0318 11:54:44.231286   13008 request.go:629] Waited for 121.1658ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.135.65:8443/api/v1/nodes/ha-747000-m03
	I0318 11:54:44.231469   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/nodes/ha-747000-m03
	I0318 11:54:44.231469   13008 round_trippers.go:469] Request Headers:
	I0318 11:54:44.231469   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:54:44.231469   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:54:44.235214   13008 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0318 11:54:44.611004   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/namespaces/kube-system/pods/etcd-ha-747000-m03
	I0318 11:54:44.611004   13008 round_trippers.go:469] Request Headers:
	I0318 11:54:44.611004   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:54:44.611004   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:54:44.611747   13008 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0318 11:54:44.633673   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/nodes/ha-747000-m03
	I0318 11:54:44.633673   13008 round_trippers.go:469] Request Headers:
	I0318 11:54:44.633673   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:54:44.633673   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:54:44.634406   13008 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0318 11:54:45.114017   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/namespaces/kube-system/pods/etcd-ha-747000-m03
	I0318 11:54:45.114017   13008 round_trippers.go:469] Request Headers:
	I0318 11:54:45.114017   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:54:45.114017   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:54:45.114771   13008 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0318 11:54:45.120782   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/nodes/ha-747000-m03
	I0318 11:54:45.120782   13008 round_trippers.go:469] Request Headers:
	I0318 11:54:45.120782   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:54:45.120782   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:54:45.125619   13008 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 11:54:45.126420   13008 pod_ready.go:92] pod "etcd-ha-747000-m03" in "kube-system" namespace has status "Ready":"True"
	I0318 11:54:45.126509   13008 pod_ready.go:81] duration metric: took 2.0243458s for pod "etcd-ha-747000-m03" in "kube-system" namespace to be "Ready" ...
	I0318 11:54:45.126583   13008 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-747000" in "kube-system" namespace to be "Ready" ...
	I0318 11:54:45.225424   13008 request.go:629] Waited for 98.785ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.135.65:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-747000
	I0318 11:54:45.225598   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-747000
	I0318 11:54:45.225666   13008 round_trippers.go:469] Request Headers:
	I0318 11:54:45.225695   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:54:45.225695   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:54:45.226475   13008 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0318 11:54:45.428979   13008 request.go:629] Waited for 197.166ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.135.65:8443/api/v1/nodes/ha-747000
	I0318 11:54:45.429049   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/nodes/ha-747000
	I0318 11:54:45.429049   13008 round_trippers.go:469] Request Headers:
	I0318 11:54:45.429049   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:54:45.429049   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:54:45.429689   13008 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0318 11:54:45.434872   13008 pod_ready.go:92] pod "kube-apiserver-ha-747000" in "kube-system" namespace has status "Ready":"True"
	I0318 11:54:45.434872   13008 pod_ready.go:81] duration metric: took 308.2866ms for pod "kube-apiserver-ha-747000" in "kube-system" namespace to be "Ready" ...
	I0318 11:54:45.434872   13008 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-747000-m02" in "kube-system" namespace to be "Ready" ...
	I0318 11:54:45.625016   13008 request.go:629] Waited for 189.9507ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.135.65:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-747000-m02
	I0318 11:54:45.625111   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-747000-m02
	I0318 11:54:45.625111   13008 round_trippers.go:469] Request Headers:
	I0318 11:54:45.625111   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:54:45.625111   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:54:45.630614   13008 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0318 11:54:45.829238   13008 request.go:629] Waited for 197.4448ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.135.65:8443/api/v1/nodes/ha-747000-m02
	I0318 11:54:45.829496   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/nodes/ha-747000-m02
	I0318 11:54:45.829547   13008 round_trippers.go:469] Request Headers:
	I0318 11:54:45.829581   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:54:45.829581   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:54:45.837199   13008 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0318 11:54:45.837736   13008 pod_ready.go:92] pod "kube-apiserver-ha-747000-m02" in "kube-system" namespace has status "Ready":"True"
	I0318 11:54:45.838274   13008 pod_ready.go:81] duration metric: took 402.8606ms for pod "kube-apiserver-ha-747000-m02" in "kube-system" namespace to be "Ready" ...
	I0318 11:54:45.838274   13008 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-747000-m03" in "kube-system" namespace to be "Ready" ...
	I0318 11:54:46.027608   13008 request.go:629] Waited for 189.1438ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.135.65:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-747000-m03
	I0318 11:54:46.027698   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-747000-m03
	I0318 11:54:46.027805   13008 round_trippers.go:469] Request Headers:
	I0318 11:54:46.027805   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:54:46.027805   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:54:46.028052   13008 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0318 11:54:46.241720   13008 request.go:629] Waited for 207.8005ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.135.65:8443/api/v1/nodes/ha-747000-m03
	I0318 11:54:46.241720   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/nodes/ha-747000-m03
	I0318 11:54:46.242006   13008 round_trippers.go:469] Request Headers:
	I0318 11:54:46.242063   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:54:46.242130   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:54:46.242880   13008 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0318 11:54:46.427374   13008 request.go:629] Waited for 71.2914ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.135.65:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-747000-m03
	I0318 11:54:46.427374   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-747000-m03
	I0318 11:54:46.427615   13008 round_trippers.go:469] Request Headers:
	I0318 11:54:46.427615   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:54:46.427678   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:54:46.432158   13008 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 11:54:46.635779   13008 request.go:629] Waited for 200.8677ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.135.65:8443/api/v1/nodes/ha-747000-m03
	I0318 11:54:46.635894   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/nodes/ha-747000-m03
	I0318 11:54:46.635894   13008 round_trippers.go:469] Request Headers:
	I0318 11:54:46.635894   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:54:46.635894   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:54:46.636212   13008 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0318 11:54:46.839124   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-747000-m03
	I0318 11:54:46.839124   13008 round_trippers.go:469] Request Headers:
	I0318 11:54:46.839124   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:54:46.839124   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:54:46.840049   13008 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0318 11:54:47.027634   13008 request.go:629] Waited for 182.1083ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.135.65:8443/api/v1/nodes/ha-747000-m03
	I0318 11:54:47.027994   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/nodes/ha-747000-m03
	I0318 11:54:47.027994   13008 round_trippers.go:469] Request Headers:
	I0318 11:54:47.027994   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:54:47.027994   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:54:47.037154   13008 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0318 11:54:47.339437   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-747000-m03
	I0318 11:54:47.339539   13008 round_trippers.go:469] Request Headers:
	I0318 11:54:47.339539   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:54:47.339602   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:54:47.339797   13008 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0318 11:54:47.429991   13008 request.go:629] Waited for 85.0472ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.135.65:8443/api/v1/nodes/ha-747000-m03
	I0318 11:54:47.430069   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/nodes/ha-747000-m03
	I0318 11:54:47.430124   13008 round_trippers.go:469] Request Headers:
	I0318 11:54:47.430150   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:54:47.430150   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:54:47.430337   13008 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0318 11:54:47.851381   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-747000-m03
	I0318 11:54:47.851381   13008 round_trippers.go:469] Request Headers:
	I0318 11:54:47.851482   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:54:47.851482   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:54:47.851700   13008 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0318 11:54:47.857375   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/nodes/ha-747000-m03
	I0318 11:54:47.857375   13008 round_trippers.go:469] Request Headers:
	I0318 11:54:47.857375   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:54:47.857375   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:54:47.861983   13008 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 11:54:47.862886   13008 pod_ready.go:92] pod "kube-apiserver-ha-747000-m03" in "kube-system" namespace has status "Ready":"True"
	I0318 11:54:47.862886   13008 pod_ready.go:81] duration metric: took 2.024597s for pod "kube-apiserver-ha-747000-m03" in "kube-system" namespace to be "Ready" ...
	I0318 11:54:47.862886   13008 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-747000" in "kube-system" namespace to be "Ready" ...
	I0318 11:54:48.024553   13008 request.go:629] Waited for 161.4598ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.135.65:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-747000
	I0318 11:54:48.024636   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-747000
	I0318 11:54:48.024636   13008 round_trippers.go:469] Request Headers:
	I0318 11:54:48.024636   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:54:48.024636   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:54:48.029802   13008 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 11:54:48.234550   13008 request.go:629] Waited for 203.7059ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.135.65:8443/api/v1/nodes/ha-747000
	I0318 11:54:48.234733   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/nodes/ha-747000
	I0318 11:54:48.234733   13008 round_trippers.go:469] Request Headers:
	I0318 11:54:48.234810   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:54:48.234810   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:54:48.235491   13008 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0318 11:54:48.246614   13008 pod_ready.go:92] pod "kube-controller-manager-ha-747000" in "kube-system" namespace has status "Ready":"True"
	I0318 11:54:48.246674   13008 pod_ready.go:81] duration metric: took 383.7846ms for pod "kube-controller-manager-ha-747000" in "kube-system" namespace to be "Ready" ...
	I0318 11:54:48.246747   13008 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-747000-m02" in "kube-system" namespace to be "Ready" ...
	I0318 11:54:48.430554   13008 request.go:629] Waited for 183.6822ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.135.65:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-747000-m02
	I0318 11:54:48.430776   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-747000-m02
	I0318 11:54:48.430776   13008 round_trippers.go:469] Request Headers:
	I0318 11:54:48.430914   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:54:48.430914   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:54:48.431764   13008 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0318 11:54:48.634914   13008 request.go:629] Waited for 196.8372ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.135.65:8443/api/v1/nodes/ha-747000-m02
	I0318 11:54:48.635316   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/nodes/ha-747000-m02
	I0318 11:54:48.635316   13008 round_trippers.go:469] Request Headers:
	I0318 11:54:48.635366   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:54:48.635366   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:54:48.635561   13008 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0318 11:54:48.640674   13008 pod_ready.go:92] pod "kube-controller-manager-ha-747000-m02" in "kube-system" namespace has status "Ready":"True"
	I0318 11:54:48.640674   13008 pod_ready.go:81] duration metric: took 393.8819ms for pod "kube-controller-manager-ha-747000-m02" in "kube-system" namespace to be "Ready" ...
	I0318 11:54:48.640674   13008 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-747000-m03" in "kube-system" namespace to be "Ready" ...
	I0318 11:54:48.828434   13008 request.go:629] Waited for 187.5478ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.135.65:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-747000-m03
	I0318 11:54:48.828646   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-747000-m03
	I0318 11:54:48.828717   13008 round_trippers.go:469] Request Headers:
	I0318 11:54:48.828717   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:54:48.828836   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:54:48.829146   13008 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0318 11:54:49.022530   13008 request.go:629] Waited for 186.0354ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.135.65:8443/api/v1/nodes/ha-747000-m03
	I0318 11:54:49.022613   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/nodes/ha-747000-m03
	I0318 11:54:49.022613   13008 round_trippers.go:469] Request Headers:
	I0318 11:54:49.022613   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:54:49.022613   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:54:49.029435   13008 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0318 11:54:49.029435   13008 pod_ready.go:92] pod "kube-controller-manager-ha-747000-m03" in "kube-system" namespace has status "Ready":"True"
	I0318 11:54:49.030479   13008 pod_ready.go:81] duration metric: took 389.802ms for pod "kube-controller-manager-ha-747000-m03" in "kube-system" namespace to be "Ready" ...
	I0318 11:54:49.030479   13008 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-lp986" in "kube-system" namespace to be "Ready" ...
	I0318 11:54:49.229184   13008 request.go:629] Waited for 198.5176ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.135.65:8443/api/v1/namespaces/kube-system/pods/kube-proxy-lp986
	I0318 11:54:49.229388   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/namespaces/kube-system/pods/kube-proxy-lp986
	I0318 11:54:49.229388   13008 round_trippers.go:469] Request Headers:
	I0318 11:54:49.229388   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:54:49.229388   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:54:49.235671   13008 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0318 11:54:49.428316   13008 request.go:629] Waited for 191.5773ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.135.65:8443/api/v1/nodes/ha-747000
	I0318 11:54:49.428497   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/nodes/ha-747000
	I0318 11:54:49.428612   13008 round_trippers.go:469] Request Headers:
	I0318 11:54:49.428612   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:54:49.428612   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:54:49.429247   13008 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0318 11:54:49.434530   13008 pod_ready.go:92] pod "kube-proxy-lp986" in "kube-system" namespace has status "Ready":"True"
	I0318 11:54:49.434530   13008 pod_ready.go:81] duration metric: took 404.0483ms for pod "kube-proxy-lp986" in "kube-system" namespace to be "Ready" ...
	I0318 11:54:49.434609   13008 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-njpzx" in "kube-system" namespace to be "Ready" ...
	I0318 11:54:49.623221   13008 request.go:629] Waited for 188.3488ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.135.65:8443/api/v1/namespaces/kube-system/pods/kube-proxy-njpzx
	I0318 11:54:49.623410   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/namespaces/kube-system/pods/kube-proxy-njpzx
	I0318 11:54:49.623410   13008 round_trippers.go:469] Request Headers:
	I0318 11:54:49.623410   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:54:49.623410   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:54:49.628534   13008 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 11:54:49.833923   13008 request.go:629] Waited for 203.2399ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.135.65:8443/api/v1/nodes/ha-747000-m03
	I0318 11:54:49.834091   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/nodes/ha-747000-m03
	I0318 11:54:49.834243   13008 round_trippers.go:469] Request Headers:
	I0318 11:54:49.834243   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:54:49.834243   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:54:49.834455   13008 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0318 11:54:49.839013   13008 pod_ready.go:92] pod "kube-proxy-njpzx" in "kube-system" namespace has status "Ready":"True"
	I0318 11:54:49.839013   13008 pod_ready.go:81] duration metric: took 404.4015ms for pod "kube-proxy-njpzx" in "kube-system" namespace to be "Ready" ...
	I0318 11:54:49.839552   13008 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-zzg5q" in "kube-system" namespace to be "Ready" ...
	I0318 11:54:50.028405   13008 request.go:629] Waited for 188.5068ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.135.65:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zzg5q
	I0318 11:54:50.028510   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zzg5q
	I0318 11:54:50.028510   13008 round_trippers.go:469] Request Headers:
	I0318 11:54:50.028510   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:54:50.028510   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:54:50.029035   13008 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0318 11:54:50.226949   13008 request.go:629] Waited for 191.8899ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.135.65:8443/api/v1/nodes/ha-747000-m02
	I0318 11:54:50.227463   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/nodes/ha-747000-m02
	I0318 11:54:50.227463   13008 round_trippers.go:469] Request Headers:
	I0318 11:54:50.227463   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:54:50.227463   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:54:50.233480   13008 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 11:54:50.234253   13008 pod_ready.go:92] pod "kube-proxy-zzg5q" in "kube-system" namespace has status "Ready":"True"
	I0318 11:54:50.234253   13008 pod_ready.go:81] duration metric: took 394.6973ms for pod "kube-proxy-zzg5q" in "kube-system" namespace to be "Ready" ...
	I0318 11:54:50.234253   13008 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-747000" in "kube-system" namespace to be "Ready" ...
	I0318 11:54:50.429273   13008 request.go:629] Waited for 195.0194ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.135.65:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-747000
	I0318 11:54:50.429497   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-747000
	I0318 11:54:50.429497   13008 round_trippers.go:469] Request Headers:
	I0318 11:54:50.429497   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:54:50.429571   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:54:50.434623   13008 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 11:54:50.633159   13008 request.go:629] Waited for 197.8852ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.135.65:8443/api/v1/nodes/ha-747000
	I0318 11:54:50.633159   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/nodes/ha-747000
	I0318 11:54:50.633159   13008 round_trippers.go:469] Request Headers:
	I0318 11:54:50.633159   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:54:50.633159   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:54:50.635090   13008 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0318 11:54:50.639392   13008 pod_ready.go:92] pod "kube-scheduler-ha-747000" in "kube-system" namespace has status "Ready":"True"
	I0318 11:54:50.639460   13008 pod_ready.go:81] duration metric: took 405.204ms for pod "kube-scheduler-ha-747000" in "kube-system" namespace to be "Ready" ...
	I0318 11:54:50.639460   13008 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-747000-m02" in "kube-system" namespace to be "Ready" ...
	I0318 11:54:50.832363   13008 request.go:629] Waited for 192.4963ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.135.65:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-747000-m02
	I0318 11:54:50.832592   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-747000-m02
	I0318 11:54:50.832630   13008 round_trippers.go:469] Request Headers:
	I0318 11:54:50.832630   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:54:50.832666   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:54:50.835450   13008 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 11:54:51.028759   13008 request.go:629] Waited for 190.9432ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.135.65:8443/api/v1/nodes/ha-747000-m02
	I0318 11:54:51.029051   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/nodes/ha-747000-m02
	I0318 11:54:51.029159   13008 round_trippers.go:469] Request Headers:
	I0318 11:54:51.029159   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:54:51.029159   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:54:51.029915   13008 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0318 11:54:51.035684   13008 pod_ready.go:92] pod "kube-scheduler-ha-747000-m02" in "kube-system" namespace has status "Ready":"True"
	I0318 11:54:51.035737   13008 pod_ready.go:81] duration metric: took 396.2748ms for pod "kube-scheduler-ha-747000-m02" in "kube-system" namespace to be "Ready" ...
	I0318 11:54:51.035737   13008 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-747000-m03" in "kube-system" namespace to be "Ready" ...
	I0318 11:54:51.232476   13008 request.go:629] Waited for 196.6628ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.135.65:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-747000-m03
	I0318 11:54:51.232861   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-747000-m03
	I0318 11:54:51.232861   13008 round_trippers.go:469] Request Headers:
	I0318 11:54:51.232861   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:54:51.232861   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:54:51.233602   13008 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0318 11:54:51.422228   13008 request.go:629] Waited for 182.7479ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.135.65:8443/api/v1/nodes/ha-747000-m03
	I0318 11:54:51.422367   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/nodes/ha-747000-m03
	I0318 11:54:51.422367   13008 round_trippers.go:469] Request Headers:
	I0318 11:54:51.422418   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:54:51.422418   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:54:51.429432   13008 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0318 11:54:51.429954   13008 pod_ready.go:92] pod "kube-scheduler-ha-747000-m03" in "kube-system" namespace has status "Ready":"True"
	I0318 11:54:51.429954   13008 pod_ready.go:81] duration metric: took 394.2142ms for pod "kube-scheduler-ha-747000-m03" in "kube-system" namespace to be "Ready" ...
	I0318 11:54:51.429954   13008 pod_ready.go:38] duration metric: took 8.402135s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 11:54:51.429954   13008 api_server.go:52] waiting for apiserver process to appear ...
	I0318 11:54:51.442874   13008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 11:54:51.469704   13008 api_server.go:72] duration metric: took 19.3464309s to wait for apiserver process to appear ...
	I0318 11:54:51.469704   13008 api_server.go:88] waiting for apiserver healthz status ...
	I0318 11:54:51.469849   13008 api_server.go:253] Checking apiserver healthz at https://172.30.135.65:8443/healthz ...
	I0318 11:54:51.477537   13008 api_server.go:279] https://172.30.135.65:8443/healthz returned 200:
	ok
	I0318 11:54:51.480560   13008 round_trippers.go:463] GET https://172.30.135.65:8443/version
	I0318 11:54:51.480560   13008 round_trippers.go:469] Request Headers:
	I0318 11:54:51.480560   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:54:51.480560   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:54:51.481253   13008 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0318 11:54:51.482581   13008 api_server.go:141] control plane version: v1.28.4
	I0318 11:54:51.482617   13008 api_server.go:131] duration metric: took 12.7676ms to wait for apiserver health ...
	I0318 11:54:51.482664   13008 system_pods.go:43] waiting for kube-system pods to appear ...
	I0318 11:54:51.632100   13008 request.go:629] Waited for 148.9934ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.135.65:8443/api/v1/namespaces/kube-system/pods
	I0318 11:54:51.632184   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/namespaces/kube-system/pods
	I0318 11:54:51.632184   13008 round_trippers.go:469] Request Headers:
	I0318 11:54:51.632184   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:54:51.632184   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:54:51.637067   13008 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 11:54:51.651102   13008 system_pods.go:59] 24 kube-system pods found
	I0318 11:54:51.651102   13008 system_pods.go:61] "coredns-5dd5756b68-dhl7r" [26fc5ceb-a1e3-46f5-b148-40f5312b8628] Running
	I0318 11:54:51.651102   13008 system_pods.go:61] "coredns-5dd5756b68-gm84s" [227e2616-d98e-4ca1-aa66-7e2ed6096d33] Running
	I0318 11:54:51.651102   13008 system_pods.go:61] "etcd-ha-747000" [ac7df0ab-4112-40ce-9646-08b806025178] Running
	I0318 11:54:51.651102   13008 system_pods.go:61] "etcd-ha-747000-m02" [0fb29c85-0d84-4aa9-8f40-041b128f3b62] Running
	I0318 11:54:51.651102   13008 system_pods.go:61] "etcd-ha-747000-m03" [71309fb0-67e2-4098-a267-e1677a6b6a5b] Running
	I0318 11:54:51.651102   13008 system_pods.go:61] "kindnet-82v6x" [7f008b78-2eb1-434c-9867-ccd5216a7ed5] Running
	I0318 11:54:51.651102   13008 system_pods.go:61] "kindnet-czdhw" [b8b10f3a-4cc3-43d0-a192-438de10116b1] Running
	I0318 11:54:51.651102   13008 system_pods.go:61] "kindnet-zt7pd" [76ca473c-8611-410d-9f13-52d1bfcec7f6] Running
	I0318 11:54:51.651102   13008 system_pods.go:61] "kube-apiserver-ha-747000" [acfdbb0b-15ce-407b-a8c9-bab5477bb1ae] Running
	I0318 11:54:51.651102   13008 system_pods.go:61] "kube-apiserver-ha-747000-m02" [f3b62c80-17e7-435a-9ffd-a2b727a702cd] Running
	I0318 11:54:51.651102   13008 system_pods.go:61] "kube-apiserver-ha-747000-m03" [88bb5439-b862-4458-b098-b91a6ea2b487] Running
	I0318 11:54:51.651102   13008 system_pods.go:61] "kube-controller-manager-ha-747000" [7a40710c-8acc-4457-93ba-a31a4073be21] Running
	I0318 11:54:51.651102   13008 system_pods.go:61] "kube-controller-manager-ha-747000-m02" [e191ed3e-7566-4f60-baa0-957f4d7522cc] Running
	I0318 11:54:51.651102   13008 system_pods.go:61] "kube-controller-manager-ha-747000-m03" [07072b27-c1da-40fb-9391-90b7f1513dc8] Running
	I0318 11:54:51.651102   13008 system_pods.go:61] "kube-proxy-lp986" [8567319d-191a-47b4-a5b6-f615907650d3] Running
	I0318 11:54:51.651102   13008 system_pods.go:61] "kube-proxy-njpzx" [7787e24a-9c16-408d-9c81-c8d15d5d66d1] Running
	I0318 11:54:51.651102   13008 system_pods.go:61] "kube-proxy-zzg5q" [c9403c4f-57d4-4a1c-b629-b8c3f53db9c9] Running
	I0318 11:54:51.651102   13008 system_pods.go:61] "kube-scheduler-ha-747000" [fbfffede-3da2-45f0-abbb-28a58072068b] Running
	I0318 11:54:51.651102   13008 system_pods.go:61] "kube-scheduler-ha-747000-m02" [cdc9ff95-abdf-4913-9e30-dbc8f2785f60] Running
	I0318 11:54:51.651102   13008 system_pods.go:61] "kube-scheduler-ha-747000-m03" [bececf64-6a59-4c26-b1d4-98f6c4fe1cdc] Running
	I0318 11:54:51.651102   13008 system_pods.go:61] "kube-vip-ha-747000" [3e440bfd-5dc2-4981-a26d-c0ae17c250d5] Running
	I0318 11:54:51.651102   13008 system_pods.go:61] "kube-vip-ha-747000-m02" [992c820a-4b58-4baa-b6e4-f0c19a65ad85] Running
	I0318 11:54:51.651102   13008 system_pods.go:61] "kube-vip-ha-747000-m03" [e8f91767-6cfb-4de1-a362-c29c4560f9d1] Running
	I0318 11:54:51.651102   13008 system_pods.go:61] "storage-provisioner" [de0ac48e-e3cb-430d-8dc6-6e52d350a38e] Running
	I0318 11:54:51.651102   13008 system_pods.go:74] duration metric: took 168.437ms to wait for pod list to return data ...
	I0318 11:54:51.651102   13008 default_sa.go:34] waiting for default service account to be created ...
	I0318 11:54:51.836017   13008 request.go:629] Waited for 184.6178ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.135.65:8443/api/v1/namespaces/default/serviceaccounts
	I0318 11:54:51.836017   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/namespaces/default/serviceaccounts
	I0318 11:54:51.836017   13008 round_trippers.go:469] Request Headers:
	I0318 11:54:51.836017   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:54:51.836017   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:54:51.836652   13008 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0318 11:54:51.841506   13008 default_sa.go:45] found service account: "default"
	I0318 11:54:51.841506   13008 default_sa.go:55] duration metric: took 190.4028ms for default service account to be created ...
	I0318 11:54:51.841506   13008 system_pods.go:116] waiting for k8s-apps to be running ...
	I0318 11:54:52.029994   13008 request.go:629] Waited for 188.4861ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.135.65:8443/api/v1/namespaces/kube-system/pods
	I0318 11:54:52.030136   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/namespaces/kube-system/pods
	I0318 11:54:52.030136   13008 round_trippers.go:469] Request Headers:
	I0318 11:54:52.030136   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:54:52.030136   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:54:52.038250   13008 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0318 11:54:52.052253   13008 system_pods.go:86] 24 kube-system pods found
	I0318 11:54:52.052253   13008 system_pods.go:89] "coredns-5dd5756b68-dhl7r" [26fc5ceb-a1e3-46f5-b148-40f5312b8628] Running
	I0318 11:54:52.052253   13008 system_pods.go:89] "coredns-5dd5756b68-gm84s" [227e2616-d98e-4ca1-aa66-7e2ed6096d33] Running
	I0318 11:54:52.052253   13008 system_pods.go:89] "etcd-ha-747000" [ac7df0ab-4112-40ce-9646-08b806025178] Running
	I0318 11:54:52.052253   13008 system_pods.go:89] "etcd-ha-747000-m02" [0fb29c85-0d84-4aa9-8f40-041b128f3b62] Running
	I0318 11:54:52.052253   13008 system_pods.go:89] "etcd-ha-747000-m03" [71309fb0-67e2-4098-a267-e1677a6b6a5b] Running
	I0318 11:54:52.052253   13008 system_pods.go:89] "kindnet-82v6x" [7f008b78-2eb1-434c-9867-ccd5216a7ed5] Running
	I0318 11:54:52.052253   13008 system_pods.go:89] "kindnet-czdhw" [b8b10f3a-4cc3-43d0-a192-438de10116b1] Running
	I0318 11:54:52.052781   13008 system_pods.go:89] "kindnet-zt7pd" [76ca473c-8611-410d-9f13-52d1bfcec7f6] Running
	I0318 11:54:52.052980   13008 system_pods.go:89] "kube-apiserver-ha-747000" [acfdbb0b-15ce-407b-a8c9-bab5477bb1ae] Running
	I0318 11:54:52.053790   13008 system_pods.go:89] "kube-apiserver-ha-747000-m02" [f3b62c80-17e7-435a-9ffd-a2b727a702cd] Running
	I0318 11:54:52.053790   13008 system_pods.go:89] "kube-apiserver-ha-747000-m03" [88bb5439-b862-4458-b098-b91a6ea2b487] Running
	I0318 11:54:52.053790   13008 system_pods.go:89] "kube-controller-manager-ha-747000" [7a40710c-8acc-4457-93ba-a31a4073be21] Running
	I0318 11:54:52.053790   13008 system_pods.go:89] "kube-controller-manager-ha-747000-m02" [e191ed3e-7566-4f60-baa0-957f4d7522cc] Running
	I0318 11:54:52.053790   13008 system_pods.go:89] "kube-controller-manager-ha-747000-m03" [07072b27-c1da-40fb-9391-90b7f1513dc8] Running
	I0318 11:54:52.053790   13008 system_pods.go:89] "kube-proxy-lp986" [8567319d-191a-47b4-a5b6-f615907650d3] Running
	I0318 11:54:52.053790   13008 system_pods.go:89] "kube-proxy-njpzx" [7787e24a-9c16-408d-9c81-c8d15d5d66d1] Running
	I0318 11:54:52.053936   13008 system_pods.go:89] "kube-proxy-zzg5q" [c9403c4f-57d4-4a1c-b629-b8c3f53db9c9] Running
	I0318 11:54:52.053936   13008 system_pods.go:89] "kube-scheduler-ha-747000" [fbfffede-3da2-45f0-abbb-28a58072068b] Running
	I0318 11:54:52.053936   13008 system_pods.go:89] "kube-scheduler-ha-747000-m02" [cdc9ff95-abdf-4913-9e30-dbc8f2785f60] Running
	I0318 11:54:52.053936   13008 system_pods.go:89] "kube-scheduler-ha-747000-m03" [bececf64-6a59-4c26-b1d4-98f6c4fe1cdc] Running
	I0318 11:54:52.053936   13008 system_pods.go:89] "kube-vip-ha-747000" [3e440bfd-5dc2-4981-a26d-c0ae17c250d5] Running
	I0318 11:54:52.053936   13008 system_pods.go:89] "kube-vip-ha-747000-m02" [992c820a-4b58-4baa-b6e4-f0c19a65ad85] Running
	I0318 11:54:52.053936   13008 system_pods.go:89] "kube-vip-ha-747000-m03" [e8f91767-6cfb-4de1-a362-c29c4560f9d1] Running
	I0318 11:54:52.053936   13008 system_pods.go:89] "storage-provisioner" [de0ac48e-e3cb-430d-8dc6-6e52d350a38e] Running
	I0318 11:54:52.053936   13008 system_pods.go:126] duration metric: took 212.4278ms to wait for k8s-apps to be running ...
	I0318 11:54:52.053936   13008 system_svc.go:44] waiting for kubelet service to be running ....
	I0318 11:54:52.062979   13008 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 11:54:52.090649   13008 system_svc.go:56] duration metric: took 36.7133ms WaitForService to wait for kubelet
	I0318 11:54:52.090649   13008 kubeadm.go:576] duration metric: took 19.9673718s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 11:54:52.090649   13008 node_conditions.go:102] verifying NodePressure condition ...
	I0318 11:54:52.236107   13008 request.go:629] Waited for 145.1601ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.135.65:8443/api/v1/nodes
	I0318 11:54:52.236107   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/nodes
	I0318 11:54:52.236107   13008 round_trippers.go:469] Request Headers:
	I0318 11:54:52.236107   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:54:52.236107   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:54:52.242653   13008 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0318 11:54:52.244307   13008 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0318 11:54:52.244385   13008 node_conditions.go:123] node cpu capacity is 2
	I0318 11:54:52.244385   13008 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0318 11:54:52.244385   13008 node_conditions.go:123] node cpu capacity is 2
	I0318 11:54:52.244385   13008 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0318 11:54:52.244385   13008 node_conditions.go:123] node cpu capacity is 2
	I0318 11:54:52.244385   13008 node_conditions.go:105] duration metric: took 153.7341ms to run NodePressure ...
	I0318 11:54:52.244385   13008 start.go:240] waiting for startup goroutines ...
	I0318 11:54:52.244492   13008 start.go:254] writing updated cluster config ...
	I0318 11:54:52.254598   13008 ssh_runner.go:195] Run: rm -f paused
	I0318 11:54:52.396312   13008 start.go:600] kubectl: 1.29.3, cluster: 1.28.4 (minor skew: 1)
	I0318 11:54:52.401836   13008 out.go:177] * Done! kubectl is now configured to use "ha-747000" cluster and "default" namespace by default
	
	
	==> Docker <==
	Mar 18 11:50:45 ha-747000 dockerd[1337]: time="2024-03-18T11:50:45.454359498Z" level=info msg="shim disconnected" id=18bee778eecc25ec10007cdcdcdb29e4496df2af61bd2c19c79abfc43a8dbab2 namespace=moby
	Mar 18 11:50:45 ha-747000 dockerd[1337]: time="2024-03-18T11:50:45.454447798Z" level=warning msg="cleaning up after shim disconnected" id=18bee778eecc25ec10007cdcdcdb29e4496df2af61bd2c19c79abfc43a8dbab2 namespace=moby
	Mar 18 11:50:45 ha-747000 dockerd[1337]: time="2024-03-18T11:50:45.454478098Z" level=info msg="cleaning up dead shim" namespace=moby
	Mar 18 11:50:45 ha-747000 dockerd[1331]: time="2024-03-18T11:50:45.494802865Z" level=info msg="ignoring event" container=7dcbde4d0d9af9fc89f21a596c616e25b1e68b4ad873d778afd7239239691e53 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 18 11:50:45 ha-747000 dockerd[1337]: time="2024-03-18T11:50:45.502073577Z" level=info msg="shim disconnected" id=7dcbde4d0d9af9fc89f21a596c616e25b1e68b4ad873d778afd7239239691e53 namespace=moby
	Mar 18 11:50:45 ha-747000 dockerd[1337]: time="2024-03-18T11:50:45.502148377Z" level=warning msg="cleaning up after shim disconnected" id=7dcbde4d0d9af9fc89f21a596c616e25b1e68b4ad873d778afd7239239691e53 namespace=moby
	Mar 18 11:50:45 ha-747000 dockerd[1337]: time="2024-03-18T11:50:45.502160177Z" level=info msg="cleaning up dead shim" namespace=moby
	Mar 18 11:50:46 ha-747000 dockerd[1337]: time="2024-03-18T11:50:46.546426809Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 18 11:50:46 ha-747000 dockerd[1337]: time="2024-03-18T11:50:46.547127411Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 18 11:50:46 ha-747000 dockerd[1337]: time="2024-03-18T11:50:46.547285511Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 18 11:50:46 ha-747000 dockerd[1337]: time="2024-03-18T11:50:46.547709111Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 18 11:50:46 ha-747000 dockerd[1337]: time="2024-03-18T11:50:46.579932965Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 18 11:50:46 ha-747000 dockerd[1337]: time="2024-03-18T11:50:46.580076865Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 18 11:50:46 ha-747000 dockerd[1337]: time="2024-03-18T11:50:46.580244566Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 18 11:50:46 ha-747000 dockerd[1337]: time="2024-03-18T11:50:46.580455766Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 18 11:55:30 ha-747000 dockerd[1337]: time="2024-03-18T11:55:30.020047218Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 18 11:55:30 ha-747000 dockerd[1337]: time="2024-03-18T11:55:30.020121919Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 18 11:55:30 ha-747000 dockerd[1337]: time="2024-03-18T11:55:30.020139019Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 18 11:55:30 ha-747000 dockerd[1337]: time="2024-03-18T11:55:30.020247021Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 18 11:55:30 ha-747000 cri-dockerd[1225]: time="2024-03-18T11:55:30Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/280c7276a55904857afc7a24b65f72bde51b05280c8d6a2201ccac07f937ec74/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Mar 18 11:55:31 ha-747000 cri-dockerd[1225]: time="2024-03-18T11:55:31Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	Mar 18 11:55:31 ha-747000 dockerd[1337]: time="2024-03-18T11:55:31.510729238Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 18 11:55:31 ha-747000 dockerd[1337]: time="2024-03-18T11:55:31.511029839Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 18 11:55:31 ha-747000 dockerd[1337]: time="2024-03-18T11:55:31.511051640Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 18 11:55:31 ha-747000 dockerd[1337]: time="2024-03-18T11:55:31.511194340Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	a6e52cd033bfb       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   About a minute ago   Running             busybox                   0                   280c7276a5590       busybox-5b5d89c9d6-qvfgv
	887a467712d1e       22aaebb38f4a9                                                                                         5 minutes ago        Running             kube-vip                  1                   8ad9f7d4fa0ca       kube-vip-ha-747000
	e31c573c030a3       6e38f40d628db                                                                                         5 minutes ago        Running             storage-provisioner       1                   1d4fddfcca420       storage-provisioner
	18e55de0cc622       ead0a4a53df89                                                                                         9 minutes ago        Running             coredns                   0                   6407e6bfc72be       coredns-5dd5756b68-gm84s
	7589c531bb385       ead0a4a53df89                                                                                         9 minutes ago        Running             coredns                   0                   4bcfc9c3e4160       coredns-5dd5756b68-dhl7r
	18bee778eecc2       6e38f40d628db                                                                                         9 minutes ago        Exited              storage-provisioner       0                   1d4fddfcca420       storage-provisioner
	822e0ce0b2821       kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988              9 minutes ago        Running             kindnet-cni               0                   c7be2c9084768       kindnet-zt7pd
	c1abc9fd4e5d4       83f6cc407eed8                                                                                         9 minutes ago        Running             kube-proxy                0                   840eb62f8951a       kube-proxy-lp986
	7dcbde4d0d9af       ghcr.io/kube-vip/kube-vip@sha256:82698885b3b5f926cd940b7000549f3d43850cb6565a708162900c1475a83016     9 minutes ago        Exited              kube-vip                  0                   8ad9f7d4fa0ca       kube-vip-ha-747000
	ca099f2ea7c45       e3db313c6dbc0                                                                                         9 minutes ago        Running             kube-scheduler            0                   fe2e158c46033       kube-scheduler-ha-747000
	74283b1900542       73deb9a3f7025                                                                                         9 minutes ago        Running             etcd                      0                   012c4fffb8fe5       etcd-ha-747000
	4aadeddfd7048       d058aa5ab969c                                                                                         9 minutes ago        Running             kube-controller-manager   0                   7ef6371bab55e       kube-controller-manager-ha-747000
	baa1747a03bf1       7fe0e6f37db33                                                                                         9 minutes ago        Running             kube-apiserver            0                   1c0df59a4be65       kube-apiserver-ha-747000
	
	
	==> coredns [18e55de0cc62] <==
	[INFO] 10.244.2.2:53469 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd 60 0.001489907s
	[INFO] 10.244.2.2:39012 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 1.028446013s
	[INFO] 10.244.0.4:40182 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000214601s
	[INFO] 10.244.0.4:59798 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 140 0.0001281s
	[INFO] 10.244.0.4:33527 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd 60 0.0000537s
	[INFO] 10.244.1.2:35039 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.039480666s
	[INFO] 10.244.1.2:55177 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000396802s
	[INFO] 10.244.2.2:50563 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001105s
	[INFO] 10.244.2.2:33724 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000217601s
	[INFO] 10.244.2.2:56024 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0000881s
	[INFO] 10.244.2.2:52705 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000135401s
	[INFO] 10.244.0.4:43180 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000197501s
	[INFO] 10.244.0.4:36535 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.0001286s
	[INFO] 10.244.0.4:50426 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000139101s
	[INFO] 10.244.0.4:53622 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000070601s
	[INFO] 10.244.1.2:51913 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0000728s
	[INFO] 10.244.2.2:34323 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000149601s
	[INFO] 10.244.2.2:53501 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000125901s
	[INFO] 10.244.0.4:48286 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0001447s
	[INFO] 10.244.0.4:53461 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0000669s
	[INFO] 10.244.1.2:49508 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000203701s
	[INFO] 10.244.1.2:60838 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000324002s
	[INFO] 10.244.2.2:39711 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000256301s
	[INFO] 10.244.0.4:35515 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001131s
	[INFO] 10.244.0.4:32994 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000055401s
	
	
	==> coredns [7589c531bb38] <==
	[INFO] 10.244.1.2:45379 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000265501s
	[INFO] 10.244.1.2:49211 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0001102s
	[INFO] 10.244.1.2:57661 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0001046s
	[INFO] 10.244.2.2:57466 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.012422452s
	[INFO] 10.244.2.2:38205 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000252401s
	[INFO] 10.244.2.2:51817 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000052s
	[INFO] 10.244.2.2:56865 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000109101s
	[INFO] 10.244.0.4:42080 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00008s
	[INFO] 10.244.0.4:43666 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000077s
	[INFO] 10.244.0.4:39236 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0000482s
	[INFO] 10.244.0.4:49871 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0001124s
	[INFO] 10.244.1.2:42135 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000196901s
	[INFO] 10.244.1.2:56767 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000110901s
	[INFO] 10.244.1.2:57401 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000464802s
	[INFO] 10.244.2.2:40875 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000118201s
	[INFO] 10.244.2.2:38615 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000194901s
	[INFO] 10.244.0.4:58517 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001862s
	[INFO] 10.244.0.4:44049 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000187801s
	[INFO] 10.244.1.2:36544 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000198501s
	[INFO] 10.244.1.2:48948 - 5 "PTR IN 1.128.30.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000175501s
	[INFO] 10.244.2.2:57433 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000162501s
	[INFO] 10.244.2.2:34928 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000357502s
	[INFO] 10.244.2.2:34698 - 5 "PTR IN 1.128.30.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000187101s
	[INFO] 10.244.0.4:37314 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000353002s
	[INFO] 10.244.0.4:37416 - 5 "PTR IN 1.128.30.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.0000521s
	
	
	==> describe nodes <==
	Name:               ha-747000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-747000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=16bdcbec856cf730004e5bed78d1b7625f13388a
	                    minikube.k8s.io/name=ha-747000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_18T11_46_59_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 18 Mar 2024 11:46:56 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-747000
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 18 Mar 2024 11:56:29 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 18 Mar 2024 11:56:00 +0000   Mon, 18 Mar 2024 11:46:56 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 18 Mar 2024 11:56:00 +0000   Mon, 18 Mar 2024 11:46:56 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 18 Mar 2024 11:56:00 +0000   Mon, 18 Mar 2024 11:46:56 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 18 Mar 2024 11:56:00 +0000   Mon, 18 Mar 2024 11:47:22 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.30.135.65
	  Hostname:    ha-747000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164268Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164268Ki
	  pods:               110
	System Info:
	  Machine ID:                 ce08ddef158b4d849587d82dc7581986
	  System UUID:                54549e36-468b-da42-bfd5-c574fa96660d
	  Boot ID:                    5a3e26a4-5428-4a7d-b86d-7b15eb0c4de5
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://25.0.4
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-qvfgv             0 (0%)        0 (0%)      0 (0%)           0 (0%)         66s
	  kube-system                 coredns-5dd5756b68-dhl7r             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m22s
	  kube-system                 coredns-5dd5756b68-gm84s             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m22s
	  kube-system                 etcd-ha-747000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m35s
	  kube-system                 kindnet-zt7pd                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      9m22s
	  kube-system                 kube-apiserver-ha-747000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m35s
	  kube-system                 kube-controller-manager-ha-747000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m35s
	  kube-system                 kube-proxy-lp986                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m22s
	  kube-system                 kube-scheduler-ha-747000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m35s
	  kube-system                 kube-vip-ha-747000                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m35s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m14s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)    100m (5%)
	  memory             290Mi (13%)   390Mi (18%)
	  ephemeral-storage  0 (0%)        0 (0%)
	  hugepages-2Mi      0 (0%)        0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m20s                  kube-proxy       
	  Normal  Starting                 9m44s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  9m44s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     9m43s (x7 over 9m44s)  kubelet          Node ha-747000 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  9m43s (x8 over 9m44s)  kubelet          Node ha-747000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m43s (x8 over 9m44s)  kubelet          Node ha-747000 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 9m35s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  9m35s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m34s                  kubelet          Node ha-747000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m34s                  kubelet          Node ha-747000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m34s                  kubelet          Node ha-747000 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m23s                  node-controller  Node ha-747000 event: Registered Node ha-747000 in Controller
	  Normal  NodeReady                9m11s                  kubelet          Node ha-747000 status is now: NodeReady
	  Normal  RegisteredNode           5m29s                  node-controller  Node ha-747000 event: Registered Node ha-747000 in Controller
	  Normal  RegisteredNode           107s                   node-controller  Node ha-747000 event: Registered Node ha-747000 in Controller
	
	
	Name:               ha-747000-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-747000-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=16bdcbec856cf730004e5bed78d1b7625f13388a
	                    minikube.k8s.io/name=ha-747000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_03_18T11_50_52_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 18 Mar 2024 11:50:34 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-747000-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 18 Mar 2024 11:56:29 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 18 Mar 2024 11:55:37 +0000   Mon, 18 Mar 2024 11:50:34 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 18 Mar 2024 11:55:37 +0000   Mon, 18 Mar 2024 11:50:34 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 18 Mar 2024 11:55:37 +0000   Mon, 18 Mar 2024 11:50:34 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 18 Mar 2024 11:55:37 +0000   Mon, 18 Mar 2024 11:51:05 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.30.142.66
	  Hostname:    ha-747000-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164268Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164268Ki
	  pods:               110
	System Info:
	  Machine ID:                 cc53cac9e9e543ff962af5ea480cd266
	  System UUID:                141d4ba1-8fb9-834e-80ae-b06d45ab9958
	  Boot ID:                    9fd7c119-1d1f-4e05-950a-3108b91a27d5
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://25.0.4
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-bfx2x                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         66s
	  kube-system                 etcd-ha-747000-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m58s
	  kube-system                 kindnet-czdhw                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m59s
	  kube-system                 kube-apiserver-ha-747000-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m57s
	  kube-system                 kube-controller-manager-ha-747000-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m57s
	  kube-system                 kube-proxy-zzg5q                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m59s
	  kube-system                 kube-scheduler-ha-747000-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m57s
	  kube-system                 kube-vip-ha-747000-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m41s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age    From             Message
	  ----    ------          ----   ----             -------
	  Normal  Starting        5m38s  kube-proxy       
	  Normal  RegisteredNode  5m58s  node-controller  Node ha-747000-m02 event: Registered Node ha-747000-m02 in Controller
	  Normal  RegisteredNode  5m29s  node-controller  Node ha-747000-m02 event: Registered Node ha-747000-m02 in Controller
	  Normal  RegisteredNode  107s   node-controller  Node ha-747000-m02 event: Registered Node ha-747000-m02 in Controller
	
	
	Name:               ha-747000-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-747000-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=16bdcbec856cf730004e5bed78d1b7625f13388a
	                    minikube.k8s.io/name=ha-747000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_03_18T11_54_31_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 18 Mar 2024 11:54:27 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-747000-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 18 Mar 2024 11:56:29 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 18 Mar 2024 11:55:58 +0000   Mon, 18 Mar 2024 11:54:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 18 Mar 2024 11:55:58 +0000   Mon, 18 Mar 2024 11:54:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 18 Mar 2024 11:55:58 +0000   Mon, 18 Mar 2024 11:54:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 18 Mar 2024 11:55:58 +0000   Mon, 18 Mar 2024 11:54:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.30.129.111
	  Hostname:    ha-747000-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164268Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164268Ki
	  pods:               110
	System Info:
	  Machine ID:                 ee0a2b1c16414108b8b6a800e4aecfaf
	  System UUID:                701517aa-f6cd-ee4b-9a77-af927eb87fa0
	  Boot ID:                    05d6600a-6be2-4a25-9a1f-b167d44b23a4
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://25.0.4
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-ln6sd                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         66s
	  kube-system                 etcd-ha-747000-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         2m6s
	  kube-system                 kindnet-82v6x                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      2m6s
	  kube-system                 kube-apiserver-ha-747000-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m4s
	  kube-system                 kube-controller-manager-ha-747000-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m4s
	  kube-system                 kube-proxy-njpzx                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m6s
	  kube-system                 kube-scheduler-ha-747000-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m4s
	  kube-system                 kube-vip-ha-747000-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         117s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  Starting        2m2s  kube-proxy       
	  Normal  RegisteredNode  2m4s  node-controller  Node ha-747000-m03 event: Registered Node ha-747000-m03 in Controller
	  Normal  RegisteredNode  2m2s  node-controller  Node ha-747000-m03 event: Registered Node ha-747000-m03 in Controller
	  Normal  RegisteredNode  107s  node-controller  Node ha-747000-m03 event: Registered Node ha-747000-m03 in Controller
	
	
	==> dmesg <==
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +45.057208] systemd-fstab-generator[643]: Ignoring "noauto" option for root device
	[  +0.141858] systemd-fstab-generator[655]: Ignoring "noauto" option for root device
	[Mar18 11:46] systemd-fstab-generator[943]: Ignoring "noauto" option for root device
	[  +0.083583] kauditd_printk_skb: 59 callbacks suppressed
	[  +0.499633] systemd-fstab-generator[983]: Ignoring "noauto" option for root device
	[  +0.179648] systemd-fstab-generator[995]: Ignoring "noauto" option for root device
	[  +0.185540] systemd-fstab-generator[1009]: Ignoring "noauto" option for root device
	[  +2.694503] systemd-fstab-generator[1178]: Ignoring "noauto" option for root device
	[  +0.171024] systemd-fstab-generator[1190]: Ignoring "noauto" option for root device
	[  +0.163794] systemd-fstab-generator[1202]: Ignoring "noauto" option for root device
	[  +0.242185] systemd-fstab-generator[1217]: Ignoring "noauto" option for root device
	[ +13.323504] systemd-fstab-generator[1323]: Ignoring "noauto" option for root device
	[  +0.099373] kauditd_printk_skb: 205 callbacks suppressed
	[  +3.534558] systemd-fstab-generator[1523]: Ignoring "noauto" option for root device
	[  +5.653589] systemd-fstab-generator[1783]: Ignoring "noauto" option for root device
	[  +0.088517] kauditd_printk_skb: 73 callbacks suppressed
	[  +8.878414] systemd-fstab-generator[2555]: Ignoring "noauto" option for root device
	[  +0.112813] kauditd_printk_skb: 72 callbacks suppressed
	[Mar18 11:47] kauditd_printk_skb: 12 callbacks suppressed
	[  +6.152543] kauditd_printk_skb: 29 callbacks suppressed
	[  +5.070809] kauditd_printk_skb: 14 callbacks suppressed
	[Mar18 11:50] hrtimer: interrupt took 3154204 ns
	[  +8.797389] kauditd_printk_skb: 9 callbacks suppressed
	[  +8.263528] kauditd_printk_skb: 2 callbacks suppressed
	
	
	==> etcd [74283b190054] <==
	{"level":"info","ts":"2024-03-18T11:54:26.858894Z","caller":"traceutil/trace.go:171","msg":"trace[412900957] linearizableReadLoop","detail":"{readStateIndex:1516; appliedIndex:1516; }","duration":"241.515949ms","start":"2024-03-18T11:54:26.617364Z","end":"2024-03-18T11:54:26.85888Z","steps":["trace[412900957] 'read index received'  (duration: 241.511649ms)","trace[412900957] 'applied index is now lower than readState.Index'  (duration: 3µs)"],"step_count":2}
	{"level":"warn","ts":"2024-03-18T11:54:26.973966Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"356.60928ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/csinodes/ha-747000-m03\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-03-18T11:54:26.974289Z","caller":"traceutil/trace.go:171","msg":"trace[2077601542] range","detail":"{range_begin:/registry/csinodes/ha-747000-m03; range_end:; response_count:0; response_revision:1362; }","duration":"356.938885ms","start":"2024-03-18T11:54:26.617334Z","end":"2024-03-18T11:54:26.974273Z","steps":["trace[2077601542] 'agreement among raft nodes before linearized reading'  (duration: 241.664052ms)","trace[2077601542] 'range keys from in-memory index tree'  (duration: 114.902627ms)"],"step_count":2}
	{"level":"warn","ts":"2024-03-18T11:54:26.97433Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-18T11:54:26.617317Z","time spent":"357.002286ms","remote":"127.0.0.1:41996","response type":"/etcdserverpb.KV/Range","request count":0,"request size":34,"response count":0,"response size":28,"request content":"key:\"/registry/csinodes/ha-747000-m03\" "}
	{"level":"warn","ts":"2024-03-18T11:54:26.974516Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"115.899244ms","expected-duration":"100ms","prefix":"","request":"header:<ID:2457433026945685746 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/leases/kube-system/apiserver-idk7brhpggta4aze6msdt6cb3m\" mod_revision:1339 > success:<request_put:<key:\"/registry/leases/kube-system/apiserver-idk7brhpggta4aze6msdt6cb3m\" value_size:605 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2024-03-18T11:54:27.638465Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7580ae0aa539a180 switched to configuration voters=(5348020654895211034 8466958660201456000 11942146101828034972)"}
	{"level":"info","ts":"2024-03-18T11:54:27.639593Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"c9534c46938721ce","local-member-id":"7580ae0aa539a180","added-peer-id":"a5bb069b7f46a19c","added-peer-peer-urls":["https://172.30.129.111:2380"]}
	{"level":"info","ts":"2024-03-18T11:54:27.639788Z","caller":"rafthttp/peer.go:133","msg":"starting remote peer","remote-peer-id":"a5bb069b7f46a19c"}
	{"level":"info","ts":"2024-03-18T11:54:27.639873Z","caller":"rafthttp/pipeline.go:72","msg":"started HTTP pipelining with remote peer","local-member-id":"7580ae0aa539a180","remote-peer-id":"a5bb069b7f46a19c"}
	{"level":"info","ts":"2024-03-18T11:54:27.640413Z","caller":"rafthttp/peer.go:137","msg":"started remote peer","remote-peer-id":"a5bb069b7f46a19c"}
	{"level":"info","ts":"2024-03-18T11:54:27.640438Z","caller":"rafthttp/transport.go:317","msg":"added remote peer","local-member-id":"7580ae0aa539a180","remote-peer-id":"a5bb069b7f46a19c","remote-peer-urls":["https://172.30.129.111:2380"]}
	{"level":"info","ts":"2024-03-18T11:54:27.640678Z","caller":"rafthttp/stream.go:395","msg":"started stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"7580ae0aa539a180","remote-peer-id":"a5bb069b7f46a19c"}
	{"level":"info","ts":"2024-03-18T11:54:27.640716Z","caller":"etcdserver/server.go:1940","msg":"applied a configuration change through raft","local-member-id":"7580ae0aa539a180","raft-conf-change":"ConfChangeAddNode","raft-conf-change-node-id":"a5bb069b7f46a19c"}
	{"level":"info","ts":"2024-03-18T11:54:27.640925Z","caller":"rafthttp/stream.go:169","msg":"started stream writer with remote peer","local-member-id":"7580ae0aa539a180","remote-peer-id":"a5bb069b7f46a19c"}
	{"level":"info","ts":"2024-03-18T11:54:27.640946Z","caller":"rafthttp/stream.go:169","msg":"started stream writer with remote peer","local-member-id":"7580ae0aa539a180","remote-peer-id":"a5bb069b7f46a19c"}
	{"level":"info","ts":"2024-03-18T11:54:27.640687Z","caller":"rafthttp/stream.go:395","msg":"started stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"7580ae0aa539a180","remote-peer-id":"a5bb069b7f46a19c"}
	{"level":"info","ts":"2024-03-18T11:54:29.958086Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"a5bb069b7f46a19c"}
	{"level":"info","ts":"2024-03-18T11:54:29.958151Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"7580ae0aa539a180","remote-peer-id":"a5bb069b7f46a19c"}
	{"level":"info","ts":"2024-03-18T11:54:29.96091Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"7580ae0aa539a180","remote-peer-id":"a5bb069b7f46a19c"}
	{"level":"info","ts":"2024-03-18T11:54:30.006886Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"7580ae0aa539a180","to":"a5bb069b7f46a19c","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-03-18T11:54:30.00703Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"7580ae0aa539a180","remote-peer-id":"a5bb069b7f46a19c"}
	{"level":"info","ts":"2024-03-18T11:54:30.017403Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"7580ae0aa539a180","to":"a5bb069b7f46a19c","stream-type":"stream Message"}
	{"level":"info","ts":"2024-03-18T11:54:30.017488Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"7580ae0aa539a180","remote-peer-id":"a5bb069b7f46a19c"}
	{"level":"warn","ts":"2024-03-18T11:54:36.004321Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"278.056973ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/plndr-cp-lock\" ","response":"range_response_count:1 size:435"}
	{"level":"info","ts":"2024-03-18T11:54:36.005036Z","caller":"traceutil/trace.go:171","msg":"trace[844380502] range","detail":"{range_begin:/registry/leases/kube-system/plndr-cp-lock; range_end:; response_count:1; response_revision:1467; }","duration":"278.783084ms","start":"2024-03-18T11:54:35.726238Z","end":"2024-03-18T11:54:36.005021Z","steps":["trace[844380502] 'range keys from in-memory index tree'  (duration: 276.850553ms)"],"step_count":1}
	
	
	==> kernel <==
	 11:56:33 up 11 min,  0 users,  load average: 0.94, 0.73, 0.45
	Linux ha-747000 5.10.207 #1 SMP Fri Mar 15 21:13:47 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [822e0ce0b282] <==
	I0318 11:55:50.455080       1 main.go:250] Node ha-747000-m03 has CIDR [10.244.2.0/24] 
	I0318 11:56:00.461594       1 main.go:223] Handling node with IPs: map[172.30.135.65:{}]
	I0318 11:56:00.461724       1 main.go:227] handling current node
	I0318 11:56:00.461740       1 main.go:223] Handling node with IPs: map[172.30.142.66:{}]
	I0318 11:56:00.461816       1 main.go:250] Node ha-747000-m02 has CIDR [10.244.1.0/24] 
	I0318 11:56:00.461961       1 main.go:223] Handling node with IPs: map[172.30.129.111:{}]
	I0318 11:56:00.461973       1 main.go:250] Node ha-747000-m03 has CIDR [10.244.2.0/24] 
	I0318 11:56:10.478869       1 main.go:223] Handling node with IPs: map[172.30.135.65:{}]
	I0318 11:56:10.478969       1 main.go:227] handling current node
	I0318 11:56:10.478985       1 main.go:223] Handling node with IPs: map[172.30.142.66:{}]
	I0318 11:56:10.478994       1 main.go:250] Node ha-747000-m02 has CIDR [10.244.1.0/24] 
	I0318 11:56:10.479457       1 main.go:223] Handling node with IPs: map[172.30.129.111:{}]
	I0318 11:56:10.479542       1 main.go:250] Node ha-747000-m03 has CIDR [10.244.2.0/24] 
	I0318 11:56:20.490352       1 main.go:223] Handling node with IPs: map[172.30.135.65:{}]
	I0318 11:56:20.490830       1 main.go:227] handling current node
	I0318 11:56:20.490953       1 main.go:223] Handling node with IPs: map[172.30.142.66:{}]
	I0318 11:56:20.491141       1 main.go:250] Node ha-747000-m02 has CIDR [10.244.1.0/24] 
	I0318 11:56:20.491878       1 main.go:223] Handling node with IPs: map[172.30.129.111:{}]
	I0318 11:56:20.491931       1 main.go:250] Node ha-747000-m03 has CIDR [10.244.2.0/24] 
	I0318 11:56:30.506822       1 main.go:223] Handling node with IPs: map[172.30.135.65:{}]
	I0318 11:56:30.507162       1 main.go:227] handling current node
	I0318 11:56:30.507274       1 main.go:223] Handling node with IPs: map[172.30.142.66:{}]
	I0318 11:56:30.507287       1 main.go:250] Node ha-747000-m02 has CIDR [10.244.1.0/24] 
	I0318 11:56:30.507491       1 main.go:223] Handling node with IPs: map[172.30.129.111:{}]
	I0318 11:56:30.507555       1 main.go:250] Node ha-747000-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kube-apiserver [baa1747a03bf] <==
	Trace[889386109]: ---"Object stored in database" 35ms (11:50:50.128)
	Trace[889386109]: [5.273308412s] [5.273308412s] END
	I0318 11:50:50.132445       1 trace.go:236] Trace[21276057]: "Create" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:c612c140-a01e-4ba4-b8e0-4ea95e6c89c2,client:172.30.142.66,protocol:HTTP/2.0,resource:pods,scope:resource,url:/api/v1/namespaces/kube-system/pods,user-agent:kubelet/v1.28.4 (linux/amd64) kubernetes/bae2c62,verb:POST (18-Mar-2024 11:50:43.375) (total time: 6756ms):
	Trace[21276057]: ["Create etcd3" audit-id:c612c140-a01e-4ba4-b8e0-4ea95e6c89c2,key:/pods/kube-system/kube-controller-manager-ha-747000-m02,type:*core.Pod,resource:pods 6745ms (11:50:43.386)
	Trace[21276057]:  ---"Txn call succeeded" 6698ms (11:50:50.085)]
	Trace[21276057]: ---"Write to database call failed" len:2375,err:pods "kube-controller-manager-ha-747000-m02" already exists 47ms (11:50:50.132)
	Trace[21276057]: [6.756696648s] [6.756696648s] END
	I0318 11:50:50.133218       1 trace.go:236] Trace[766829271]: "Create" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:49fbfbcc-9389-4903-82b8-3ba03604b051,client:172.30.142.66,protocol:HTTP/2.0,resource:pods,scope:resource,url:/api/v1/namespaces/kube-system/pods,user-agent:kubelet/v1.28.4 (linux/amd64) kubernetes/bae2c62,verb:POST (18-Mar-2024 11:50:43.384) (total time: 6748ms):
	Trace[766829271]: ["Create etcd3" audit-id:49fbfbcc-9389-4903-82b8-3ba03604b051,key:/pods/kube-system/kube-apiserver-ha-747000-m02,type:*core.Pod,resource:pods 6743ms (11:50:43.389)
	Trace[766829271]:  ---"Txn call succeeded" 6695ms (11:50:50.085)]
	Trace[766829271]: ---"Write to database call failed" len:2991,err:pods "kube-apiserver-ha-747000-m02" already exists 47ms (11:50:50.133)
	Trace[766829271]: [6.748498834s] [6.748498834s] END
	I0318 11:50:50.134606       1 trace.go:236] Trace[516030958]: "Create" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:66f0af08-17a9-4b8a-ad01-b9b9dd0b3798,client:172.30.142.66,protocol:HTTP/2.0,resource:pods,scope:resource,url:/api/v1/namespaces/kube-system/pods,user-agent:kubelet/v1.28.4 (linux/amd64) kubernetes/bae2c62,verb:POST (18-Mar-2024 11:50:43.385) (total time: 6749ms):
	Trace[516030958]: ["Create etcd3" audit-id:66f0af08-17a9-4b8a-ad01-b9b9dd0b3798,key:/pods/kube-system/kube-scheduler-ha-747000-m02,type:*core.Pod,resource:pods 6745ms (11:50:43.389)
	Trace[516030958]:  ---"Txn call succeeded" 6693ms (11:50:50.082)]
	Trace[516030958]: ---"Write to database call failed" len:1220,err:pods "kube-scheduler-ha-747000-m02" already exists 51ms (11:50:50.134)
	Trace[516030958]: [6.749280936s] [6.749280936s] END
	I0318 11:50:50.134935       1 trace.go:236] Trace[1071308089]: "Create" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:8fcd5abd-fcb9-4a58-9050-e28982694fe8,client:172.30.142.66,protocol:HTTP/2.0,resource:pods,scope:resource,url:/api/v1/namespaces/kube-system/pods,user-agent:kubelet/v1.28.4 (linux/amd64) kubernetes/bae2c62,verb:POST (18-Mar-2024 11:50:43.385) (total time: 6748ms):
	Trace[1071308089]: ["Create etcd3" audit-id:8fcd5abd-fcb9-4a58-9050-e28982694fe8,key:/pods/kube-system/etcd-ha-747000-m02,type:*core.Pod,resource:pods 6745ms (11:50:43.388)
	Trace[1071308089]:  ---"Txn call succeeded" 6696ms (11:50:50.085)]
	Trace[1071308089]: ---"Write to database call failed" len:2207,err:pods "etcd-ha-747000-m02" already exists 49ms (11:50:50.134)
	Trace[1071308089]: [6.748802935s] [6.748802935s] END
	I0318 11:50:50.142404       1 trace.go:236] Trace[194465288]: "GuaranteedUpdate etcd3" audit-id:,key:/masterleases/172.30.135.65,type:*v1.Endpoints,resource:apiServerIPInfo (18-Mar-2024 11:50:49.240) (total time: 901ms):
	Trace[194465288]: ---"initial value restored" 845ms (11:50:50.086)
	Trace[194465288]: [901.639323ms] [901.639323ms] END
	
	
	==> kube-controller-manager [4aadeddfd704] <==
	I0318 11:55:27.975861       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5b5d89c9d6-ln6sd"
	I0318 11:55:28.032146       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="154.160398ms"
	I0318 11:55:28.088385       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="55.69402ms"
	I0318 11:55:28.091515       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="74.801µs"
	I0318 11:55:28.105607       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="263.303µs"
	I0318 11:55:28.106937       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="300.704µs"
	I0318 11:55:28.291492       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: busybox-5b5d89c9d6-ft9tr"
	I0318 11:55:28.356736       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="224.810705ms"
	I0318 11:55:28.505532       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: busybox-5b5d89c9d6-zwnvm"
	I0318 11:55:28.521407       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: busybox-5b5d89c9d6-6qkmr"
	I0318 11:55:28.521495       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: busybox-5b5d89c9d6-g9lsd"
	I0318 11:55:28.521528       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: busybox-5b5d89c9d6-74mvn"
	I0318 11:55:28.522622       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: busybox-5b5d89c9d6-25gfh"
	I0318 11:55:28.590720       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="233.839422ms"
	I0318 11:55:28.627110       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="36.047366ms"
	I0318 11:55:28.627226       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="50.801µs"
	I0318 11:55:28.712912       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="32.876125ms"
	I0318 11:55:28.713935       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="47.201µs"
	I0318 11:55:28.865959       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="53.201µs"
	I0318 11:55:31.162699       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="19.10518ms"
	I0318 11:55:31.164367       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="30.9µs"
	I0318 11:55:32.006557       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="68.393185ms"
	I0318 11:55:32.006873       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="134.301µs"
	I0318 11:55:32.355805       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="21.327989ms"
	I0318 11:55:32.358391       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="2.45611ms"
	
	
	==> kube-proxy [c1abc9fd4e5d] <==
	I0318 11:47:12.874229       1 server_others.go:69] "Using iptables proxy"
	I0318 11:47:12.891619       1 node.go:141] Successfully retrieved node IP: 172.30.135.65
	I0318 11:47:12.982620       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0318 11:47:12.982724       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0318 11:47:12.987269       1 server_others.go:152] "Using iptables Proxier"
	I0318 11:47:12.987438       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0318 11:47:12.988545       1 server.go:846] "Version info" version="v1.28.4"
	I0318 11:47:12.988631       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 11:47:12.991538       1 config.go:188] "Starting service config controller"
	I0318 11:47:12.991735       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0318 11:47:12.991842       1 config.go:97] "Starting endpoint slice config controller"
	I0318 11:47:12.991919       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0318 11:47:12.993004       1 config.go:315] "Starting node config controller"
	I0318 11:47:12.993176       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0318 11:47:13.092953       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0318 11:47:13.092961       1 shared_informer.go:318] Caches are synced for service config
	I0318 11:47:13.093649       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [ca099f2ea7c4] <==
	W0318 11:46:55.980584       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0318 11:46:55.980611       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0318 11:46:56.002422       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0318 11:46:56.002552       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0318 11:46:56.019052       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0318 11:46:56.019092       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0318 11:46:56.101868       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0318 11:46:56.101940       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0318 11:46:56.123495       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0318 11:46:56.123634       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0318 11:46:56.328059       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0318 11:46:56.328212       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0318 11:46:56.336959       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0318 11:46:56.337015       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0318 11:46:56.370944       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0318 11:46:56.370987       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0318 11:46:56.378864       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0318 11:46:56.379183       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0318 11:46:56.393905       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0318 11:46:56.394084       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0318 11:46:58.092942       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0318 11:55:28.041529       1 framework.go:1206] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-5b5d89c9d6-bfx2x\": pod busybox-5b5d89c9d6-bfx2x is already assigned to node \"ha-747000-m02\"" plugin="DefaultBinder" pod="default/busybox-5b5d89c9d6-bfx2x" node="ha-747000-m02"
	E0318 11:55:28.043346       1 schedule_one.go:319] "scheduler cache ForgetPod failed" err="pod f64d5cfe-1d7c-41d7-8dd8-779eee53eaf2(default/busybox-5b5d89c9d6-bfx2x) wasn't assumed so cannot be forgotten"
	E0318 11:55:28.045884       1 schedule_one.go:989] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-5b5d89c9d6-bfx2x\": pod busybox-5b5d89c9d6-bfx2x is already assigned to node \"ha-747000-m02\"" pod="default/busybox-5b5d89c9d6-bfx2x"
	I0318 11:55:28.047121       1 schedule_one.go:1002] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-5b5d89c9d6-bfx2x" node="ha-747000-m02"
	
	
	==> kubelet <==
	Mar 18 11:52:58 ha-747000 kubelet[2576]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 18 11:52:58 ha-747000 kubelet[2576]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 18 11:53:58 ha-747000 kubelet[2576]: E0318 11:53:58.888178    2576 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 18 11:53:58 ha-747000 kubelet[2576]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 18 11:53:58 ha-747000 kubelet[2576]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 18 11:53:58 ha-747000 kubelet[2576]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 18 11:53:58 ha-747000 kubelet[2576]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 18 11:54:58 ha-747000 kubelet[2576]: E0318 11:54:58.889691    2576 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 18 11:54:58 ha-747000 kubelet[2576]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 18 11:54:58 ha-747000 kubelet[2576]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 18 11:54:58 ha-747000 kubelet[2576]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 18 11:54:58 ha-747000 kubelet[2576]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 18 11:55:28 ha-747000 kubelet[2576]: I0318 11:55:28.005085    2576 topology_manager.go:215] "Topology Admit Handler" podUID="5e248e3c-3bf8-4136-a0c1-8b864f64b098" podNamespace="default" podName="busybox-5b5d89c9d6-qvfgv"
	Mar 18 11:55:28 ha-747000 kubelet[2576]: W0318 11:55:28.026522    2576 reflector.go:535] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ha-747000" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'ha-747000' and this object
	Mar 18 11:55:28 ha-747000 kubelet[2576]: E0318 11:55:28.036283    2576 reflector.go:147] object-"default"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ha-747000" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'ha-747000' and this object
	Mar 18 11:55:28 ha-747000 kubelet[2576]: I0318 11:55:28.099816    2576 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kjxvc\" (UniqueName: \"kubernetes.io/projected/5e248e3c-3bf8-4136-a0c1-8b864f64b098-kube-api-access-kjxvc\") pod \"busybox-5b5d89c9d6-qvfgv\" (UID: \"5e248e3c-3bf8-4136-a0c1-8b864f64b098\") " pod="default/busybox-5b5d89c9d6-qvfgv"
	Mar 18 11:55:29 ha-747000 kubelet[2576]: E0318 11:55:29.309788    2576 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
	Mar 18 11:55:29 ha-747000 kubelet[2576]: E0318 11:55:29.309845    2576 projected.go:198] Error preparing data for projected volume kube-api-access-kjxvc for pod default/busybox-5b5d89c9d6-qvfgv: failed to sync configmap cache: timed out waiting for the condition
	Mar 18 11:55:29 ha-747000 kubelet[2576]: E0318 11:55:29.311869    2576 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5e248e3c-3bf8-4136-a0c1-8b864f64b098-kube-api-access-kjxvc podName:5e248e3c-3bf8-4136-a0c1-8b864f64b098 nodeName:}" failed. No retries permitted until 2024-03-18 11:55:29.810704087 +0000 UTC m=+511.246563687 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-kjxvc" (UniqueName: "kubernetes.io/projected/5e248e3c-3bf8-4136-a0c1-8b864f64b098-kube-api-access-kjxvc") pod "busybox-5b5d89c9d6-qvfgv" (UID: "5e248e3c-3bf8-4136-a0c1-8b864f64b098") : failed to sync configmap cache: timed out waiting for the condition
	Mar 18 11:55:30 ha-747000 kubelet[2576]: I0318 11:55:30.263868    2576 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="280c7276a55904857afc7a24b65f72bde51b05280c8d6a2201ccac07f937ec74"
	Mar 18 11:55:58 ha-747000 kubelet[2576]: E0318 11:55:58.885521    2576 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 18 11:55:58 ha-747000 kubelet[2576]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 18 11:55:58 ha-747000 kubelet[2576]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 18 11:55:58 ha-747000 kubelet[2576]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 18 11:55:58 ha-747000 kubelet[2576]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

-- /stdout --
** stderr ** 
	W0318 11:56:25.381509   10044 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-747000 -n ha-747000
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-747000 -n ha-747000: (11.7354952s)
helpers_test.go:261: (dbg) Run:  kubectl --context ha-747000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/PingHostFromPods FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/PingHostFromPods (66.82s)

TestMultiControlPlane/serial/RestartSecondaryNode (139.37s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-747000 node start m02 -v=7 --alsologtostderr
ha_test.go:420: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-747000 node start m02 -v=7 --alsologtostderr: exit status 1 (54.9434977s)

-- stdout --
	* Starting "ha-747000-m02" control-plane node in "ha-747000" cluster
	* Restarting existing hyperv VM for "ha-747000-m02" ...

-- /stdout --
** stderr ** 
	W0318 12:13:12.047129    7340 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0318 12:13:12.120348    7340 out.go:291] Setting OutFile to fd 980 ...
	I0318 12:13:12.142063    7340 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 12:13:12.142063    7340 out.go:304] Setting ErrFile to fd 596...
	I0318 12:13:12.142063    7340 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 12:13:12.156388    7340 mustload.go:65] Loading cluster: ha-747000
	I0318 12:13:12.157014    7340 config.go:182] Loaded profile config "ha-747000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 12:13:12.157785    7340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-747000-m02 ).state
	I0318 12:13:14.188293    7340 main.go:141] libmachine: [stdout =====>] : Off
	
	I0318 12:13:14.188293    7340 main.go:141] libmachine: [stderr =====>] : 
	W0318 12:13:14.188293    7340 host.go:58] "ha-747000-m02" host status: Stopped
	I0318 12:13:14.192276    7340 out.go:177] * Starting "ha-747000-m02" control-plane node in "ha-747000" cluster
	I0318 12:13:14.195713    7340 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0318 12:13:14.195885    7340 preload.go:147] Found local preload: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I0318 12:13:14.195982    7340 cache.go:56] Caching tarball of preloaded images
	I0318 12:13:14.196424    7340 preload.go:173] Found C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0318 12:13:14.196424    7340 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0318 12:13:14.196424    7340 profile.go:142] Saving config to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-747000\config.json ...
	I0318 12:13:14.198568    7340 start.go:360] acquireMachinesLock for ha-747000-m02: {Name:mk88ace50ad3bf72786f3a589a5328076247f3a1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 12:13:14.198568    7340 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-747000-m02"
	I0318 12:13:14.198568    7340 start.go:96] Skipping create...Using existing machine configuration
	I0318 12:13:14.199615    7340 fix.go:54] fixHost starting: m02
	I0318 12:13:14.199792    7340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-747000-m02 ).state
	I0318 12:13:16.209688    7340 main.go:141] libmachine: [stdout =====>] : Off
	
	I0318 12:13:16.209688    7340 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:13:16.209688    7340 fix.go:112] recreateIfNeeded on ha-747000-m02: state=Stopped err=<nil>
	W0318 12:13:16.209688    7340 fix.go:138] unexpected machine state, will restart: <nil>
	I0318 12:13:16.213590    7340 out.go:177] * Restarting existing hyperv VM for "ha-747000-m02" ...
	I0318 12:13:16.216548    7340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-747000-m02
	I0318 12:13:19.145913    7340 main.go:141] libmachine: [stdout =====>] : 
	I0318 12:13:19.145913    7340 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:13:19.145913    7340 main.go:141] libmachine: Waiting for host to start...
	I0318 12:13:19.146014    7340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-747000-m02 ).state
	I0318 12:13:21.310537    7340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 12:13:21.310537    7340 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:13:21.310664    7340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-747000-m02 ).networkadapters[0]).ipaddresses[0]
	I0318 12:13:23.710152    7340 main.go:141] libmachine: [stdout =====>] : 
	I0318 12:13:23.712825    7340 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:13:24.731131    7340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-747000-m02 ).state
	I0318 12:13:26.874156    7340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 12:13:26.885952    7340 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:13:26.886071    7340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-747000-m02 ).networkadapters[0]).ipaddresses[0]
	I0318 12:13:29.337332    7340 main.go:141] libmachine: [stdout =====>] : 
	I0318 12:13:29.337332    7340 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:13:30.345641    7340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-747000-m02 ).state
	I0318 12:13:32.421185    7340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 12:13:32.423225    7340 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:13:32.423418    7340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-747000-m02 ).networkadapters[0]).ipaddresses[0]
	I0318 12:13:34.878960    7340 main.go:141] libmachine: [stdout =====>] : 
	I0318 12:13:34.878960    7340 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:13:35.886555    7340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-747000-m02 ).state
	I0318 12:13:37.992474    7340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 12:13:37.992474    7340 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:13:37.992770    7340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-747000-m02 ).networkadapters[0]).ipaddresses[0]
	I0318 12:13:40.391413    7340 main.go:141] libmachine: [stdout =====>] : 
	I0318 12:13:40.391413    7340 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:13:41.411918    7340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-747000-m02 ).state
	I0318 12:13:43.463322    7340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 12:13:43.474639    7340 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:13:43.475079    7340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-747000-m02 ).networkadapters[0]).ipaddresses[0]
	I0318 12:13:45.884434    7340 main.go:141] libmachine: [stdout =====>] : 172.30.143.34
	
	I0318 12:13:45.884684    7340 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:13:45.887157    7340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-747000-m02 ).state
	I0318 12:13:47.901551    7340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 12:13:47.901551    7340 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:13:47.901551    7340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-747000-m02 ).networkadapters[0]).ipaddresses[0]
	I0318 12:13:50.355227    7340 main.go:141] libmachine: [stdout =====>] : 172.30.143.34
	
	I0318 12:13:50.355536    7340 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:13:50.355686    7340 profile.go:142] Saving config to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-747000\config.json ...
	I0318 12:13:50.358259    7340 machine.go:94] provisionDockerMachine start ...
	I0318 12:13:50.358949    7340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-747000-m02 ).state
	I0318 12:13:52.362135    7340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 12:13:52.362484    7340 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:13:52.362580    7340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-747000-m02 ).networkadapters[0]).ipaddresses[0]
	I0318 12:13:54.779277    7340 main.go:141] libmachine: [stdout =====>] : 172.30.143.34
	
	I0318 12:13:54.782679    7340 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:13:54.788461    7340 main.go:141] libmachine: Using SSH client type: native
	I0318 12:13:54.789172    7340 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa89f80] 0xa8cb60 <nil>  [] 0s} 172.30.143.34 22 <nil> <nil>}
	I0318 12:13:54.789172    7340 main.go:141] libmachine: About to run SSH command:
	hostname
	I0318 12:13:54.916141    7340 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0318 12:13:54.916141    7340 buildroot.go:166] provisioning hostname "ha-747000-m02"
	I0318 12:13:54.916141    7340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-747000-m02 ).state
	I0318 12:13:56.943827    7340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 12:13:56.951791    7340 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:13:56.951991    7340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-747000-m02 ).networkadapters[0]).ipaddresses[0]
	I0318 12:13:59.408255    7340 main.go:141] libmachine: [stdout =====>] : 172.30.143.34
	
	I0318 12:13:59.408255    7340 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:13:59.425422    7340 main.go:141] libmachine: Using SSH client type: native
	I0318 12:13:59.425996    7340 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa89f80] 0xa8cb60 <nil>  [] 0s} 172.30.143.34 22 <nil> <nil>}
	I0318 12:13:59.425996    7340 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-747000-m02 && echo "ha-747000-m02" | sudo tee /etc/hostname
	I0318 12:13:59.576245    7340 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-747000-m02
	
	I0318 12:13:59.576324    7340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-747000-m02 ).state
	I0318 12:14:01.616033    7340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 12:14:01.628268    7340 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:14:01.628361    7340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-747000-m02 ).networkadapters[0]).ipaddresses[0]
	I0318 12:14:04.066884    7340 main.go:141] libmachine: [stdout =====>] : 172.30.143.34
	
	I0318 12:14:04.066884    7340 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:14:04.085543    7340 main.go:141] libmachine: Using SSH client type: native
	I0318 12:14:04.085746    7340 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa89f80] 0xa8cb60 <nil>  [] 0s} 172.30.143.34 22 <nil> <nil>}
	I0318 12:14:04.085746    7340 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-747000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-747000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-747000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0318 12:14:04.224791    7340 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0318 12:14:04.224791    7340 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube3\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube3\minikube-integration\.minikube}
	I0318 12:14:04.224791    7340 buildroot.go:174] setting up certificates
	I0318 12:14:04.224791    7340 provision.go:84] configureAuth start
	I0318 12:14:04.224791    7340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-747000-m02 ).state
	I0318 12:14:06.285464    7340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 12:14:06.285506    7340 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:14:06.285575    7340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-747000-m02 ).networkadapters[0]).ipaddresses[0]

** /stderr **
ha_test.go:422: W0318 12:13:12.047129    7340 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I0318 12:13:12.120348    7340 out.go:291] Setting OutFile to fd 980 ...
I0318 12:13:12.142063    7340 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0318 12:13:12.142063    7340 out.go:304] Setting ErrFile to fd 596...
I0318 12:13:12.142063    7340 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0318 12:13:12.156388    7340 mustload.go:65] Loading cluster: ha-747000
I0318 12:13:12.157014    7340 config.go:182] Loaded profile config "ha-747000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0318 12:13:12.157785    7340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-747000-m02 ).state
I0318 12:13:14.188293    7340 main.go:141] libmachine: [stdout =====>] : Off

I0318 12:13:14.188293    7340 main.go:141] libmachine: [stderr =====>] : 
W0318 12:13:14.188293    7340 host.go:58] "ha-747000-m02" host status: Stopped
I0318 12:13:14.192276    7340 out.go:177] * Starting "ha-747000-m02" control-plane node in "ha-747000" cluster
I0318 12:13:14.195713    7340 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
I0318 12:13:14.195885    7340 preload.go:147] Found local preload: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
I0318 12:13:14.195982    7340 cache.go:56] Caching tarball of preloaded images
I0318 12:13:14.196424    7340 preload.go:173] Found C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0318 12:13:14.196424    7340 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
I0318 12:13:14.196424    7340 profile.go:142] Saving config to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-747000\config.json ...
I0318 12:13:14.198568    7340 start.go:360] acquireMachinesLock for ha-747000-m02: {Name:mk88ace50ad3bf72786f3a589a5328076247f3a1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0318 12:13:14.198568    7340 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-747000-m02"
I0318 12:13:14.198568    7340 start.go:96] Skipping create...Using existing machine configuration
I0318 12:13:14.199615    7340 fix.go:54] fixHost starting: m02
I0318 12:13:14.199792    7340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-747000-m02 ).state
I0318 12:13:16.209688    7340 main.go:141] libmachine: [stdout =====>] : Off

I0318 12:13:16.209688    7340 main.go:141] libmachine: [stderr =====>] : 
I0318 12:13:16.209688    7340 fix.go:112] recreateIfNeeded on ha-747000-m02: state=Stopped err=<nil>
W0318 12:13:16.209688    7340 fix.go:138] unexpected machine state, will restart: <nil>
I0318 12:13:16.213590    7340 out.go:177] * Restarting existing hyperv VM for "ha-747000-m02" ...
I0318 12:13:16.216548    7340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-747000-m02
I0318 12:13:19.145913    7340 main.go:141] libmachine: [stdout =====>] : 
I0318 12:13:19.145913    7340 main.go:141] libmachine: [stderr =====>] : 
I0318 12:13:19.145913    7340 main.go:141] libmachine: Waiting for host to start...
I0318 12:13:19.146014    7340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-747000-m02 ).state
I0318 12:13:21.310537    7340 main.go:141] libmachine: [stdout =====>] : Running

I0318 12:13:21.310537    7340 main.go:141] libmachine: [stderr =====>] : 
I0318 12:13:21.310664    7340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-747000-m02 ).networkadapters[0]).ipaddresses[0]
I0318 12:13:23.710152    7340 main.go:141] libmachine: [stdout =====>] : 
I0318 12:13:23.712825    7340 main.go:141] libmachine: [stderr =====>] : 
I0318 12:13:24.731131    7340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-747000-m02 ).state
I0318 12:13:26.874156    7340 main.go:141] libmachine: [stdout =====>] : Running

I0318 12:13:26.885952    7340 main.go:141] libmachine: [stderr =====>] : 
I0318 12:13:26.886071    7340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-747000-m02 ).networkadapters[0]).ipaddresses[0]
I0318 12:13:29.337332    7340 main.go:141] libmachine: [stdout =====>] : 
I0318 12:13:29.337332    7340 main.go:141] libmachine: [stderr =====>] : 
I0318 12:13:30.345641    7340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-747000-m02 ).state
I0318 12:13:32.421185    7340 main.go:141] libmachine: [stdout =====>] : Running

I0318 12:13:32.423225    7340 main.go:141] libmachine: [stderr =====>] : 
I0318 12:13:32.423418    7340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-747000-m02 ).networkadapters[0]).ipaddresses[0]
I0318 12:13:34.878960    7340 main.go:141] libmachine: [stdout =====>] : 
I0318 12:13:34.878960    7340 main.go:141] libmachine: [stderr =====>] : 
I0318 12:13:35.886555    7340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-747000-m02 ).state
I0318 12:13:37.992474    7340 main.go:141] libmachine: [stdout =====>] : Running

I0318 12:13:37.992474    7340 main.go:141] libmachine: [stderr =====>] : 
I0318 12:13:37.992770    7340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-747000-m02 ).networkadapters[0]).ipaddresses[0]
I0318 12:13:40.391413    7340 main.go:141] libmachine: [stdout =====>] : 
I0318 12:13:40.391413    7340 main.go:141] libmachine: [stderr =====>] : 
I0318 12:13:41.411918    7340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-747000-m02 ).state
I0318 12:13:43.463322    7340 main.go:141] libmachine: [stdout =====>] : Running

I0318 12:13:43.474639    7340 main.go:141] libmachine: [stderr =====>] : 
I0318 12:13:43.475079    7340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-747000-m02 ).networkadapters[0]).ipaddresses[0]
I0318 12:13:45.884434    7340 main.go:141] libmachine: [stdout =====>] : 172.30.143.34

I0318 12:13:45.884684    7340 main.go:141] libmachine: [stderr =====>] : 
I0318 12:13:45.887157    7340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-747000-m02 ).state
I0318 12:13:47.901551    7340 main.go:141] libmachine: [stdout =====>] : Running

I0318 12:13:47.901551    7340 main.go:141] libmachine: [stderr =====>] : 
I0318 12:13:47.901551    7340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-747000-m02 ).networkadapters[0]).ipaddresses[0]
I0318 12:13:50.355227    7340 main.go:141] libmachine: [stdout =====>] : 172.30.143.34

I0318 12:13:50.355536    7340 main.go:141] libmachine: [stderr =====>] : 
I0318 12:13:50.355686    7340 profile.go:142] Saving config to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-747000\config.json ...
I0318 12:13:50.358259    7340 machine.go:94] provisionDockerMachine start ...
I0318 12:13:50.358949    7340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-747000-m02 ).state
I0318 12:13:52.362135    7340 main.go:141] libmachine: [stdout =====>] : Running

I0318 12:13:52.362484    7340 main.go:141] libmachine: [stderr =====>] : 
I0318 12:13:52.362580    7340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-747000-m02 ).networkadapters[0]).ipaddresses[0]
I0318 12:13:54.779277    7340 main.go:141] libmachine: [stdout =====>] : 172.30.143.34

I0318 12:13:54.782679    7340 main.go:141] libmachine: [stderr =====>] : 
I0318 12:13:54.788461    7340 main.go:141] libmachine: Using SSH client type: native
I0318 12:13:54.789172    7340 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa89f80] 0xa8cb60 <nil>  [] 0s} 172.30.143.34 22 <nil> <nil>}
I0318 12:13:54.789172    7340 main.go:141] libmachine: About to run SSH command:
hostname
I0318 12:13:54.916141    7340 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube

I0318 12:13:54.916141    7340 buildroot.go:166] provisioning hostname "ha-747000-m02"
I0318 12:13:54.916141    7340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-747000-m02 ).state
I0318 12:13:56.943827    7340 main.go:141] libmachine: [stdout =====>] : Running

I0318 12:13:56.951791    7340 main.go:141] libmachine: [stderr =====>] : 
I0318 12:13:56.951991    7340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-747000-m02 ).networkadapters[0]).ipaddresses[0]
I0318 12:13:59.408255    7340 main.go:141] libmachine: [stdout =====>] : 172.30.143.34

I0318 12:13:59.408255    7340 main.go:141] libmachine: [stderr =====>] : 
I0318 12:13:59.425422    7340 main.go:141] libmachine: Using SSH client type: native
I0318 12:13:59.425996    7340 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa89f80] 0xa8cb60 <nil>  [] 0s} 172.30.143.34 22 <nil> <nil>}
I0318 12:13:59.425996    7340 main.go:141] libmachine: About to run SSH command:
sudo hostname ha-747000-m02 && echo "ha-747000-m02" | sudo tee /etc/hostname
I0318 12:13:59.576245    7340 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-747000-m02

I0318 12:13:59.576324    7340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-747000-m02 ).state
I0318 12:14:01.616033    7340 main.go:141] libmachine: [stdout =====>] : Running

I0318 12:14:01.628268    7340 main.go:141] libmachine: [stderr =====>] : 
I0318 12:14:01.628361    7340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-747000-m02 ).networkadapters[0]).ipaddresses[0]
I0318 12:14:04.066884    7340 main.go:141] libmachine: [stdout =====>] : 172.30.143.34

I0318 12:14:04.066884    7340 main.go:141] libmachine: [stderr =====>] : 
I0318 12:14:04.085543    7340 main.go:141] libmachine: Using SSH client type: native
I0318 12:14:04.085746    7340 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa89f80] 0xa8cb60 <nil>  [] 0s} 172.30.143.34 22 <nil> <nil>}
I0318 12:14:04.085746    7340 main.go:141] libmachine: About to run SSH command:

		if ! grep -xq '.*\sha-747000-m02' /etc/hosts; then
			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-747000-m02/g' /etc/hosts;
			else 
				echo '127.0.1.1 ha-747000-m02' | sudo tee -a /etc/hosts; 
			fi
		fi
I0318 12:14:04.224791    7340 main.go:141] libmachine: SSH cmd err, output: <nil>: 
I0318 12:14:04.224791    7340 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube3\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube3\minikube-integration\.minikube}
I0318 12:14:04.224791    7340 buildroot.go:174] setting up certificates
I0318 12:14:04.224791    7340 provision.go:84] configureAuth start
I0318 12:14:04.224791    7340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-747000-m02 ).state
I0318 12:14:06.285464    7340 main.go:141] libmachine: [stdout =====>] : Running

I0318 12:14:06.285506    7340 main.go:141] libmachine: [stderr =====>] : 
I0318 12:14:06.285575    7340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-747000-m02 ).networkadapters[0]).ipaddresses[0]
ha_test.go:423: secondary control-plane node start returned an error. args "out/minikube-windows-amd64.exe -p ha-747000 node start m02 -v=7 --alsologtostderr": exit status 1
ha_test.go:428: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-747000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-747000 status -v=7 --alsologtostderr: context deadline exceeded (0s)
ha_test.go:428: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-747000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-747000 status -v=7 --alsologtostderr: context deadline exceeded (0s)
ha_test.go:428: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-747000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-747000 status -v=7 --alsologtostderr: context deadline exceeded (62.4µs)
ha_test.go:428: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-747000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-747000 status -v=7 --alsologtostderr: context deadline exceeded (178.5µs)
ha_test.go:428: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-747000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-747000 status -v=7 --alsologtostderr: context deadline exceeded (106.4µs)
ha_test.go:428: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-747000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-747000 status -v=7 --alsologtostderr: context deadline exceeded (0s)
ha_test.go:428: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-747000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-747000 status -v=7 --alsologtostderr: context deadline exceeded (0s)
ha_test.go:428: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-747000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-747000 status -v=7 --alsologtostderr: context deadline exceeded (0s)
ha_test.go:428: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-747000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-747000 status -v=7 --alsologtostderr: context deadline exceeded (0s)
ha_test.go:432: failed to run minikube status. args "out/minikube-windows-amd64.exe -p ha-747000 status -v=7 --alsologtostderr" : context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-747000 -n ha-747000
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-747000 -n ha-747000: (12.0715462s)
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-747000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p ha-747000 logs -n 25: (8.4877405s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------------------------------------|-----------|-------------------|---------|---------------------|---------------------|
	| Command |                                                           Args                                                            |  Profile  |       User        | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------------------------------------|-----------|-------------------|---------|---------------------|---------------------|
	| ssh     | ha-747000 ssh -n                                                                                                          | ha-747000 | minikube3\jenkins | v1.32.0 | 18 Mar 24 12:07 UTC | 18 Mar 24 12:07 UTC |
	|         | ha-747000-m03 sudo cat                                                                                                    |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                  |           |                   |         |                     |                     |
	| cp      | ha-747000 cp ha-747000-m03:/home/docker/cp-test.txt                                                                       | ha-747000 | minikube3\jenkins | v1.32.0 | 18 Mar 24 12:07 UTC | 18 Mar 24 12:07 UTC |
	|         | ha-747000:/home/docker/cp-test_ha-747000-m03_ha-747000.txt                                                                |           |                   |         |                     |                     |
	| ssh     | ha-747000 ssh -n                                                                                                          | ha-747000 | minikube3\jenkins | v1.32.0 | 18 Mar 24 12:07 UTC | 18 Mar 24 12:07 UTC |
	|         | ha-747000-m03 sudo cat                                                                                                    |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                  |           |                   |         |                     |                     |
	| ssh     | ha-747000 ssh -n ha-747000 sudo cat                                                                                       | ha-747000 | minikube3\jenkins | v1.32.0 | 18 Mar 24 12:07 UTC | 18 Mar 24 12:08 UTC |
	|         | /home/docker/cp-test_ha-747000-m03_ha-747000.txt                                                                          |           |                   |         |                     |                     |
	| cp      | ha-747000 cp ha-747000-m03:/home/docker/cp-test.txt                                                                       | ha-747000 | minikube3\jenkins | v1.32.0 | 18 Mar 24 12:08 UTC | 18 Mar 24 12:08 UTC |
	|         | ha-747000-m02:/home/docker/cp-test_ha-747000-m03_ha-747000-m02.txt                                                        |           |                   |         |                     |                     |
	| ssh     | ha-747000 ssh -n                                                                                                          | ha-747000 | minikube3\jenkins | v1.32.0 | 18 Mar 24 12:08 UTC | 18 Mar 24 12:08 UTC |
	|         | ha-747000-m03 sudo cat                                                                                                    |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                  |           |                   |         |                     |                     |
	| ssh     | ha-747000 ssh -n ha-747000-m02 sudo cat                                                                                   | ha-747000 | minikube3\jenkins | v1.32.0 | 18 Mar 24 12:08 UTC | 18 Mar 24 12:08 UTC |
	|         | /home/docker/cp-test_ha-747000-m03_ha-747000-m02.txt                                                                      |           |                   |         |                     |                     |
	| cp      | ha-747000 cp ha-747000-m03:/home/docker/cp-test.txt                                                                       | ha-747000 | minikube3\jenkins | v1.32.0 | 18 Mar 24 12:08 UTC | 18 Mar 24 12:08 UTC |
	|         | ha-747000-m04:/home/docker/cp-test_ha-747000-m03_ha-747000-m04.txt                                                        |           |                   |         |                     |                     |
	| ssh     | ha-747000 ssh -n                                                                                                          | ha-747000 | minikube3\jenkins | v1.32.0 | 18 Mar 24 12:08 UTC | 18 Mar 24 12:09 UTC |
	|         | ha-747000-m03 sudo cat                                                                                                    |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                  |           |                   |         |                     |                     |
	| ssh     | ha-747000 ssh -n ha-747000-m04 sudo cat                                                                                   | ha-747000 | minikube3\jenkins | v1.32.0 | 18 Mar 24 12:09 UTC | 18 Mar 24 12:09 UTC |
	|         | /home/docker/cp-test_ha-747000-m03_ha-747000-m04.txt                                                                      |           |                   |         |                     |                     |
	| cp      | ha-747000 cp testdata\cp-test.txt                                                                                         | ha-747000 | minikube3\jenkins | v1.32.0 | 18 Mar 24 12:09 UTC | 18 Mar 24 12:09 UTC |
	|         | ha-747000-m04:/home/docker/cp-test.txt                                                                                    |           |                   |         |                     |                     |
	| ssh     | ha-747000 ssh -n                                                                                                          | ha-747000 | minikube3\jenkins | v1.32.0 | 18 Mar 24 12:09 UTC | 18 Mar 24 12:09 UTC |
	|         | ha-747000-m04 sudo cat                                                                                                    |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                  |           |                   |         |                     |                     |
	| cp      | ha-747000 cp ha-747000-m04:/home/docker/cp-test.txt                                                                       | ha-747000 | minikube3\jenkins | v1.32.0 | 18 Mar 24 12:09 UTC | 18 Mar 24 12:09 UTC |
	|         | C:\Users\jenkins.minikube3\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile3888916617\001\cp-test_ha-747000-m04.txt |           |                   |         |                     |                     |
	| ssh     | ha-747000 ssh -n                                                                                                          | ha-747000 | minikube3\jenkins | v1.32.0 | 18 Mar 24 12:09 UTC | 18 Mar 24 12:09 UTC |
	|         | ha-747000-m04 sudo cat                                                                                                    |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                  |           |                   |         |                     |                     |
	| cp      | ha-747000 cp ha-747000-m04:/home/docker/cp-test.txt                                                                       | ha-747000 | minikube3\jenkins | v1.32.0 | 18 Mar 24 12:09 UTC | 18 Mar 24 12:10 UTC |
	|         | ha-747000:/home/docker/cp-test_ha-747000-m04_ha-747000.txt                                                                |           |                   |         |                     |                     |
	| ssh     | ha-747000 ssh -n                                                                                                          | ha-747000 | minikube3\jenkins | v1.32.0 | 18 Mar 24 12:10 UTC | 18 Mar 24 12:10 UTC |
	|         | ha-747000-m04 sudo cat                                                                                                    |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                  |           |                   |         |                     |                     |
	| ssh     | ha-747000 ssh -n ha-747000 sudo cat                                                                                       | ha-747000 | minikube3\jenkins | v1.32.0 | 18 Mar 24 12:10 UTC | 18 Mar 24 12:10 UTC |
	|         | /home/docker/cp-test_ha-747000-m04_ha-747000.txt                                                                          |           |                   |         |                     |                     |
	| cp      | ha-747000 cp ha-747000-m04:/home/docker/cp-test.txt                                                                       | ha-747000 | minikube3\jenkins | v1.32.0 | 18 Mar 24 12:10 UTC | 18 Mar 24 12:10 UTC |
	|         | ha-747000-m02:/home/docker/cp-test_ha-747000-m04_ha-747000-m02.txt                                                        |           |                   |         |                     |                     |
	| ssh     | ha-747000 ssh -n                                                                                                          | ha-747000 | minikube3\jenkins | v1.32.0 | 18 Mar 24 12:10 UTC | 18 Mar 24 12:10 UTC |
	|         | ha-747000-m04 sudo cat                                                                                                    |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                  |           |                   |         |                     |                     |
	| ssh     | ha-747000 ssh -n ha-747000-m02 sudo cat                                                                                   | ha-747000 | minikube3\jenkins | v1.32.0 | 18 Mar 24 12:10 UTC | 18 Mar 24 12:11 UTC |
	|         | /home/docker/cp-test_ha-747000-m04_ha-747000-m02.txt                                                                      |           |                   |         |                     |                     |
	| cp      | ha-747000 cp ha-747000-m04:/home/docker/cp-test.txt                                                                       | ha-747000 | minikube3\jenkins | v1.32.0 | 18 Mar 24 12:11 UTC | 18 Mar 24 12:11 UTC |
	|         | ha-747000-m03:/home/docker/cp-test_ha-747000-m04_ha-747000-m03.txt                                                        |           |                   |         |                     |                     |
	| ssh     | ha-747000 ssh -n                                                                                                          | ha-747000 | minikube3\jenkins | v1.32.0 | 18 Mar 24 12:11 UTC | 18 Mar 24 12:11 UTC |
	|         | ha-747000-m04 sudo cat                                                                                                    |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                  |           |                   |         |                     |                     |
	| ssh     | ha-747000 ssh -n ha-747000-m03 sudo cat                                                                                   | ha-747000 | minikube3\jenkins | v1.32.0 | 18 Mar 24 12:11 UTC | 18 Mar 24 12:11 UTC |
	|         | /home/docker/cp-test_ha-747000-m04_ha-747000-m03.txt                                                                      |           |                   |         |                     |                     |
	| node    | ha-747000 node stop m02 -v=7                                                                                              | ha-747000 | minikube3\jenkins | v1.32.0 | 18 Mar 24 12:11 UTC | 18 Mar 24 12:12 UTC |
	|         | --alsologtostderr                                                                                                         |           |                   |         |                     |                     |
	| node    | ha-747000 node start m02 -v=7                                                                                             | ha-747000 | minikube3\jenkins | v1.32.0 | 18 Mar 24 12:13 UTC |                     |
	|         | --alsologtostderr                                                                                                         |           |                   |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------------------------------------|-----------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/18 11:44:07
	Running on machine: minikube3
	Binary: Built with gc go1.22.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0318 11:44:07.032139   13008 out.go:291] Setting OutFile to fd 800 ...
	I0318 11:44:07.032389   13008 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 11:44:07.032389   13008 out.go:304] Setting ErrFile to fd 1020...
	I0318 11:44:07.032389   13008 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 11:44:07.051810   13008 out.go:298] Setting JSON to false
	I0318 11:44:07.055460   13008 start.go:129] hostinfo: {"hostname":"minikube3","uptime":310824,"bootTime":1710451423,"procs":189,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4046 Build 19045.4046","kernelVersion":"10.0.19045.4046 Build 19045.4046","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"a0f355d5-8b6e-4346-9071-73232725d096"}
	W0318 11:44:07.055460   13008 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0318 11:44:07.065824   13008 out.go:177] * [ha-747000] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4046 Build 19045.4046
	I0318 11:44:07.070739   13008 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	I0318 11:44:07.069679   13008 notify.go:220] Checking for updates...
	I0318 11:44:07.075794   13008 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 11:44:07.076618   13008 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube3\minikube-integration\.minikube
	I0318 11:44:07.081479   13008 out.go:177]   - MINIKUBE_LOCATION=18429
	I0318 11:44:07.082508   13008 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 11:44:07.085163   13008 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 11:44:11.826968   13008 out.go:177] * Using the hyperv driver based on user configuration
	I0318 11:44:11.830802   13008 start.go:297] selected driver: hyperv
	I0318 11:44:11.830802   13008 start.go:901] validating driver "hyperv" against <nil>
	I0318 11:44:11.830802   13008 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 11:44:11.878104   13008 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0318 11:44:11.878988   13008 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 11:44:11.878988   13008 cni.go:84] Creating CNI manager for ""
	I0318 11:44:11.878988   13008 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0318 11:44:11.878988   13008 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0318 11:44:11.880273   13008 start.go:340] cluster config:
	{Name:ha-747000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-747000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 11:44:11.880548   13008 iso.go:125] acquiring lock: {Name:mkefa7a887b9de67c22e0418da52288a5b488924 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 11:44:11.884916   13008 out.go:177] * Starting "ha-747000" primary control-plane node in "ha-747000" cluster
	I0318 11:44:11.887439   13008 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0318 11:44:11.887575   13008 preload.go:147] Found local preload: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I0318 11:44:11.887575   13008 cache.go:56] Caching tarball of preloaded images
	I0318 11:44:11.887575   13008 preload.go:173] Found C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0318 11:44:11.888382   13008 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0318 11:44:11.888382   13008 profile.go:142] Saving config to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-747000\config.json ...
	I0318 11:44:11.889119   13008 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-747000\config.json: {Name:mkd01eb0d386b6895348db840d4e4956154276ec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 11:44:11.890247   13008 start.go:360] acquireMachinesLock for ha-747000: {Name:mk88ace50ad3bf72786f3a589a5328076247f3a1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 11:44:11.890247   13008 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-747000"
	I0318 11:44:11.890874   13008 start.go:93] Provisioning new machine with config: &{Name:ha-747000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-747000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 11:44:11.891055   13008 start.go:125] createHost starting for "" (driver="hyperv")
	I0318 11:44:11.893440   13008 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0318 11:44:11.893980   13008 start.go:159] libmachine.API.Create for "ha-747000" (driver="hyperv")
	I0318 11:44:11.893980   13008 client.go:168] LocalClient.Create starting
	I0318 11:44:11.894581   13008 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem
	I0318 11:44:11.894581   13008 main.go:141] libmachine: Decoding PEM data...
	I0318 11:44:11.894581   13008 main.go:141] libmachine: Parsing certificate...
	I0318 11:44:11.894581   13008 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem
	I0318 11:44:11.895261   13008 main.go:141] libmachine: Decoding PEM data...
	I0318 11:44:11.895261   13008 main.go:141] libmachine: Parsing certificate...
	I0318 11:44:11.895408   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0318 11:44:13.657992   13008 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0318 11:44:13.657992   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:44:13.658107   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0318 11:44:15.164283   13008 main.go:141] libmachine: [stdout =====>] : False
	
	I0318 11:44:15.164283   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:44:15.164283   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0318 11:44:16.522292   13008 main.go:141] libmachine: [stdout =====>] : True
	
	I0318 11:44:16.528578   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:44:16.528754   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0318 11:44:19.670864   13008 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0318 11:44:19.670953   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:44:19.673909   13008 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube3/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.32.1-1710520390-17991-amd64.iso...
	I0318 11:44:20.054081   13008 main.go:141] libmachine: Creating SSH key...
	I0318 11:44:20.145667   13008 main.go:141] libmachine: Creating VM...
	I0318 11:44:20.145667   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0318 11:44:22.708445   13008 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0318 11:44:22.719546   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:44:22.719546   13008 main.go:141] libmachine: Using switch "Default Switch"
	I0318 11:44:22.719546   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0318 11:44:24.223539   13008 main.go:141] libmachine: [stdout =====>] : True
	
	I0318 11:44:24.230902   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:44:24.230902   13008 main.go:141] libmachine: Creating VHD
	I0318 11:44:24.230902   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-747000\fixed.vhd' -SizeBytes 10MB -Fixed
	I0318 11:44:27.618811   13008 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube3
	Path                    : C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-747000\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 87226964-6452-40BB-8CCF-F54D1DA7593F
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0318 11:44:27.618811   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:44:27.630006   13008 main.go:141] libmachine: Writing magic tar header
	I0318 11:44:27.630006   13008 main.go:141] libmachine: Writing SSH key tar header
	I0318 11:44:27.638679   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-747000\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-747000\disk.vhd' -VHDType Dynamic -DeleteSource
	I0318 11:44:30.588703   13008 main.go:141] libmachine: [stdout =====>] : 
	I0318 11:44:30.589017   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:44:30.589287   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-747000\disk.vhd' -SizeBytes 20000MB
	I0318 11:44:32.866741   13008 main.go:141] libmachine: [stdout =====>] : 
	I0318 11:44:32.866741   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:44:32.877615   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-747000 -Path 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-747000' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0318 11:44:36.184208   13008 main.go:141] libmachine: [stdout =====>] : 
	Name      State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----      ----- ----------- ----------------- ------   ------             -------
	ha-747000 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0318 11:44:36.184208   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:44:36.184208   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-747000 -DynamicMemoryEnabled $false
	I0318 11:44:38.135542   13008 main.go:141] libmachine: [stdout =====>] : 
	I0318 11:44:38.135542   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:44:38.146281   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-747000 -Count 2
	I0318 11:44:40.079602   13008 main.go:141] libmachine: [stdout =====>] : 
	I0318 11:44:40.079602   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:44:40.079602   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-747000 -Path 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-747000\boot2docker.iso'
	I0318 11:44:42.364784   13008 main.go:141] libmachine: [stdout =====>] : 
	I0318 11:44:42.376312   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:44:42.376312   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-747000 -Path 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-747000\disk.vhd'
	I0318 11:44:44.689877   13008 main.go:141] libmachine: [stdout =====>] : 
	I0318 11:44:44.689877   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:44:44.700815   13008 main.go:141] libmachine: Starting VM...
	I0318 11:44:44.700815   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-747000
	I0318 11:44:47.538271   13008 main.go:141] libmachine: [stdout =====>] : 
	I0318 11:44:47.541027   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:44:47.541027   13008 main.go:141] libmachine: Waiting for host to start...
	I0318 11:44:47.541130   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-747000 ).state
	I0318 11:44:49.572738   13008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:44:49.572738   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:44:49.579498   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-747000 ).networkadapters[0]).ipaddresses[0]
	I0318 11:44:51.850944   13008 main.go:141] libmachine: [stdout =====>] : 
	I0318 11:44:51.850944   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:44:52.859256   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-747000 ).state
	I0318 11:44:54.900759   13008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:44:54.907213   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:44:54.907213   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-747000 ).networkadapters[0]).ipaddresses[0]
	I0318 11:44:57.234254   13008 main.go:141] libmachine: [stdout =====>] : 
	I0318 11:44:57.234312   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:44:58.248759   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-747000 ).state
	I0318 11:45:00.276076   13008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:45:00.276199   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:45:00.276335   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-747000 ).networkadapters[0]).ipaddresses[0]
	I0318 11:45:02.673181   13008 main.go:141] libmachine: [stdout =====>] : 
	I0318 11:45:02.673419   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:45:03.679813   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-747000 ).state
	I0318 11:45:05.696960   13008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:45:05.701490   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:45:05.701490   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-747000 ).networkadapters[0]).ipaddresses[0]
	I0318 11:45:08.050571   13008 main.go:141] libmachine: [stdout =====>] : 
	I0318 11:45:08.050571   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:45:09.061208   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-747000 ).state
	I0318 11:45:11.088085   13008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:45:11.088162   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:45:11.088162   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-747000 ).networkadapters[0]).ipaddresses[0]
	I0318 11:45:13.349614   13008 main.go:141] libmachine: [stdout =====>] : 172.30.135.65
	
	I0318 11:45:13.349614   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:45:13.349614   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-747000 ).state
	I0318 11:45:15.192650   13008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:45:15.192737   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:45:15.192737   13008 machine.go:94] provisionDockerMachine start ...
	I0318 11:45:15.192737   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-747000 ).state
	I0318 11:45:17.160963   13008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:45:17.171920   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:45:17.171920   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-747000 ).networkadapters[0]).ipaddresses[0]
	I0318 11:45:19.425519   13008 main.go:141] libmachine: [stdout =====>] : 172.30.135.65
	
	I0318 11:45:19.425519   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:45:19.442199   13008 main.go:141] libmachine: Using SSH client type: native
	I0318 11:45:19.449110   13008 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa89f80] 0xa8cb60 <nil>  [] 0s} 172.30.135.65 22 <nil> <nil>}
	I0318 11:45:19.449110   13008 main.go:141] libmachine: About to run SSH command:
	hostname
	I0318 11:45:19.566466   13008 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0318 11:45:19.566466   13008 buildroot.go:166] provisioning hostname "ha-747000"
	I0318 11:45:19.566466   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-747000 ).state
	I0318 11:45:21.478083   13008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:45:21.478083   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:45:21.478083   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-747000 ).networkadapters[0]).ipaddresses[0]
	I0318 11:45:23.705343   13008 main.go:141] libmachine: [stdout =====>] : 172.30.135.65
	
	I0318 11:45:23.716166   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:45:23.722292   13008 main.go:141] libmachine: Using SSH client type: native
	I0318 11:45:23.722292   13008 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa89f80] 0xa8cb60 <nil>  [] 0s} 172.30.135.65 22 <nil> <nil>}
	I0318 11:45:23.722292   13008 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-747000 && echo "ha-747000" | sudo tee /etc/hostname
	I0318 11:45:23.860797   13008 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-747000
	
	I0318 11:45:23.860797   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-747000 ).state
	I0318 11:45:25.744527   13008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:45:25.757345   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:45:25.757345   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-747000 ).networkadapters[0]).ipaddresses[0]
	I0318 11:45:28.017495   13008 main.go:141] libmachine: [stdout =====>] : 172.30.135.65
	
	I0318 11:45:28.027634   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:45:28.033238   13008 main.go:141] libmachine: Using SSH client type: native
	I0318 11:45:28.033442   13008 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa89f80] 0xa8cb60 <nil>  [] 0s} 172.30.135.65 22 <nil> <nil>}
	I0318 11:45:28.033442   13008 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-747000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-747000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-747000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0318 11:45:28.163753   13008 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0318 11:45:28.163753   13008 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube3\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube3\minikube-integration\.minikube}
	I0318 11:45:28.163753   13008 buildroot.go:174] setting up certificates
	I0318 11:45:28.163753   13008 provision.go:84] configureAuth start
	I0318 11:45:28.163753   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-747000 ).state
	I0318 11:45:30.027731   13008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:45:30.027731   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:45:30.027731   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-747000 ).networkadapters[0]).ipaddresses[0]
	I0318 11:45:32.280153   13008 main.go:141] libmachine: [stdout =====>] : 172.30.135.65
	
	I0318 11:45:32.280153   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:45:32.289594   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-747000 ).state
	I0318 11:45:34.182838   13008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:45:34.193025   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:45:34.193025   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-747000 ).networkadapters[0]).ipaddresses[0]
	I0318 11:45:36.463235   13008 main.go:141] libmachine: [stdout =====>] : 172.30.135.65
	
	I0318 11:45:36.463235   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:45:36.466513   13008 provision.go:143] copyHostCerts
	I0318 11:45:36.466662   13008 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube3\minikube-integration\.minikube/ca.pem
	I0318 11:45:36.466859   13008 exec_runner.go:144] found C:\Users\jenkins.minikube3\minikube-integration\.minikube/ca.pem, removing ...
	I0318 11:45:36.466859   13008 exec_runner.go:203] rm: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.pem
	I0318 11:45:36.467402   13008 exec_runner.go:151] cp: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube3\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0318 11:45:36.468119   13008 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube3\minikube-integration\.minikube/cert.pem
	I0318 11:45:36.468886   13008 exec_runner.go:144] found C:\Users\jenkins.minikube3\minikube-integration\.minikube/cert.pem, removing ...
	I0318 11:45:36.468886   13008 exec_runner.go:203] rm: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cert.pem
	I0318 11:45:36.469132   13008 exec_runner.go:151] cp: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube3\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0318 11:45:36.470097   13008 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube3\minikube-integration\.minikube/key.pem
	I0318 11:45:36.470934   13008 exec_runner.go:144] found C:\Users\jenkins.minikube3\minikube-integration\.minikube/key.pem, removing ...
	I0318 11:45:36.470934   13008 exec_runner.go:203] rm: C:\Users\jenkins.minikube3\minikube-integration\.minikube\key.pem
	I0318 11:45:36.471344   13008 exec_runner.go:151] cp: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube3\minikube-integration\.minikube/key.pem (1679 bytes)
	I0318 11:45:36.472363   13008 provision.go:117] generating server cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-747000 san=[127.0.0.1 172.30.135.65 ha-747000 localhost minikube]
	I0318 11:45:37.046899   13008 provision.go:177] copyRemoteCerts
	I0318 11:45:37.059060   13008 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0318 11:45:37.059060   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-747000 ).state
	I0318 11:45:38.904612   13008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:45:38.914249   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:45:38.914249   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-747000 ).networkadapters[0]).ipaddresses[0]
	I0318 11:45:41.130809   13008 main.go:141] libmachine: [stdout =====>] : 172.30.135.65
	
	I0318 11:45:41.130809   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:45:41.141594   13008 sshutil.go:53] new ssh client: &{IP:172.30.135.65 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-747000\id_rsa Username:docker}
	I0318 11:45:41.242209   13008 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.1831178s)
	I0318 11:45:41.242209   13008 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0318 11:45:41.242889   13008 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0318 11:45:41.277406   13008 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0318 11:45:41.277406   13008 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1196 bytes)
	I0318 11:45:41.316259   13008 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0318 11:45:41.316511   13008 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0318 11:45:41.356624   13008 provision.go:87] duration metric: took 13.1927742s to configureAuth
	I0318 11:45:41.356811   13008 buildroot.go:189] setting minikube options for container-runtime
	I0318 11:45:41.357355   13008 config.go:182] Loaded profile config "ha-747000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 11:45:41.357484   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-747000 ).state
	I0318 11:45:43.233188   13008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:45:43.242989   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:45:43.242989   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-747000 ).networkadapters[0]).ipaddresses[0]
	I0318 11:45:45.460315   13008 main.go:141] libmachine: [stdout =====>] : 172.30.135.65
	
	I0318 11:45:45.470734   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:45:45.475858   13008 main.go:141] libmachine: Using SSH client type: native
	I0318 11:45:45.475858   13008 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa89f80] 0xa8cb60 <nil>  [] 0s} 172.30.135.65 22 <nil> <nil>}
	I0318 11:45:45.475858   13008 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0318 11:45:45.600089   13008 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0318 11:45:45.600089   13008 buildroot.go:70] root file system type: tmpfs
	I0318 11:45:45.600089   13008 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0318 11:45:45.600089   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-747000 ).state
	I0318 11:45:47.414679   13008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:45:47.414679   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:45:47.426567   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-747000 ).networkadapters[0]).ipaddresses[0]
	I0318 11:45:49.680235   13008 main.go:141] libmachine: [stdout =====>] : 172.30.135.65
	
	I0318 11:45:49.680421   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:45:49.685834   13008 main.go:141] libmachine: Using SSH client type: native
	I0318 11:45:49.685834   13008 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa89f80] 0xa8cb60 <nil>  [] 0s} 172.30.135.65 22 <nil> <nil>}
	I0318 11:45:49.686416   13008 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0318 11:45:49.831730   13008 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0318 11:45:49.831730   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-747000 ).state
	I0318 11:45:51.716195   13008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:45:51.716195   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:45:51.726884   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-747000 ).networkadapters[0]).ipaddresses[0]
	I0318 11:45:53.992793   13008 main.go:141] libmachine: [stdout =====>] : 172.30.135.65
	
	I0318 11:45:53.992793   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:45:54.010416   13008 main.go:141] libmachine: Using SSH client type: native
	I0318 11:45:54.010663   13008 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa89f80] 0xa8cb60 <nil>  [] 0s} 172.30.135.65 22 <nil> <nil>}
	I0318 11:45:54.010663   13008 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0318 11:45:55.996206   13008 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0318 11:45:55.996273   13008 machine.go:97] duration metric: took 40.8032344s to provisionDockerMachine
	I0318 11:45:55.996273   13008 client.go:171] duration metric: took 1m44.1013705s to LocalClient.Create
	I0318 11:45:55.996331   13008 start.go:167] duration metric: took 1m44.1015789s to libmachine.API.Create "ha-747000"
	I0318 11:45:55.996331   13008 start.go:293] postStartSetup for "ha-747000" (driver="hyperv")
	I0318 11:45:55.996331   13008 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0318 11:45:56.007051   13008 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0318 11:45:56.007051   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-747000 ).state
	I0318 11:45:57.872522   13008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:45:57.882834   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:45:57.882834   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-747000 ).networkadapters[0]).ipaddresses[0]
	I0318 11:46:00.116571   13008 main.go:141] libmachine: [stdout =====>] : 172.30.135.65
	
	I0318 11:46:00.116571   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:46:00.122265   13008 sshutil.go:53] new ssh client: &{IP:172.30.135.65 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-747000\id_rsa Username:docker}
	I0318 11:46:00.222244   13008 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.2149519s)
	I0318 11:46:00.234801   13008 ssh_runner.go:195] Run: cat /etc/os-release
	I0318 11:46:00.241368   13008 info.go:137] Remote host: Buildroot 2023.02.9
	I0318 11:46:00.241639   13008 filesync.go:126] Scanning C:\Users\jenkins.minikube3\minikube-integration\.minikube\addons for local assets ...
	I0318 11:46:00.242182   13008 filesync.go:126] Scanning C:\Users\jenkins.minikube3\minikube-integration\.minikube\files for local assets ...
	I0318 11:46:00.243699   13008 filesync.go:149] local asset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\134242.pem -> 134242.pem in /etc/ssl/certs
	I0318 11:46:00.243804   13008 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\134242.pem -> /etc/ssl/certs/134242.pem
	I0318 11:46:00.254597   13008 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0318 11:46:00.270755   13008 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\134242.pem --> /etc/ssl/certs/134242.pem (1708 bytes)
	I0318 11:46:00.312647   13008 start.go:296] duration metric: took 4.3162839s for postStartSetup
	I0318 11:46:00.316829   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-747000 ).state
	I0318 11:46:02.184402   13008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:46:02.184611   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:46:02.184757   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-747000 ).networkadapters[0]).ipaddresses[0]
	I0318 11:46:04.442373   13008 main.go:141] libmachine: [stdout =====>] : 172.30.135.65
	
	I0318 11:46:04.442373   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:46:04.453229   13008 profile.go:142] Saving config to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-747000\config.json ...
	I0318 11:46:04.456240   13008 start.go:128] duration metric: took 1m52.5642822s to createHost
	I0318 11:46:04.456240   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-747000 ).state
	I0318 11:46:06.329235   13008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:46:06.329235   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:46:06.340563   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-747000 ).networkadapters[0]).ipaddresses[0]
	I0318 11:46:08.556116   13008 main.go:141] libmachine: [stdout =====>] : 172.30.135.65
	
	I0318 11:46:08.556177   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:46:08.563590   13008 main.go:141] libmachine: Using SSH client type: native
	I0318 11:46:08.564213   13008 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa89f80] 0xa8cb60 <nil>  [] 0s} 172.30.135.65 22 <nil> <nil>}
	I0318 11:46:08.564213   13008 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0318 11:46:08.682748   13008 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710762368.674503471
	
	I0318 11:46:08.682748   13008 fix.go:216] guest clock: 1710762368.674503471
	I0318 11:46:08.682748   13008 fix.go:229] Guest: 2024-03-18 11:46:08.674503471 +0000 UTC Remote: 2024-03-18 11:46:04.4562409 +0000 UTC m=+117.579389501 (delta=4.218262571s)
	I0318 11:46:08.682748   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-747000 ).state
	I0318 11:46:10.576168   13008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:46:10.589433   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:46:10.589586   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-747000 ).networkadapters[0]).ipaddresses[0]
	I0318 11:46:12.846986   13008 main.go:141] libmachine: [stdout =====>] : 172.30.135.65
	
	I0318 11:46:12.846986   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:46:12.853145   13008 main.go:141] libmachine: Using SSH client type: native
	I0318 11:46:12.854057   13008 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa89f80] 0xa8cb60 <nil>  [] 0s} 172.30.135.65 22 <nil> <nil>}
	I0318 11:46:12.854057   13008 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1710762368
	I0318 11:46:12.978087   13008 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Mar 18 11:46:08 UTC 2024
	
	I0318 11:46:12.978087   13008 fix.go:236] clock set: Mon Mar 18 11:46:08 UTC 2024
	 (err=<nil>)
	I0318 11:46:12.978087   13008 start.go:83] releasing machines lock for "ha-747000", held for 2m1.0869423s
	I0318 11:46:12.978626   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-747000 ).state
	I0318 11:46:14.840474   13008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:46:14.851795   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:46:14.851795   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-747000 ).networkadapters[0]).ipaddresses[0]
	I0318 11:46:17.198650   13008 main.go:141] libmachine: [stdout =====>] : 172.30.135.65
	
	I0318 11:46:17.211374   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:46:17.215261   13008 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0318 11:46:17.215398   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-747000 ).state
	I0318 11:46:17.227256   13008 ssh_runner.go:195] Run: cat /version.json
	I0318 11:46:17.227256   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-747000 ).state
	I0318 11:46:19.369466   13008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:46:19.369466   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:46:19.369466   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-747000 ).networkadapters[0]).ipaddresses[0]
	I0318 11:46:19.380992   13008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:46:19.380992   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:46:19.380992   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-747000 ).networkadapters[0]).ipaddresses[0]
	I0318 11:46:21.933180   13008 main.go:141] libmachine: [stdout =====>] : 172.30.135.65
	
	I0318 11:46:21.933180   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:46:21.945198   13008 sshutil.go:53] new ssh client: &{IP:172.30.135.65 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-747000\id_rsa Username:docker}
	I0318 11:46:21.963910   13008 main.go:141] libmachine: [stdout =====>] : 172.30.135.65
	
	I0318 11:46:21.964898   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:46:21.964995   13008 sshutil.go:53] new ssh client: &{IP:172.30.135.65 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-747000\id_rsa Username:docker}
	I0318 11:46:22.123390   13008 ssh_runner.go:235] Completed: cat /version.json: (4.896097s)
	I0318 11:46:22.123390   13008 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.9080232s)
	I0318 11:46:22.140172   13008 ssh_runner.go:195] Run: systemctl --version
	I0318 11:46:22.159231   13008 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0318 11:46:22.170015   13008 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0318 11:46:22.183662   13008 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0318 11:46:22.205768   13008 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0318 11:46:22.205768   13008 start.go:494] detecting cgroup driver to use...
	I0318 11:46:22.208933   13008 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0318 11:46:22.249951   13008 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0318 11:46:22.285649   13008 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0318 11:46:22.302648   13008 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0318 11:46:22.313875   13008 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0318 11:46:22.347924   13008 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0318 11:46:22.378024   13008 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0318 11:46:22.405948   13008 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0318 11:46:22.434194   13008 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0318 11:46:22.464641   13008 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0318 11:46:22.491983   13008 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0318 11:46:22.520957   13008 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0318 11:46:22.551613   13008 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 11:46:22.727855   13008 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0318 11:46:22.757512   13008 start.go:494] detecting cgroup driver to use...
	I0318 11:46:22.769010   13008 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0318 11:46:22.801357   13008 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0318 11:46:22.833299   13008 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0318 11:46:22.880005   13008 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0318 11:46:22.915002   13008 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0318 11:46:22.945885   13008 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0318 11:46:23.011251   13008 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0318 11:46:23.030827   13008 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0318 11:46:23.074737   13008 ssh_runner.go:195] Run: which cri-dockerd
	I0318 11:46:23.093090   13008 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0318 11:46:23.108462   13008 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0318 11:46:23.149215   13008 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0318 11:46:23.330658   13008 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0318 11:46:23.481933   13008 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0318 11:46:23.482045   13008 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0318 11:46:23.520424   13008 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 11:46:23.671160   13008 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0318 11:46:26.123977   13008 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.4505835s)
	I0318 11:46:26.138369   13008 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0318 11:46:26.169524   13008 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0318 11:46:26.205440   13008 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0318 11:46:26.367415   13008 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0318 11:46:26.547310   13008 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 11:46:26.704491   13008 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0318 11:46:26.741226   13008 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0318 11:46:26.772981   13008 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 11:46:26.945720   13008 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0318 11:46:27.037532   13008 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0318 11:46:27.048854   13008 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0318 11:46:27.059397   13008 start.go:562] Will wait 60s for crictl version
	I0318 11:46:27.070692   13008 ssh_runner.go:195] Run: which crictl
	I0318 11:46:27.090177   13008 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0318 11:46:27.152133   13008 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  25.0.4
	RuntimeApiVersion:  v1
	I0318 11:46:27.163143   13008 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0318 11:46:27.211966   13008 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0318 11:46:27.242452   13008 out.go:204] * Preparing Kubernetes v1.28.4 on Docker 25.0.4 ...
	I0318 11:46:27.243048   13008 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0318 11:46:27.247444   13008 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0318 11:46:27.247444   13008 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0318 11:46:27.247444   13008 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0318 11:46:27.247444   13008 ip.go:207] Found interface: {Index:9 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:93:df:b5 Flags:up|broadcast|multicast|running}
	I0318 11:46:27.249700   13008 ip.go:210] interface addr: fe80::572b:6b1d:9130:e88b/64
	I0318 11:46:27.250778   13008 ip.go:210] interface addr: 172.30.128.1/20
	I0318 11:46:27.262066   13008 ssh_runner.go:195] Run: grep 172.30.128.1	host.minikube.internal$ /etc/hosts
	I0318 11:46:27.262851   13008 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.30.128.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 11:46:27.300758   13008 kubeadm.go:877] updating cluster {Name:ha-747000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4
ClusterName:ha-747000 Namespace:default APIServerHAVIP:172.30.143.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.30.135.65 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOption
s:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0318 11:46:27.300758   13008 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0318 11:46:27.306184   13008 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0318 11:46:27.333416   13008 docker.go:685] Got preloaded images: 
	I0318 11:46:27.333506   13008 docker.go:691] registry.k8s.io/kube-apiserver:v1.28.4 wasn't preloaded
	I0318 11:46:27.345750   13008 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0318 11:46:27.375431   13008 ssh_runner.go:195] Run: which lz4
	I0318 11:46:27.381639   13008 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0318 11:46:27.392862   13008 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0318 11:46:27.398844   13008 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0318 11:46:27.399052   13008 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (423165415 bytes)
	I0318 11:46:29.670065   13008 docker.go:649] duration metric: took 2.2883084s to copy over tarball
	I0318 11:46:29.689407   13008 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0318 11:46:39.977608   13008 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (10.2881249s)
	I0318 11:46:39.977608   13008 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0318 11:46:40.047050   13008 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0318 11:46:40.065718   13008 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2629 bytes)
	I0318 11:46:40.105195   13008 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 11:46:40.288361   13008 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0318 11:46:43.428707   13008 ssh_runner.go:235] Completed: sudo systemctl restart docker: (3.1403226s)
	I0318 11:46:43.440165   13008 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0318 11:46:43.463294   13008 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.4
	registry.k8s.io/kube-proxy:v1.28.4
	registry.k8s.io/kube-controller-manager:v1.28.4
	registry.k8s.io/kube-scheduler:v1.28.4
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0318 11:46:43.463294   13008 cache_images.go:84] Images are preloaded, skipping loading
	I0318 11:46:43.463294   13008 kubeadm.go:928] updating node { 172.30.135.65 8443 v1.28.4 docker true true} ...
	I0318 11:46:43.463294   13008 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-747000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.30.135.65
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:ha-747000 Namespace:default APIServerHAVIP:172.30.143.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0318 11:46:43.475511   13008 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0318 11:46:43.510039   13008 cni.go:84] Creating CNI manager for ""
	I0318 11:46:43.510039   13008 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0318 11:46:43.510039   13008 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0318 11:46:43.510039   13008 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.30.135.65 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-747000 NodeName:ha-747000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.30.135.65"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.30.135.65 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes
/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0318 11:46:43.510039   13008 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.30.135.65
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-747000"
	  kubeletExtraArgs:
	    node-ip: 172.30.135.65
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.30.135.65"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0318 11:46:43.510039   13008 kube-vip.go:111] generating kube-vip config ...
	I0318 11:46:43.521535   13008 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0318 11:46:43.544280   13008 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0318 11:46:43.546837   13008 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.30.143.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0318 11:46:43.558214   13008 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0318 11:46:43.571000   13008 binaries.go:44] Found k8s binaries, skipping transfer
	I0318 11:46:43.583638   13008 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0318 11:46:43.601075   13008 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0318 11:46:43.628138   13008 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0318 11:46:43.653641   13008 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2154 bytes)
	I0318 11:46:43.678284   13008 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0318 11:46:43.716355   13008 ssh_runner.go:195] Run: grep 172.30.143.254	control-plane.minikube.internal$ /etc/hosts
	I0318 11:46:43.718897   13008 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.30.143.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
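The `/etc/hosts` rewrite above uses a filter-then-append idiom (`{ grep -v …; echo …; } > tmp; cp tmp target`) so that repeated runs leave exactly one mapping for the control-plane name. A minimal sketch of the same idiom against a scratch file (the file path and the `10.0.0.5` stale entry are illustrative, not from the log):

```shell
# Rewrite a hosts-style file so exactly one line maps the control-plane
# name, mirroring minikube's grep-filter-and-append update above.
HOSTS=$(mktemp)
printf '127.0.0.1\tlocalhost\n10.0.0.5\tcontrol-plane.minikube.internal\n' > "$HOSTS"

NEW_IP=172.30.143.254
{ grep -v $'\tcontrol-plane.minikube.internal$' "$HOSTS"; \
  printf '%s\tcontrol-plane.minikube.internal\n' "$NEW_IP"; } > "$HOSTS.new"
mv "$HOSTS.new" "$HOSTS"

cat "$HOSTS"
```

Running it twice produces the same file, which is why minikube can apply it unconditionally on every start.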
	I0318 11:46:43.752058   13008 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 11:46:43.902848   13008 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 11:46:43.928902   13008 certs.go:68] Setting up C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-747000 for IP: 172.30.135.65
	I0318 11:46:43.928960   13008 certs.go:194] generating shared ca certs ...
	I0318 11:46:43.929033   13008 certs.go:226] acquiring lock for ca certs: {Name:mk09ff4ada22228900e1815c250154c7d8d76854 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 11:46:43.929587   13008 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.key
	I0318 11:46:43.930209   13008 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.key
	I0318 11:46:43.930536   13008 certs.go:256] generating profile certs ...
	I0318 11:46:43.931127   13008 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-747000\client.key
	I0318 11:46:43.931127   13008 crypto.go:68] Generating cert C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-747000\client.crt with IP's: []
	I0318 11:46:44.147957   13008 crypto.go:156] Writing cert to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-747000\client.crt ...
	I0318 11:46:44.147957   13008 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-747000\client.crt: {Name:mkf0926a62a72687e3478d66ecfd2d91c0286649 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 11:46:44.152355   13008 crypto.go:164] Writing key to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-747000\client.key ...
	I0318 11:46:44.152355   13008 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-747000\client.key: {Name:mk2825fe1c54fbd811fb2ec7fcf9ab33d1c62f84 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 11:46:44.154003   13008 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-747000\apiserver.key.8170f993
	I0318 11:46:44.155189   13008 crypto.go:68] Generating cert C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-747000\apiserver.crt.8170f993 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.30.135.65 172.30.143.254]
	I0318 11:46:44.437644   13008 crypto.go:156] Writing cert to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-747000\apiserver.crt.8170f993 ...
	I0318 11:46:44.437644   13008 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-747000\apiserver.crt.8170f993: {Name:mkcc8019f72c1787cfa5e3023e126ce259846293 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 11:46:44.439558   13008 crypto.go:164] Writing key to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-747000\apiserver.key.8170f993 ...
	I0318 11:46:44.439558   13008 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-747000\apiserver.key.8170f993: {Name:mk57b087c93ad949bc4139008701ca0f8eb1f41e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 11:46:44.440918   13008 certs.go:381] copying C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-747000\apiserver.crt.8170f993 -> C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-747000\apiserver.crt
	I0318 11:46:44.448110   13008 certs.go:385] copying C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-747000\apiserver.key.8170f993 -> C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-747000\apiserver.key
	I0318 11:46:44.452045   13008 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-747000\proxy-client.key
	I0318 11:46:44.452045   13008 crypto.go:68] Generating cert C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-747000\proxy-client.crt with IP's: []
	I0318 11:46:44.609396   13008 crypto.go:156] Writing cert to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-747000\proxy-client.crt ...
	I0318 11:46:44.609396   13008 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-747000\proxy-client.crt: {Name:mk4475051e95a4a9524d668d10fde55f3898dcd7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 11:46:44.612089   13008 crypto.go:164] Writing key to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-747000\proxy-client.key ...
	I0318 11:46:44.612089   13008 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-747000\proxy-client.key: {Name:mkb9511d07ddcce77bfc37ffba5d0139c7c0cdb4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 11:46:44.613465   13008 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0318 11:46:44.614480   13008 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0318 11:46:44.614656   13008 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0318 11:46:44.614656   13008 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0318 11:46:44.614656   13008 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-747000\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0318 11:46:44.614656   13008 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-747000\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0318 11:46:44.615223   13008 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-747000\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0318 11:46:44.619704   13008 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-747000\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0318 11:46:44.630171   13008 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\13424.pem (1338 bytes)
	W0318 11:46:44.630171   13008 certs.go:480] ignoring C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\13424_empty.pem, impossibly tiny 0 bytes
	I0318 11:46:44.630171   13008 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0318 11:46:44.630910   13008 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0318 11:46:44.630910   13008 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0318 11:46:44.630910   13008 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0318 11:46:44.631920   13008 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\134242.pem (1708 bytes)
	I0318 11:46:44.632269   13008 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\13424.pem -> /usr/share/ca-certificates/13424.pem
	I0318 11:46:44.632269   13008 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\134242.pem -> /usr/share/ca-certificates/134242.pem
	I0318 11:46:44.632269   13008 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0318 11:46:44.633986   13008 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0318 11:46:44.672830   13008 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0318 11:46:44.709674   13008 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0318 11:46:44.750028   13008 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0318 11:46:44.788584   13008 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-747000\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0318 11:46:44.827021   13008 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-747000\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0318 11:46:44.864525   13008 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-747000\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0318 11:46:44.900890   13008 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-747000\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0318 11:46:44.936552   13008 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\13424.pem --> /usr/share/ca-certificates/13424.pem (1338 bytes)
	I0318 11:46:44.974197   13008 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\134242.pem --> /usr/share/ca-certificates/134242.pem (1708 bytes)
	I0318 11:46:45.007962   13008 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0318 11:46:45.056567   13008 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0318 11:46:45.104189   13008 ssh_runner.go:195] Run: openssl version
	I0318 11:46:45.125917   13008 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/134242.pem && ln -fs /usr/share/ca-certificates/134242.pem /etc/ssl/certs/134242.pem"
	I0318 11:46:45.157407   13008 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/134242.pem
	I0318 11:46:45.165942   13008 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 18 11:25 /usr/share/ca-certificates/134242.pem
	I0318 11:46:45.178153   13008 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/134242.pem
	I0318 11:46:45.198094   13008 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/134242.pem /etc/ssl/certs/3ec20f2e.0"
	I0318 11:46:45.227910   13008 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0318 11:46:45.254850   13008 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0318 11:46:45.257890   13008 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 18 11:07 /usr/share/ca-certificates/minikubeCA.pem
	I0318 11:46:45.263587   13008 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0318 11:46:45.290539   13008 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0318 11:46:45.321016   13008 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13424.pem && ln -fs /usr/share/ca-certificates/13424.pem /etc/ssl/certs/13424.pem"
	I0318 11:46:45.350958   13008 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13424.pem
	I0318 11:46:45.359394   13008 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 18 11:25 /usr/share/ca-certificates/13424.pem
	I0318 11:46:45.372307   13008 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13424.pem
	I0318 11:46:45.393477   13008 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13424.pem /etc/ssl/certs/51391683.0"
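The `openssl x509 -hash` runs above compute the subject-name hash that OpenSSL uses to name trust-store symlinks such as `/etc/ssl/certs/b5213941.0` in the subsequent `ln -fs` commands. A sketch with a throwaway self-signed certificate (the `/CN=example-ca` subject and temp paths are illustrative):

```shell
# Generate a throwaway self-signed cert, then derive the subject hash
# OpenSSL would use to name its /etc/ssl/certs/<hash>.0 symlink.
TMP=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=example-ca" \
  -keyout "$TMP/ca.key" -out "$TMP/ca.crt" 2>/dev/null

HASH=$(openssl x509 -hash -noout -in "$TMP/ca.crt")
echo "symlink would be /etc/ssl/certs/$HASH.0"
```

The hash is always eight lowercase hex digits, which is why the log's symlink names (`3ec20f2e.0`, `b5213941.0`, `51391683.0`) all share that shape.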
	I0318 11:46:45.421369   13008 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0318 11:46:45.426798   13008 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0318 11:46:45.426798   13008 kubeadm.go:391] StartCluster: {Name:ha-747000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-747000 Namespace:default APIServerHAVIP:172.30.143.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.30.135.65 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 11:46:45.437183   13008 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0318 11:46:45.472976   13008 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0318 11:46:45.498331   13008 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0318 11:46:45.525240   13008 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 11:46:45.540517   13008 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 11:46:45.540517   13008 kubeadm.go:156] found existing configuration files:
	
	I0318 11:46:45.551001   13008 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0318 11:46:45.565933   13008 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 11:46:45.575791   13008 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 11:46:45.604680   13008 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0318 11:46:45.618892   13008 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 11:46:45.630737   13008 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 11:46:45.656630   13008 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0318 11:46:45.670944   13008 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 11:46:45.682841   13008 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 11:46:45.711131   13008 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0318 11:46:45.712984   13008 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 11:46:45.740301   13008 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0318 11:46:45.755587   13008 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0318 11:46:46.166066   13008 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0318 11:46:58.721146   13008 kubeadm.go:309] [init] Using Kubernetes version: v1.28.4
	I0318 11:46:58.721352   13008 kubeadm.go:309] [preflight] Running pre-flight checks
	I0318 11:46:58.721597   13008 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0318 11:46:58.721844   13008 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0318 11:46:58.722194   13008 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0318 11:46:58.722353   13008 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0318 11:46:58.728031   13008 out.go:204]   - Generating certificates and keys ...
	I0318 11:46:58.728587   13008 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0318 11:46:58.728721   13008 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0318 11:46:58.728721   13008 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0318 11:46:58.728721   13008 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0318 11:46:58.729264   13008 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0318 11:46:58.729376   13008 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0318 11:46:58.729376   13008 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0318 11:46:58.729376   13008 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [ha-747000 localhost] and IPs [172.30.135.65 127.0.0.1 ::1]
	I0318 11:46:58.729934   13008 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0318 11:46:58.729989   13008 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [ha-747000 localhost] and IPs [172.30.135.65 127.0.0.1 ::1]
	I0318 11:46:58.729989   13008 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0318 11:46:58.729989   13008 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0318 11:46:58.729989   13008 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0318 11:46:58.729989   13008 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0318 11:46:58.729989   13008 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0318 11:46:58.729989   13008 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0318 11:46:58.729989   13008 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0318 11:46:58.731072   13008 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0318 11:46:58.731204   13008 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0318 11:46:58.731204   13008 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0318 11:46:58.733932   13008 out.go:204]   - Booting up control plane ...
	I0318 11:46:58.733932   13008 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0318 11:46:58.733932   13008 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0318 11:46:58.733932   13008 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0318 11:46:58.733932   13008 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0318 11:46:58.733932   13008 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0318 11:46:58.735049   13008 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0318 11:46:58.735049   13008 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0318 11:46:58.735605   13008 kubeadm.go:309] [apiclient] All control plane components are healthy after 7.584491 seconds
	I0318 11:46:58.735738   13008 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0318 11:46:58.735738   13008 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0318 11:46:58.735738   13008 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0318 11:46:58.735738   13008 kubeadm.go:309] [mark-control-plane] Marking the node ha-747000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0318 11:46:58.735738   13008 kubeadm.go:309] [bootstrap-token] Using token: 9x06m0.cfr1err7b224rbom
	I0318 11:46:58.739324   13008 out.go:204]   - Configuring RBAC rules ...
	I0318 11:46:58.739324   13008 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0318 11:46:58.739324   13008 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0318 11:46:58.739324   13008 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0318 11:46:58.740942   13008 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0318 11:46:58.741015   13008 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0318 11:46:58.741015   13008 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0318 11:46:58.741015   13008 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0318 11:46:58.741015   13008 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0318 11:46:58.741015   13008 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0318 11:46:58.741015   13008 kubeadm.go:309] 
	I0318 11:46:58.741015   13008 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0318 11:46:58.741015   13008 kubeadm.go:309] 
	I0318 11:46:58.741015   13008 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0318 11:46:58.741015   13008 kubeadm.go:309] 
	I0318 11:46:58.741015   13008 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0318 11:46:58.741015   13008 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0318 11:46:58.741015   13008 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0318 11:46:58.741015   13008 kubeadm.go:309] 
	I0318 11:46:58.741015   13008 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0318 11:46:58.741015   13008 kubeadm.go:309] 
	I0318 11:46:58.741015   13008 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0318 11:46:58.741015   13008 kubeadm.go:309] 
	I0318 11:46:58.741015   13008 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0318 11:46:58.741015   13008 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0318 11:46:58.741015   13008 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0318 11:46:58.741015   13008 kubeadm.go:309] 
	I0318 11:46:58.741015   13008 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0318 11:46:58.741015   13008 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0318 11:46:58.741015   13008 kubeadm.go:309] 
	I0318 11:46:58.741015   13008 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 9x06m0.cfr1err7b224rbom \
	I0318 11:46:58.741015   13008 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:0e85bffcabbe1ecc87b47edfa4897ded335ea6d7a4ea5ae94c3523fb207c8760 \
	I0318 11:46:58.741015   13008 kubeadm.go:309] 	--control-plane 
	I0318 11:46:58.741015   13008 kubeadm.go:309] 
	I0318 11:46:58.741015   13008 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0318 11:46:58.741015   13008 kubeadm.go:309] 
	I0318 11:46:58.741015   13008 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 9x06m0.cfr1err7b224rbom \
	I0318 11:46:58.741015   13008 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:0e85bffcabbe1ecc87b47edfa4897ded335ea6d7a4ea5ae94c3523fb207c8760 
	I0318 11:46:58.741015   13008 cni.go:84] Creating CNI manager for ""
	I0318 11:46:58.741015   13008 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0318 11:46:58.747645   13008 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0318 11:46:58.765701   13008 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0318 11:46:58.777338   13008 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0318 11:46:58.777403   13008 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0318 11:46:58.839563   13008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0318 11:46:59.996446   13008 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.156875s)
	I0318 11:46:59.996677   13008 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0318 11:47:00.012090   13008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-747000 minikube.k8s.io/updated_at=2024_03_18T11_46_59_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=16bdcbec856cf730004e5bed78d1b7625f13388a minikube.k8s.io/name=ha-747000 minikube.k8s.io/primary=true
	I0318 11:47:00.013236   13008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 11:47:00.043196   13008 ops.go:34] apiserver oom_adj: -16
	I0318 11:47:00.267514   13008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 11:47:00.777611   13008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 11:47:01.267098   13008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 11:47:01.768841   13008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 11:47:02.272827   13008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 11:47:02.789245   13008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 11:47:03.278739   13008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 11:47:03.771564   13008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 11:47:04.273472   13008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 11:47:04.773277   13008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 11:47:05.265686   13008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 11:47:05.782806   13008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 11:47:06.264657   13008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 11:47:06.770106   13008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 11:47:07.279524   13008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 11:47:07.773161   13008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 11:47:08.265477   13008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 11:47:08.765239   13008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 11:47:09.274047   13008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 11:47:09.771743   13008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 11:47:10.295450   13008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 11:47:10.770812   13008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 11:47:11.278601   13008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 11:47:11.773069   13008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 11:47:11.948247   13008 kubeadm.go:1107] duration metric: took 11.9514063s to wait for elevateKubeSystemPrivileges
	W0318 11:47:11.948247   13008 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0318 11:47:11.948247   13008 kubeadm.go:393] duration metric: took 26.5212528s to StartCluster
	I0318 11:47:11.948247   13008 settings.go:142] acquiring lock: {Name:mke99fb8c09012609ce6804e7dfd4d68f5541df7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 11:47:11.948247   13008 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	I0318 11:47:11.950074   13008 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\kubeconfig: {Name:mk966a7640504e03827322930a51a762b5508893 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 11:47:11.951355   13008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0318 11:47:11.951355   13008 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0318 11:47:11.951355   13008 start.go:232] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:172.30.135.65 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 11:47:11.951355   13008 start.go:240] waiting for startup goroutines ...
	I0318 11:47:11.951355   13008 addons.go:69] Setting storage-provisioner=true in profile "ha-747000"
	I0318 11:47:11.951355   13008 addons.go:234] Setting addon storage-provisioner=true in "ha-747000"
	I0318 11:47:11.951355   13008 host.go:66] Checking if "ha-747000" exists ...
	I0318 11:47:11.951355   13008 addons.go:69] Setting default-storageclass=true in profile "ha-747000"
	I0318 11:47:11.951355   13008 config.go:182] Loaded profile config "ha-747000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 11:47:11.951355   13008 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-747000"
	I0318 11:47:11.952885   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-747000 ).state
	I0318 11:47:11.952885   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-747000 ).state
	I0318 11:47:12.100580   13008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.30.128.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0318 11:47:12.791374   13008 start.go:948] {"host.minikube.internal": 172.30.128.1} host record injected into CoreDNS's ConfigMap
	I0318 11:47:14.185805   13008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:47:14.191094   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:47:14.192015   13008 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	I0318 11:47:14.192783   13008 kapi.go:59] client config for ha-747000: &rest.Config{Host:"https://172.30.143.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\ha-747000\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\ha-747000\\client.key", CAFile:"C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), Ne
xtProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1e8b2e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0318 11:47:14.194231   13008 cert_rotation.go:137] Starting client certificate rotation controller
	I0318 11:47:14.194283   13008 addons.go:234] Setting addon default-storageclass=true in "ha-747000"
	I0318 11:47:14.194283   13008 host.go:66] Checking if "ha-747000" exists ...
	I0318 11:47:14.195859   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-747000 ).state
	I0318 11:47:14.424311   13008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:47:14.431023   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:47:14.435126   13008 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 11:47:14.437925   13008 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0318 11:47:14.437925   13008 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0318 11:47:14.437925   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-747000 ).state
	I0318 11:47:16.289811   13008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:47:16.299233   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:47:16.299297   13008 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0318 11:47:16.299297   13008 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0318 11:47:16.299297   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-747000 ).state
	I0318 11:47:16.521709   13008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:47:16.521709   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:47:16.534246   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-747000 ).networkadapters[0]).ipaddresses[0]
	I0318 11:47:18.454594   13008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:47:18.454594   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:47:18.456008   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-747000 ).networkadapters[0]).ipaddresses[0]
	I0318 11:47:19.192429   13008 main.go:141] libmachine: [stdout =====>] : 172.30.135.65
	
	I0318 11:47:19.192429   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:47:19.192429   13008 sshutil.go:53] new ssh client: &{IP:172.30.135.65 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-747000\id_rsa Username:docker}
	I0318 11:47:19.322928   13008 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0318 11:47:20.988781   13008 main.go:141] libmachine: [stdout =====>] : 172.30.135.65
	
	I0318 11:47:20.999854   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:47:21.000055   13008 sshutil.go:53] new ssh client: &{IP:172.30.135.65 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-747000\id_rsa Username:docker}
	I0318 11:47:21.128921   13008 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0318 11:47:21.370845   13008 round_trippers.go:463] GET https://172.30.143.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0318 11:47:21.370931   13008 round_trippers.go:469] Request Headers:
	I0318 11:47:21.370931   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:47:21.371016   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:47:21.386557   13008 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I0318 11:47:21.388813   13008 round_trippers.go:463] PUT https://172.30.143.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0318 11:47:21.388856   13008 round_trippers.go:469] Request Headers:
	I0318 11:47:21.388903   13008 round_trippers.go:473]     Content-Type: application/json
	I0318 11:47:21.388903   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:47:21.388964   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:47:21.396207   13008 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0318 11:47:21.401037   13008 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0318 11:47:21.403388   13008 addons.go:505] duration metric: took 9.4519636s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0318 11:47:21.403966   13008 start.go:245] waiting for cluster config update ...
	I0318 11:47:21.403966   13008 start.go:254] writing updated cluster config ...
	I0318 11:47:21.406330   13008 out.go:177] 
	I0318 11:47:21.418274   13008 config.go:182] Loaded profile config "ha-747000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 11:47:21.418274   13008 profile.go:142] Saving config to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-747000\config.json ...
	I0318 11:47:21.419043   13008 out.go:177] * Starting "ha-747000-m02" control-plane node in "ha-747000" cluster
	I0318 11:47:21.424695   13008 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0318 11:47:21.424695   13008 cache.go:56] Caching tarball of preloaded images
	I0318 11:47:21.427674   13008 preload.go:173] Found C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0318 11:47:21.427937   13008 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0318 11:47:21.427937   13008 profile.go:142] Saving config to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-747000\config.json ...
	I0318 11:47:21.430331   13008 start.go:360] acquireMachinesLock for ha-747000-m02: {Name:mk88ace50ad3bf72786f3a589a5328076247f3a1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 11:47:21.432041   13008 start.go:364] duration metric: took 1.7102ms to acquireMachinesLock for "ha-747000-m02"
	I0318 11:47:21.432238   13008 start.go:93] Provisioning new machine with config: &{Name:ha-747000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuberne
tesVersion:v1.28.4 ClusterName:ha-747000 Namespace:default APIServerHAVIP:172.30.143.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.30.135.65 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisk
s:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 11:47:21.432238   13008 start.go:125] createHost starting for "m02" (driver="hyperv")
	I0318 11:47:21.433784   13008 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0318 11:47:21.433784   13008 start.go:159] libmachine.API.Create for "ha-747000" (driver="hyperv")
	I0318 11:47:21.433784   13008 client.go:168] LocalClient.Create starting
	I0318 11:47:21.433784   13008 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem
	I0318 11:47:21.439122   13008 main.go:141] libmachine: Decoding PEM data...
	I0318 11:47:21.439122   13008 main.go:141] libmachine: Parsing certificate...
	I0318 11:47:21.439313   13008 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem
	I0318 11:47:21.439562   13008 main.go:141] libmachine: Decoding PEM data...
	I0318 11:47:21.439562   13008 main.go:141] libmachine: Parsing certificate...
	I0318 11:47:21.439794   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0318 11:47:23.241743   13008 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0318 11:47:23.241743   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:47:23.245380   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0318 11:47:24.909850   13008 main.go:141] libmachine: [stdout =====>] : False
	
	I0318 11:47:24.909850   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:47:24.917979   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0318 11:47:26.312968   13008 main.go:141] libmachine: [stdout =====>] : True
	
	I0318 11:47:26.320652   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:47:26.320652   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0318 11:47:29.610445   13008 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0318 11:47:29.621431   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:47:29.623610   13008 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube3/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.32.1-1710520390-17991-amd64.iso...
	I0318 11:47:30.021927   13008 main.go:141] libmachine: Creating SSH key...
	I0318 11:47:30.119601   13008 main.go:141] libmachine: Creating VM...
	I0318 11:47:30.119601   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0318 11:47:32.829630   13008 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0318 11:47:32.829630   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:47:32.841311   13008 main.go:141] libmachine: Using switch "Default Switch"
	I0318 11:47:32.841462   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0318 11:47:34.456936   13008 main.go:141] libmachine: [stdout =====>] : True
	
	I0318 11:47:34.456936   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:47:34.456936   13008 main.go:141] libmachine: Creating VHD
	I0318 11:47:34.465658   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-747000-m02\fixed.vhd' -SizeBytes 10MB -Fixed
	I0318 11:47:37.929435   13008 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube3
	Path                    : C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-747000-m02\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 85777A93-61BC-4A57-AF64-27F6DBAE651E
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0318 11:47:37.939825   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:47:37.939825   13008 main.go:141] libmachine: Writing magic tar header
	I0318 11:47:37.939928   13008 main.go:141] libmachine: Writing SSH key tar header
	I0318 11:47:37.947351   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-747000-m02\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-747000-m02\disk.vhd' -VHDType Dynamic -DeleteSource
	I0318 11:47:40.935326   13008 main.go:141] libmachine: [stdout =====>] : 
	I0318 11:47:40.935326   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:47:40.935326   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-747000-m02\disk.vhd' -SizeBytes 20000MB
	I0318 11:47:43.287330   13008 main.go:141] libmachine: [stdout =====>] : 
	I0318 11:47:43.297272   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:47:43.297272   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-747000-m02 -Path 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-747000-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0318 11:47:46.626949   13008 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	ha-747000-m02 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0318 11:47:46.626949   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:47:46.626949   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-747000-m02 -DynamicMemoryEnabled $false
	I0318 11:47:48.663588   13008 main.go:141] libmachine: [stdout =====>] : 
	I0318 11:47:48.673992   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:47:48.673992   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-747000-m02 -Count 2
	I0318 11:47:50.655418   13008 main.go:141] libmachine: [stdout =====>] : 
	I0318 11:47:50.655418   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:47:50.655418   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-747000-m02 -Path 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-747000-m02\boot2docker.iso'
	I0318 11:47:53.022424   13008 main.go:141] libmachine: [stdout =====>] : 
	I0318 11:47:53.032384   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:47:53.032384   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-747000-m02 -Path 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-747000-m02\disk.vhd'
	I0318 11:47:55.436382   13008 main.go:141] libmachine: [stdout =====>] : 
	I0318 11:47:55.447458   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:47:55.447458   13008 main.go:141] libmachine: Starting VM...
	I0318 11:47:55.447554   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-747000-m02
	I0318 11:47:58.279520   13008 main.go:141] libmachine: [stdout =====>] : 
	I0318 11:47:58.279520   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:47:58.279520   13008 main.go:141] libmachine: Waiting for host to start...
	I0318 11:47:58.279520   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-747000-m02 ).state
	I0318 11:48:00.418030   13008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:48:00.418030   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:48:00.427738   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-747000-m02 ).networkadapters[0]).ipaddresses[0]
	I0318 11:48:02.893786   13008 main.go:141] libmachine: [stdout =====>] : 
	I0318 11:48:02.893880   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:48:03.901418   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-747000-m02 ).state
	I0318 11:48:05.971486   13008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:48:05.972349   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:48:05.972477   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-747000-m02 ).networkadapters[0]).ipaddresses[0]
	I0318 11:48:08.337044   13008 main.go:141] libmachine: [stdout =====>] : 
	I0318 11:48:08.348388   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:48:09.357755   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-747000-m02 ).state
	I0318 11:48:11.366189   13008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:48:11.366189   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:48:11.366189   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-747000-m02 ).networkadapters[0]).ipaddresses[0]
	I0318 11:48:13.807391   13008 main.go:141] libmachine: [stdout =====>] : 
	I0318 11:48:13.815174   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:48:14.816763   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-747000-m02 ).state
	I0318 11:48:16.942187   13008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:48:16.942249   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:48:16.942317   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-747000-m02 ).networkadapters[0]).ipaddresses[0]
	I0318 11:48:19.365653   13008 main.go:141] libmachine: [stdout =====>] : 
	I0318 11:48:19.365653   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:48:20.390660   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-747000-m02 ).state
	I0318 11:48:22.498634   13008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:48:22.510431   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:48:22.510431   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-747000-m02 ).networkadapters[0]).ipaddresses[0]
	I0318 11:48:24.898503   13008 main.go:141] libmachine: [stdout =====>] : 172.30.142.66
	
	I0318 11:48:24.898646   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:48:24.898851   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-747000-m02 ).state
	I0318 11:48:26.863828   13008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:48:26.863828   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:48:26.863828   13008 machine.go:94] provisionDockerMachine start ...
	I0318 11:48:26.863828   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-747000-m02 ).state
	I0318 11:48:28.876801   13008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:48:28.887527   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:48:28.887527   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-747000-m02 ).networkadapters[0]).ipaddresses[0]
	I0318 11:48:31.170180   13008 main.go:141] libmachine: [stdout =====>] : 172.30.142.66
	
	I0318 11:48:31.170180   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:48:31.176324   13008 main.go:141] libmachine: Using SSH client type: native
	I0318 11:48:31.186067   13008 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa89f80] 0xa8cb60 <nil>  [] 0s} 172.30.142.66 22 <nil> <nil>}
	I0318 11:48:31.186067   13008 main.go:141] libmachine: About to run SSH command:
	hostname
	I0318 11:48:31.314744   13008 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0318 11:48:31.314744   13008 buildroot.go:166] provisioning hostname "ha-747000-m02"
	I0318 11:48:31.314744   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-747000-m02 ).state
	I0318 11:48:33.296740   13008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:48:33.307821   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:48:33.307821   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-747000-m02 ).networkadapters[0]).ipaddresses[0]
	I0318 11:48:35.665966   13008 main.go:141] libmachine: [stdout =====>] : 172.30.142.66
	
	I0318 11:48:35.676076   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:48:35.681418   13008 main.go:141] libmachine: Using SSH client type: native
	I0318 11:48:35.681906   13008 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa89f80] 0xa8cb60 <nil>  [] 0s} 172.30.142.66 22 <nil> <nil>}
	I0318 11:48:35.682036   13008 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-747000-m02 && echo "ha-747000-m02" | sudo tee /etc/hostname
	I0318 11:48:35.834894   13008 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-747000-m02
	
	I0318 11:48:35.834969   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-747000-m02 ).state
	I0318 11:48:37.742111   13008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:48:37.753300   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:48:37.753300   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-747000-m02 ).networkadapters[0]).ipaddresses[0]
	I0318 11:48:40.051262   13008 main.go:141] libmachine: [stdout =====>] : 172.30.142.66
	
	I0318 11:48:40.051262   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:48:40.057320   13008 main.go:141] libmachine: Using SSH client type: native
	I0318 11:48:40.057320   13008 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa89f80] 0xa8cb60 <nil>  [] 0s} 172.30.142.66 22 <nil> <nil>}
	I0318 11:48:40.057320   13008 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-747000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-747000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-747000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0318 11:48:40.210357   13008 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0318 11:48:40.210357   13008 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube3\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube3\minikube-integration\.minikube}
	I0318 11:48:40.210357   13008 buildroot.go:174] setting up certificates
	I0318 11:48:40.210357   13008 provision.go:84] configureAuth start
	I0318 11:48:40.210357   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-747000-m02 ).state
	I0318 11:48:42.143728   13008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:48:42.143728   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:48:42.153928   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-747000-m02 ).networkadapters[0]).ipaddresses[0]
	I0318 11:48:44.466152   13008 main.go:141] libmachine: [stdout =====>] : 172.30.142.66
	
	I0318 11:48:44.466217   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:48:44.466273   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-747000-m02 ).state
	I0318 11:48:46.400086   13008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:48:46.400086   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:48:46.400086   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-747000-m02 ).networkadapters[0]).ipaddresses[0]
	I0318 11:48:48.741017   13008 main.go:141] libmachine: [stdout =====>] : 172.30.142.66
	
	I0318 11:48:48.741017   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:48:48.741017   13008 provision.go:143] copyHostCerts
	I0318 11:48:48.741017   13008 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube3\minikube-integration\.minikube/key.pem
	I0318 11:48:48.741017   13008 exec_runner.go:144] found C:\Users\jenkins.minikube3\minikube-integration\.minikube/key.pem, removing ...
	I0318 11:48:48.741017   13008 exec_runner.go:203] rm: C:\Users\jenkins.minikube3\minikube-integration\.minikube\key.pem
	I0318 11:48:48.741857   13008 exec_runner.go:151] cp: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube3\minikube-integration\.minikube/key.pem (1679 bytes)
	I0318 11:48:48.742578   13008 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube3\minikube-integration\.minikube/ca.pem
	I0318 11:48:48.743194   13008 exec_runner.go:144] found C:\Users\jenkins.minikube3\minikube-integration\.minikube/ca.pem, removing ...
	I0318 11:48:48.743194   13008 exec_runner.go:203] rm: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.pem
	I0318 11:48:48.743194   13008 exec_runner.go:151] cp: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube3\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0318 11:48:48.744449   13008 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube3\minikube-integration\.minikube/cert.pem
	I0318 11:48:48.744449   13008 exec_runner.go:144] found C:\Users\jenkins.minikube3\minikube-integration\.minikube/cert.pem, removing ...
	I0318 11:48:48.744449   13008 exec_runner.go:203] rm: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cert.pem
	I0318 11:48:48.745111   13008 exec_runner.go:151] cp: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube3\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0318 11:48:48.745979   13008 provision.go:117] generating server cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-747000-m02 san=[127.0.0.1 172.30.142.66 ha-747000-m02 localhost minikube]
	I0318 11:48:48.933220   13008 provision.go:177] copyRemoteCerts
	I0318 11:48:48.949629   13008 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0318 11:48:48.950236   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-747000-m02 ).state
	I0318 11:48:50.876492   13008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:48:50.876492   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:48:50.876492   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-747000-m02 ).networkadapters[0]).ipaddresses[0]
	I0318 11:48:53.206803   13008 main.go:141] libmachine: [stdout =====>] : 172.30.142.66
	
	I0318 11:48:53.217963   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:48:53.217963   13008 sshutil.go:53] new ssh client: &{IP:172.30.142.66 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-747000-m02\id_rsa Username:docker}
	I0318 11:48:53.320939   13008 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.3711978s)
	I0318 11:48:53.320939   13008 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0318 11:48:53.321210   13008 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0318 11:48:53.361596   13008 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0318 11:48:53.361596   13008 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0318 11:48:53.402212   13008 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0318 11:48:53.402212   13008 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0318 11:48:53.443056   13008 provision.go:87] duration metric: took 13.2326002s to configureAuth
	I0318 11:48:53.443056   13008 buildroot.go:189] setting minikube options for container-runtime
	I0318 11:48:53.443602   13008 config.go:182] Loaded profile config "ha-747000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 11:48:53.443602   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-747000-m02 ).state
	I0318 11:48:55.383567   13008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:48:55.383567   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:48:55.383567   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-747000-m02 ).networkadapters[0]).ipaddresses[0]
	I0318 11:48:57.688551   13008 main.go:141] libmachine: [stdout =====>] : 172.30.142.66
	
	I0318 11:48:57.698478   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:48:57.704346   13008 main.go:141] libmachine: Using SSH client type: native
	I0318 11:48:57.704922   13008 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa89f80] 0xa8cb60 <nil>  [] 0s} 172.30.142.66 22 <nil> <nil>}
	I0318 11:48:57.704922   13008 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0318 11:48:57.835932   13008 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0318 11:48:57.835932   13008 buildroot.go:70] root file system type: tmpfs
	I0318 11:48:57.835932   13008 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0318 11:48:57.836468   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-747000-m02 ).state
	I0318 11:48:59.788249   13008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:48:59.799303   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:48:59.799303   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-747000-m02 ).networkadapters[0]).ipaddresses[0]
	I0318 11:49:02.095164   13008 main.go:141] libmachine: [stdout =====>] : 172.30.142.66
	
	I0318 11:49:02.106334   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:49:02.111961   13008 main.go:141] libmachine: Using SSH client type: native
	I0318 11:49:02.112585   13008 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa89f80] 0xa8cb60 <nil>  [] 0s} 172.30.142.66 22 <nil> <nil>}
	I0318 11:49:02.112585   13008 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.30.135.65"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0318 11:49:02.267694   13008 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.30.135.65
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0318 11:49:02.267799   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-747000-m02 ).state
	I0318 11:49:04.200510   13008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:49:04.200510   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:49:04.200510   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-747000-m02 ).networkadapters[0]).ipaddresses[0]
	I0318 11:49:06.507794   13008 main.go:141] libmachine: [stdout =====>] : 172.30.142.66
	
	I0318 11:49:06.518671   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:49:06.523731   13008 main.go:141] libmachine: Using SSH client type: native
	I0318 11:49:06.524338   13008 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa89f80] 0xa8cb60 <nil>  [] 0s} 172.30.142.66 22 <nil> <nil>}
	I0318 11:49:06.524338   13008 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0318 11:49:08.574979   13008 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0318 11:49:08.574979   13008 machine.go:97] duration metric: took 41.7108407s to provisionDockerMachine
	I0318 11:49:08.574979   13008 client.go:171] duration metric: took 1m47.1404004s to LocalClient.Create
	I0318 11:49:08.574979   13008 start.go:167] duration metric: took 1m47.1404004s to libmachine.API.Create "ha-747000"
	I0318 11:49:08.574979   13008 start.go:293] postStartSetup for "ha-747000-m02" (driver="hyperv")
	I0318 11:49:08.574979   13008 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0318 11:49:08.587005   13008 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0318 11:49:08.587005   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-747000-m02 ).state
	I0318 11:49:10.545043   13008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:49:10.556151   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:49:10.556225   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-747000-m02 ).networkadapters[0]).ipaddresses[0]
	I0318 11:49:12.868871   13008 main.go:141] libmachine: [stdout =====>] : 172.30.142.66
	
	I0318 11:49:12.878662   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:49:12.878662   13008 sshutil.go:53] new ssh client: &{IP:172.30.142.66 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-747000-m02\id_rsa Username:docker}
	I0318 11:49:12.985497   13008 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.3984601s)
	I0318 11:49:12.998104   13008 ssh_runner.go:195] Run: cat /etc/os-release
	I0318 11:49:13.003847   13008 info.go:137] Remote host: Buildroot 2023.02.9
	I0318 11:49:13.003918   13008 filesync.go:126] Scanning C:\Users\jenkins.minikube3\minikube-integration\.minikube\addons for local assets ...
	I0318 11:49:13.004344   13008 filesync.go:126] Scanning C:\Users\jenkins.minikube3\minikube-integration\.minikube\files for local assets ...
	I0318 11:49:13.005467   13008 filesync.go:149] local asset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\134242.pem -> 134242.pem in /etc/ssl/certs
	I0318 11:49:13.005467   13008 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\134242.pem -> /etc/ssl/certs/134242.pem
	I0318 11:49:13.015496   13008 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0318 11:49:13.034173   13008 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\134242.pem --> /etc/ssl/certs/134242.pem (1708 bytes)
	I0318 11:49:13.072875   13008 start.go:296] duration metric: took 4.4978625s for postStartSetup
	I0318 11:49:13.075108   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-747000-m02 ).state
	I0318 11:49:14.963268   13008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:49:14.963268   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:49:14.973813   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-747000-m02 ).networkadapters[0]).ipaddresses[0]
	I0318 11:49:17.304030   13008 main.go:141] libmachine: [stdout =====>] : 172.30.142.66
	
	I0318 11:49:17.304030   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:49:17.304030   13008 profile.go:142] Saving config to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-747000\config.json ...
	I0318 11:49:17.306870   13008 start.go:128] duration metric: took 1m55.8737728s to createHost
	I0318 11:49:17.306939   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-747000-m02 ).state
	I0318 11:49:19.237158   13008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:49:19.248098   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:49:19.248098   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-747000-m02 ).networkadapters[0]).ipaddresses[0]
	I0318 11:49:21.621731   13008 main.go:141] libmachine: [stdout =====>] : 172.30.142.66
	
	I0318 11:49:21.621731   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:49:21.627521   13008 main.go:141] libmachine: Using SSH client type: native
	I0318 11:49:21.628029   13008 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa89f80] 0xa8cb60 <nil>  [] 0s} 172.30.142.66 22 <nil> <nil>}
	I0318 11:49:21.628078   13008 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0318 11:49:21.758303   13008 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710762561.749231924
	
	I0318 11:49:21.758303   13008 fix.go:216] guest clock: 1710762561.749231924
	I0318 11:49:21.758303   13008 fix.go:229] Guest: 2024-03-18 11:49:21.749231924 +0000 UTC Remote: 2024-03-18 11:49:17.306939 +0000 UTC m=+310.428658701 (delta=4.442292924s)
	I0318 11:49:21.758303   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-747000-m02 ).state
	I0318 11:49:23.680216   13008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:49:23.680216   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:49:23.680216   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-747000-m02 ).networkadapters[0]).ipaddresses[0]
	I0318 11:49:26.054466   13008 main.go:141] libmachine: [stdout =====>] : 172.30.142.66
	
	I0318 11:49:26.054466   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:49:26.059768   13008 main.go:141] libmachine: Using SSH client type: native
	I0318 11:49:26.059768   13008 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa89f80] 0xa8cb60 <nil>  [] 0s} 172.30.142.66 22 <nil> <nil>}
	I0318 11:49:26.059768   13008 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1710762561
	I0318 11:49:26.199954   13008 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Mar 18 11:49:21 UTC 2024
	
	I0318 11:49:26.199954   13008 fix.go:236] clock set: Mon Mar 18 11:49:21 UTC 2024
	 (err=<nil>)
	I0318 11:49:26.199954   13008 start.go:83] releasing machines lock for "ha-747000-m02", held for 2m4.7669278s
	I0318 11:49:26.200494   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-747000-m02 ).state
	I0318 11:49:28.185140   13008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:49:28.195956   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:49:28.195956   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-747000-m02 ).networkadapters[0]).ipaddresses[0]
	I0318 11:49:30.534919   13008 main.go:141] libmachine: [stdout =====>] : 172.30.142.66
	
	I0318 11:49:30.534919   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:49:30.539671   13008 out.go:177] * Found network options:
	I0318 11:49:30.542325   13008 out.go:177]   - NO_PROXY=172.30.135.65
	W0318 11:49:30.545859   13008 proxy.go:119] fail to check proxy env: Error ip not in block
	I0318 11:49:30.548115   13008 out.go:177]   - NO_PROXY=172.30.135.65
	W0318 11:49:30.550771   13008 proxy.go:119] fail to check proxy env: Error ip not in block
	W0318 11:49:30.552346   13008 proxy.go:119] fail to check proxy env: Error ip not in block
	I0318 11:49:30.554165   13008 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0318 11:49:30.554165   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-747000-m02 ).state
	I0318 11:49:30.558893   13008 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0318 11:49:30.558893   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-747000-m02 ).state
	I0318 11:49:32.640676   13008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:49:32.640774   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:49:32.640877   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-747000-m02 ).networkadapters[0]).ipaddresses[0]
	I0318 11:49:32.655434   13008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:49:32.655434   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:49:32.666019   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-747000-m02 ).networkadapters[0]).ipaddresses[0]
	I0318 11:49:35.152622   13008 main.go:141] libmachine: [stdout =====>] : 172.30.142.66
	
	I0318 11:49:35.152688   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:49:35.152688   13008 sshutil.go:53] new ssh client: &{IP:172.30.142.66 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-747000-m02\id_rsa Username:docker}
	I0318 11:49:35.175099   13008 main.go:141] libmachine: [stdout =====>] : 172.30.142.66
	
	I0318 11:49:35.175970   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:49:35.176131   13008 sshutil.go:53] new ssh client: &{IP:172.30.142.66 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-747000-m02\id_rsa Username:docker}
	I0318 11:49:35.254529   13008 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (4.6956007s)
	W0318 11:49:35.254529   13008 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0318 11:49:35.264883   13008 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0318 11:49:35.340182   13008 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0318 11:49:35.340260   13008 start.go:494] detecting cgroup driver to use...
	I0318 11:49:35.340260   13008 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.7860594s)
	I0318 11:49:35.340317   13008 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0318 11:49:35.383616   13008 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0318 11:49:35.410552   13008 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0318 11:49:35.430523   13008 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0318 11:49:35.442039   13008 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0318 11:49:35.475005   13008 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0318 11:49:35.503043   13008 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0318 11:49:35.530578   13008 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0318 11:49:35.558167   13008 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0318 11:49:35.587243   13008 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
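The run of `sed -i -r` commands above rewrites containerd's `config.toml` in place: pinning the sandbox image, forcing `SystemdCgroup = false` (the "cgroupfs" driver announced at 11:49:35.430523), migrating runtime names to `io.containerd.runc.v2`, and pointing `conf_dir` at `/etc/cni/net.d`. A minimal Python sketch of two of those rewrites, illustrative only and not minikube's code:

```python
import re

# Toy config.toml fragment; the real file is much larger.
toml = 'sandbox_image = "old"\n  SystemdCgroup = true\n  conf_dir = "/tmp/cni"\n'

# Same indentation-preserving substitutions as the sed -r expressions above.
toml = re.sub(r'(?m)^( *)SystemdCgroup = .*$', r'\1SystemdCgroup = false', toml)
toml = re.sub(r'(?m)^( *)conf_dir = .*$', r'\1conf_dir = "/etc/cni/net.d"', toml)
print(toml)
```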
	I0318 11:49:35.614117   13008 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0318 11:49:35.642838   13008 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0318 11:49:35.668763   13008 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 11:49:35.846712   13008 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0318 11:49:35.868283   13008 start.go:494] detecting cgroup driver to use...
	I0318 11:49:35.886281   13008 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0318 11:49:35.922058   13008 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0318 11:49:35.957657   13008 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0318 11:49:35.991279   13008 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0318 11:49:36.020899   13008 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0318 11:49:36.052135   13008 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0318 11:49:36.109553   13008 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0318 11:49:36.129370   13008 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0318 11:49:36.172907   13008 ssh_runner.go:195] Run: which cri-dockerd
	I0318 11:49:36.192183   13008 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0318 11:49:36.209421   13008 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0318 11:49:36.248821   13008 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0318 11:49:36.432899   13008 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0318 11:49:36.594198   13008 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0318 11:49:36.594315   13008 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0318 11:49:36.634355   13008 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 11:49:36.812595   13008 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0318 11:49:39.267567   13008 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.4549536s)
	I0318 11:49:39.278897   13008 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0318 11:49:39.312980   13008 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0318 11:49:39.344395   13008 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0318 11:49:39.525145   13008 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0318 11:49:39.704346   13008 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 11:49:39.880091   13008 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0318 11:49:39.919931   13008 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0318 11:49:39.952216   13008 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 11:49:40.132668   13008 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0318 11:49:40.222207   13008 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0318 11:49:40.234538   13008 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0318 11:49:40.242482   13008 start.go:562] Will wait 60s for crictl version
	I0318 11:49:40.255640   13008 ssh_runner.go:195] Run: which crictl
	I0318 11:49:40.272215   13008 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0318 11:49:40.334895   13008 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  25.0.4
	RuntimeApiVersion:  v1
	I0318 11:49:40.344173   13008 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0318 11:49:40.384018   13008 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0318 11:49:40.414791   13008 out.go:204] * Preparing Kubernetes v1.28.4 on Docker 25.0.4 ...
	I0318 11:49:40.418697   13008 out.go:177]   - env NO_PROXY=172.30.135.65
	I0318 11:49:40.423153   13008 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0318 11:49:40.423981   13008 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0318 11:49:40.423981   13008 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0318 11:49:40.423981   13008 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0318 11:49:40.423981   13008 ip.go:207] Found interface: {Index:9 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:93:df:b5 Flags:up|broadcast|multicast|running}
	I0318 11:49:40.428649   13008 ip.go:210] interface addr: fe80::572b:6b1d:9130:e88b/64
	I0318 11:49:40.430755   13008 ip.go:210] interface addr: 172.30.128.1/20
	I0318 11:49:40.441310   13008 ssh_runner.go:195] Run: grep 172.30.128.1	host.minikube.internal$ /etc/hosts
	I0318 11:49:40.447624   13008 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.30.128.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
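The `{ grep -v ...; echo ...; } > /tmp/h.$$; sudo cp ...` one-liner above is an idempotent `/etc/hosts` update: any stale `host.minikube.internal` line is dropped before the fresh mapping is appended, so repeated runs never accumulate duplicates. The same logic sketched in Python with a hypothetical `update_hosts` helper (not minikube's code):

```python
def update_hosts(lines, ip, name="host.minikube.internal"):
    # Drop any line whose last tab-separated field is the managed hostname,
    # mirroring grep -v $'\thost.minikube.internal$'.
    kept = [l for l in lines if not l.endswith("\t" + name)]
    kept.append(f"{ip}\t{name}")
    return kept

hosts = ["127.0.0.1\tlocalhost", "172.30.0.9\thost.minikube.internal"]
print(update_hosts(hosts, "172.30.128.1"))
```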
	I0318 11:49:40.467778   13008 mustload.go:65] Loading cluster: ha-747000
	I0318 11:49:40.467963   13008 config.go:182] Loaded profile config "ha-747000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 11:49:40.469296   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-747000 ).state
	I0318 11:49:42.394146   13008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:49:42.394146   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:49:42.404346   13008 host.go:66] Checking if "ha-747000" exists ...
	I0318 11:49:42.405092   13008 certs.go:68] Setting up C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-747000 for IP: 172.30.142.66
	I0318 11:49:42.405176   13008 certs.go:194] generating shared ca certs ...
	I0318 11:49:42.405212   13008 certs.go:226] acquiring lock for ca certs: {Name:mk09ff4ada22228900e1815c250154c7d8d76854 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 11:49:42.405897   13008 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.key
	I0318 11:49:42.405962   13008 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.key
	I0318 11:49:42.405962   13008 certs.go:256] generating profile certs ...
	I0318 11:49:42.406648   13008 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-747000\client.key
	I0318 11:49:42.406648   13008 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-747000\apiserver.key.844f64ff
	I0318 11:49:42.407216   13008 crypto.go:68] Generating cert C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-747000\apiserver.crt.844f64ff with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.30.135.65 172.30.142.66 172.30.143.254]
	I0318 11:49:42.591721   13008 crypto.go:156] Writing cert to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-747000\apiserver.crt.844f64ff ...
	I0318 11:49:42.591721   13008 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-747000\apiserver.crt.844f64ff: {Name:mka0c3023f55cd67808646a89c803406b9c3a603 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 11:49:42.593983   13008 crypto.go:164] Writing key to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-747000\apiserver.key.844f64ff ...
	I0318 11:49:42.593983   13008 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-747000\apiserver.key.844f64ff: {Name:mk5c488359d1da51964d69987f1d7686b43c00ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 11:49:42.595244   13008 certs.go:381] copying C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-747000\apiserver.crt.844f64ff -> C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-747000\apiserver.crt
	I0318 11:49:42.601951   13008 certs.go:385] copying C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-747000\apiserver.key.844f64ff -> C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-747000\apiserver.key
	I0318 11:49:42.607583   13008 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-747000\proxy-client.key
	I0318 11:49:42.607583   13008 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0318 11:49:42.607583   13008 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0318 11:49:42.608906   13008 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0318 11:49:42.609073   13008 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0318 11:49:42.609280   13008 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-747000\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0318 11:49:42.609407   13008 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-747000\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0318 11:49:42.609547   13008 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-747000\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0318 11:49:42.609739   13008 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-747000\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0318 11:49:42.609895   13008 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\13424.pem (1338 bytes)
	W0318 11:49:42.609895   13008 certs.go:480] ignoring C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\13424_empty.pem, impossibly tiny 0 bytes
	I0318 11:49:42.610492   13008 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0318 11:49:42.610694   13008 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0318 11:49:42.610890   13008 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0318 11:49:42.611135   13008 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0318 11:49:42.611344   13008 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\134242.pem (1708 bytes)
	I0318 11:49:42.611344   13008 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\134242.pem -> /usr/share/ca-certificates/134242.pem
	I0318 11:49:42.611344   13008 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0318 11:49:42.611344   13008 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\13424.pem -> /usr/share/ca-certificates/13424.pem
	I0318 11:49:42.611344   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-747000 ).state
	I0318 11:49:44.564835   13008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:49:44.564835   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:49:44.564990   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-747000 ).networkadapters[0]).ipaddresses[0]
	I0318 11:49:46.873791   13008 main.go:141] libmachine: [stdout =====>] : 172.30.135.65
	
	I0318 11:49:46.873860   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:49:46.873860   13008 sshutil.go:53] new ssh client: &{IP:172.30.135.65 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-747000\id_rsa Username:docker}
	I0318 11:49:46.967932   13008 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0318 11:49:46.977081   13008 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0318 11:49:47.005206   13008 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0318 11:49:47.012342   13008 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0318 11:49:47.039854   13008 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0318 11:49:47.047533   13008 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0318 11:49:47.077538   13008 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0318 11:49:47.087307   13008 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0318 11:49:47.118861   13008 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0318 11:49:47.124469   13008 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0318 11:49:47.151958   13008 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0318 11:49:47.158010   13008 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0318 11:49:47.175620   13008 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0318 11:49:47.217430   13008 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0318 11:49:47.250496   13008 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0318 11:49:47.293863   13008 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0318 11:49:47.332484   13008 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-747000\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0318 11:49:47.369699   13008 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-747000\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0318 11:49:47.414479   13008 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-747000\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0318 11:49:47.453031   13008 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-747000\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0318 11:49:47.483941   13008 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\134242.pem --> /usr/share/ca-certificates/134242.pem (1708 bytes)
	I0318 11:49:47.530332   13008 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0318 11:49:47.568018   13008 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\13424.pem --> /usr/share/ca-certificates/13424.pem (1338 bytes)
	I0318 11:49:47.608147   13008 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0318 11:49:47.634776   13008 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0318 11:49:47.661028   13008 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0318 11:49:47.686896   13008 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0318 11:49:47.714765   13008 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0318 11:49:47.745317   13008 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0318 11:49:47.771286   13008 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0318 11:49:47.809263   13008 ssh_runner.go:195] Run: openssl version
	I0318 11:49:47.828992   13008 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0318 11:49:47.856917   13008 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0318 11:49:47.862807   13008 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 18 11:07 /usr/share/ca-certificates/minikubeCA.pem
	I0318 11:49:47.873148   13008 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0318 11:49:47.892479   13008 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0318 11:49:47.922923   13008 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13424.pem && ln -fs /usr/share/ca-certificates/13424.pem /etc/ssl/certs/13424.pem"
	I0318 11:49:47.949932   13008 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13424.pem
	I0318 11:49:47.956579   13008 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 18 11:25 /usr/share/ca-certificates/13424.pem
	I0318 11:49:47.967745   13008 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13424.pem
	I0318 11:49:47.989110   13008 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13424.pem /etc/ssl/certs/51391683.0"
	I0318 11:49:48.018909   13008 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/134242.pem && ln -fs /usr/share/ca-certificates/134242.pem /etc/ssl/certs/134242.pem"
	I0318 11:49:48.046501   13008 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/134242.pem
	I0318 11:49:48.053603   13008 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 18 11:25 /usr/share/ca-certificates/134242.pem
	I0318 11:49:48.065518   13008 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/134242.pem
	I0318 11:49:48.082933   13008 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/134242.pem /etc/ssl/certs/3ec20f2e.0"
	I0318 11:49:48.114121   13008 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0318 11:49:48.121611   13008 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0318 11:49:48.121907   13008 kubeadm.go:928] updating node {m02 172.30.142.66 8443 v1.28.4 docker true true} ...
	I0318 11:49:48.122037   13008 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-747000-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.30.142.66
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:ha-747000 Namespace:default APIServerHAVIP:172.30.143.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0318 11:49:48.122148   13008 kube-vip.go:111] generating kube-vip config ...
	I0318 11:49:48.133390   13008 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0318 11:49:48.155414   13008 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0318 11:49:48.155538   13008 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.30.143.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
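The manifest above is the static pod minikube later copies to `/etc/kubernetes/manifests/kube-vip.yaml` (see the scp at 11:49:50.971126): kube-vip announces the control-plane VIP `172.30.143.254` via ARP, leader-elects on the `plndr-cp-lock` lease, and load-balances the apiserver port when `lb_enable` is set. A trivial consistency check over those env values, copied from the manifest rather than from minikube code:

```python
# Env settings as rendered in the kube-vip config above.
env = {
    "vip_arp": "true",
    "port": "8443",
    "address": "172.30.143.254",
    "cp_enable": "true",
    "lb_enable": "true",
    "lb_port": "8443",
}

# For the VIP to front the apiserver, the LB port must match the apiserver port.
assert env["lb_port"] == env["port"]
print("kube-vip config consistent")
```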
	I0318 11:49:48.167324   13008 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0318 11:49:48.180547   13008 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.28.4: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.28.4': No such file or directory
	
	Initiating transfer...
	I0318 11:49:48.193361   13008 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.28.4
	I0318 11:49:48.212708   13008 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl.sha256 -> C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubectl
	I0318 11:49:48.212781   13008 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubeadm.sha256 -> C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubeadm
	I0318 11:49:48.212781   13008 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubelet.sha256 -> C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubelet
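Each `download.go` URL above carries a `checksum=file:...sha256` query parameter, i.e. the downloaded binary is checked against its published SHA-256 digest before being placed in the cache. A sketch of that verification step, using a hypothetical `verify` helper rather than minikube's downloader:

```python
import hashlib

def verify(blob: bytes, expected_hex: str) -> bool:
    # Compare the blob's SHA-256 against the published .sha256 digest.
    return hashlib.sha256(blob).hexdigest() == expected_hex

blob = b"kubelet-binary-bytes"
good = hashlib.sha256(blob).hexdigest()
print(verify(blob, good))
```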
	I0318 11:49:49.324334   13008 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubectl -> /var/lib/minikube/binaries/v1.28.4/kubectl
	I0318 11:49:49.330426   13008 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubectl
	I0318 11:49:49.340889   13008 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubectl': No such file or directory
	I0318 11:49:49.344319   13008 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubectl --> /var/lib/minikube/binaries/v1.28.4/kubectl (49885184 bytes)
	I0318 11:49:49.393348   13008 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubeadm -> /var/lib/minikube/binaries/v1.28.4/kubeadm
	I0318 11:49:49.403652   13008 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubeadm
	I0318 11:49:49.452787   13008 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubeadm': No such file or directory
	I0318 11:49:49.452903   13008 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubeadm --> /var/lib/minikube/binaries/v1.28.4/kubeadm (49102848 bytes)
	I0318 11:49:50.014334   13008 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 11:49:50.086806   13008 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubelet -> /var/lib/minikube/binaries/v1.28.4/kubelet
	I0318 11:49:50.107599   13008 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubelet
	I0318 11:49:50.118607   13008 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubelet': No such file or directory
	I0318 11:49:50.118607   13008 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubelet --> /var/lib/minikube/binaries/v1.28.4/kubelet (110850048 bytes)
	I0318 11:49:50.895984   13008 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0318 11:49:50.911163   13008 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0318 11:49:50.944031   13008 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0318 11:49:50.971126   13008 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0318 11:49:51.013861   13008 ssh_runner.go:195] Run: grep 172.30.143.254	control-plane.minikube.internal$ /etc/hosts
	I0318 11:49:51.021766   13008 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.30.143.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 11:49:51.051001   13008 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 11:49:51.226569   13008 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 11:49:51.254937   13008 host.go:66] Checking if "ha-747000" exists ...
	I0318 11:49:51.255822   13008 start.go:316] joinCluster: &{Name:ha-747000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-747000 Namespace:default APIServerHAVIP:172.30.143.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.30.135.65 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.30.142.66 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 11:49:51.256096   13008 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0318 11:49:51.256193   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-747000 ).state
	I0318 11:49:53.202549   13008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:49:53.202549   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:49:53.202721   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-747000 ).networkadapters[0]).ipaddresses[0]
	I0318 11:49:55.537453   13008 main.go:141] libmachine: [stdout =====>] : 172.30.135.65
	
	I0318 11:49:55.537453   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:49:55.549046   13008 sshutil.go:53] new ssh client: &{IP:172.30.135.65 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-747000\id_rsa Username:docker}
	I0318 11:49:55.723791   13008 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0": (4.4674449s)
	I0318 11:49:55.723791   13008 start.go:342] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:172.30.142.66 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 11:49:55.723791   13008 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token rv7b18.lo2thrhz9e2j77m6 --discovery-token-ca-cert-hash sha256:0e85bffcabbe1ecc87b47edfa4897ded335ea6d7a4ea5ae94c3523fb207c8760 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-747000-m02 --control-plane --apiserver-advertise-address=172.30.142.66 --apiserver-bind-port=8443"
	I0318 11:50:52.332075   13008 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token rv7b18.lo2thrhz9e2j77m6 --discovery-token-ca-cert-hash sha256:0e85bffcabbe1ecc87b47edfa4897ded335ea6d7a4ea5ae94c3523fb207c8760 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-747000-m02 --control-plane --apiserver-advertise-address=172.30.142.66 --apiserver-bind-port=8443": (56.6078596s)
	I0318 11:50:52.332075   13008 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0318 11:50:52.913432   13008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-747000-m02 minikube.k8s.io/updated_at=2024_03_18T11_50_52_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=16bdcbec856cf730004e5bed78d1b7625f13388a minikube.k8s.io/name=ha-747000 minikube.k8s.io/primary=false
	I0318 11:50:53.099383   13008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-747000-m02 node-role.kubernetes.io/control-plane:NoSchedule-
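The sequence just completed is the standard control-plane join flow: mint a join token on the primary, run `kubeadm join --control-plane` on the new node, then label it and remove the NoSchedule taint. A dry-run sketch that only assembles the commands (flags copied from the log; the token and CA hash are replaced with placeholders since they are per-cluster secrets):

```shell
# Dry-run of the join sequence; on a real primary/secondary node the
# echoes would be replaced by actually executing the commands via sudo.
TOKEN_CMD='kubeadm token create --print-join-command --ttl=0'
JOIN='kubeadm join control-plane.minikube.internal:8443
  --token <token> --discovery-token-ca-cert-hash sha256:<hash>
  --control-plane --apiserver-advertise-address=172.30.142.66
  --apiserver-bind-port=8443 --cri-socket unix:///var/run/cri-dockerd.sock
  --ignore-preflight-errors=all'
echo "on primary:   $TOKEN_CMD"
echo "on new node:  $JOIN"
echo "then:         kubectl taint nodes ha-747000-m02 node-role.kubernetes.io/control-plane:NoSchedule-"
```

The trailing `-` on the taint is kubectl's remove syntax, which is how the joined control-plane node is made schedulable as a worker too.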
	I0318 11:50:53.254858   13008 start.go:318] duration metric: took 1m1.9985711s to joinCluster
	I0318 11:50:53.254858   13008 start.go:234] Will wait 6m0s for node &{Name:m02 IP:172.30.142.66 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 11:50:53.259065   13008 out.go:177] * Verifying Kubernetes components...
	I0318 11:50:53.256355   13008 config.go:182] Loaded profile config "ha-747000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 11:50:53.273143   13008 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 11:50:53.632930   13008 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 11:50:53.681348   13008 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	I0318 11:50:53.681348   13008 kapi.go:59] client config for ha-747000: &rest.Config{Host:"https://172.30.143.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\ha-747000\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\ha-747000\\client.key", CAFile:"C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1e8b2e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0318 11:50:53.681348   13008 kubeadm.go:477] Overriding stale ClientConfig host https://172.30.143.254:8443 with https://172.30.135.65:8443
	I0318 11:50:53.683006   13008 node_ready.go:35] waiting up to 6m0s for node "ha-747000-m02" to be "Ready" ...
	I0318 11:50:53.683006   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/nodes/ha-747000-m02
	I0318 11:50:53.683006   13008 round_trippers.go:469] Request Headers:
	I0318 11:50:53.683006   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:50:53.683006   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:50:53.694802   13008 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0318 11:50:54.198151   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/nodes/ha-747000-m02
	I0318 11:50:54.198151   13008 round_trippers.go:469] Request Headers:
	I0318 11:50:54.198151   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:50:54.198151   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:50:54.205120   13008 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0318 11:50:54.686086   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/nodes/ha-747000-m02
	I0318 11:50:54.686086   13008 round_trippers.go:469] Request Headers:
	I0318 11:50:54.686086   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:50:54.686086   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:50:54.691369   13008 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 11:50:55.196398   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/nodes/ha-747000-m02
	I0318 11:50:55.196485   13008 round_trippers.go:469] Request Headers:
	I0318 11:50:55.196485   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:50:55.196558   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:50:55.197317   13008 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0318 11:50:55.713007   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/nodes/ha-747000-m02
	I0318 11:50:55.713007   13008 round_trippers.go:469] Request Headers:
	I0318 11:50:55.713007   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:50:55.713007   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:50:55.714601   13008 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0318 11:50:55.729171   13008 node_ready.go:53] node "ha-747000-m02" has status "Ready":"False"
	I0318 11:50:56.192332   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/nodes/ha-747000-m02
	I0318 11:50:56.192332   13008 round_trippers.go:469] Request Headers:
	I0318 11:50:56.192332   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:50:56.192332   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:50:56.192899   13008 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0318 11:50:56.700495   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/nodes/ha-747000-m02
	I0318 11:50:56.700549   13008 round_trippers.go:469] Request Headers:
	I0318 11:50:56.700549   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:50:56.700549   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:50:56.703766   13008 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 11:50:57.185702   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/nodes/ha-747000-m02
	I0318 11:50:57.185702   13008 round_trippers.go:469] Request Headers:
	I0318 11:50:57.185702   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:50:57.185702   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:50:57.186285   13008 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0318 11:50:57.701100   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/nodes/ha-747000-m02
	I0318 11:50:57.701100   13008 round_trippers.go:469] Request Headers:
	I0318 11:50:57.701100   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:50:57.701100   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:50:57.701668   13008 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0318 11:50:58.195940   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/nodes/ha-747000-m02
	I0318 11:50:58.195940   13008 round_trippers.go:469] Request Headers:
	I0318 11:50:58.195940   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:50:58.195940   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:50:58.198123   13008 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 11:50:58.201194   13008 node_ready.go:53] node "ha-747000-m02" has status "Ready":"False"
	I0318 11:50:58.695938   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/nodes/ha-747000-m02
	I0318 11:50:58.695938   13008 round_trippers.go:469] Request Headers:
	I0318 11:50:58.695938   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:50:58.695938   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:50:58.697529   13008 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0318 11:50:59.209025   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/nodes/ha-747000-m02
	I0318 11:50:59.209025   13008 round_trippers.go:469] Request Headers:
	I0318 11:50:59.209025   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:50:59.209025   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:50:59.211263   13008 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 11:50:59.685270   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/nodes/ha-747000-m02
	I0318 11:50:59.685270   13008 round_trippers.go:469] Request Headers:
	I0318 11:50:59.685270   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:50:59.685270   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:50:59.711426   13008 round_trippers.go:574] Response Status: 200 OK in 26 milliseconds
	I0318 11:51:00.194976   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/nodes/ha-747000-m02
	I0318 11:51:00.195089   13008 round_trippers.go:469] Request Headers:
	I0318 11:51:00.195089   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:51:00.195089   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:51:00.200028   13008 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 11:51:00.699861   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/nodes/ha-747000-m02
	I0318 11:51:00.699861   13008 round_trippers.go:469] Request Headers:
	I0318 11:51:00.699942   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:51:00.699942   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:51:00.708296   13008 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0318 11:51:00.708956   13008 node_ready.go:53] node "ha-747000-m02" has status "Ready":"False"
	I0318 11:51:01.191823   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/nodes/ha-747000-m02
	I0318 11:51:01.191823   13008 round_trippers.go:469] Request Headers:
	I0318 11:51:01.191823   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:51:01.191823   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:51:01.196736   13008 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 11:51:01.691082   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/nodes/ha-747000-m02
	I0318 11:51:01.691138   13008 round_trippers.go:469] Request Headers:
	I0318 11:51:01.691138   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:51:01.691138   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:51:01.694486   13008 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 11:51:02.195380   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/nodes/ha-747000-m02
	I0318 11:51:02.195380   13008 round_trippers.go:469] Request Headers:
	I0318 11:51:02.195380   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:51:02.195380   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:51:02.196588   13008 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0318 11:51:02.690325   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/nodes/ha-747000-m02
	I0318 11:51:02.690494   13008 round_trippers.go:469] Request Headers:
	I0318 11:51:02.690494   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:51:02.690494   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:51:02.697627   13008 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0318 11:51:03.195227   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/nodes/ha-747000-m02
	I0318 11:51:03.195227   13008 round_trippers.go:469] Request Headers:
	I0318 11:51:03.195227   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:51:03.195227   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:51:03.202502   13008 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0318 11:51:03.202856   13008 node_ready.go:53] node "ha-747000-m02" has status "Ready":"False"
	I0318 11:51:03.692931   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/nodes/ha-747000-m02
	I0318 11:51:03.692931   13008 round_trippers.go:469] Request Headers:
	I0318 11:51:03.693021   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:51:03.693021   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:51:03.700007   13008 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0318 11:51:04.185529   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/nodes/ha-747000-m02
	I0318 11:51:04.185529   13008 round_trippers.go:469] Request Headers:
	I0318 11:51:04.185529   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:51:04.185529   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:51:04.186229   13008 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0318 11:51:04.685057   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/nodes/ha-747000-m02
	I0318 11:51:04.685089   13008 round_trippers.go:469] Request Headers:
	I0318 11:51:04.685089   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:51:04.685089   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:51:04.687520   13008 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 11:51:05.195824   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/nodes/ha-747000-m02
	I0318 11:51:05.196232   13008 round_trippers.go:469] Request Headers:
	I0318 11:51:05.196232   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:51:05.196232   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:51:05.196774   13008 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0318 11:51:05.696209   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/nodes/ha-747000-m02
	I0318 11:51:05.696209   13008 round_trippers.go:469] Request Headers:
	I0318 11:51:05.696209   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:51:05.696209   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:51:05.696745   13008 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0318 11:51:05.702953   13008 node_ready.go:49] node "ha-747000-m02" has status "Ready":"True"
	I0318 11:51:05.703060   13008 node_ready.go:38] duration metric: took 12.0199638s for node "ha-747000-m02" to be "Ready" ...
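The block of repeated GETs above is a simple fixed-interval readiness poll: fetch the node roughly every 500 ms until its Ready condition flips to True (about 12 s here). A minimal sketch of the loop shape, with the API call stubbed so it runs without a cluster; on a real cluster the stub would be something like `kubectl get node ha-747000-m02 -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'`:

```shell
# Fixed-interval Ready-polling loop; check_ready stands in for the GET.
tries=0
check_ready() { [ "$tries" -ge 3 ]; }   # stub: reports Ready on the 4th poll

until check_ready; do
  tries=$((tries + 1))
  sleep 0.01   # the log polls roughly every 500 ms; shortened here
done
echo "node Ready after $tries polls"
```

minikube additionally caps the whole loop with a 6m0s deadline, which is the "waiting up to 6m0s" line above.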
	I0318 11:51:05.703060   13008 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 11:51:05.703250   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/namespaces/kube-system/pods
	I0318 11:51:05.703250   13008 round_trippers.go:469] Request Headers:
	I0318 11:51:05.703250   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:51:05.703250   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:51:05.714566   13008 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0318 11:51:05.725230   13008 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-dhl7r" in "kube-system" namespace to be "Ready" ...
	I0318 11:51:05.725230   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-dhl7r
	I0318 11:51:05.725230   13008 round_trippers.go:469] Request Headers:
	I0318 11:51:05.725230   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:51:05.725230   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:51:05.729700   13008 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 11:51:05.730602   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/nodes/ha-747000
	I0318 11:51:05.730602   13008 round_trippers.go:469] Request Headers:
	I0318 11:51:05.730602   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:51:05.730602   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:51:05.735962   13008 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 11:51:05.736633   13008 pod_ready.go:92] pod "coredns-5dd5756b68-dhl7r" in "kube-system" namespace has status "Ready":"True"
	I0318 11:51:05.736633   13008 pod_ready.go:81] duration metric: took 11.4025ms for pod "coredns-5dd5756b68-dhl7r" in "kube-system" namespace to be "Ready" ...
	I0318 11:51:05.736633   13008 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-gm84s" in "kube-system" namespace to be "Ready" ...
	I0318 11:51:05.736633   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-gm84s
	I0318 11:51:05.736633   13008 round_trippers.go:469] Request Headers:
	I0318 11:51:05.736633   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:51:05.736633   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:51:05.741664   13008 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 11:51:05.742312   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/nodes/ha-747000
	I0318 11:51:05.742312   13008 round_trippers.go:469] Request Headers:
	I0318 11:51:05.742312   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:51:05.742312   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:51:05.744529   13008 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 11:51:05.746990   13008 pod_ready.go:92] pod "coredns-5dd5756b68-gm84s" in "kube-system" namespace has status "Ready":"True"
	I0318 11:51:05.746990   13008 pod_ready.go:81] duration metric: took 10.3577ms for pod "coredns-5dd5756b68-gm84s" in "kube-system" namespace to be "Ready" ...
	I0318 11:51:05.746990   13008 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-747000" in "kube-system" namespace to be "Ready" ...
	I0318 11:51:05.746990   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/namespaces/kube-system/pods/etcd-ha-747000
	I0318 11:51:05.746990   13008 round_trippers.go:469] Request Headers:
	I0318 11:51:05.746990   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:51:05.746990   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:51:05.748262   13008 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0318 11:51:05.752773   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/nodes/ha-747000
	I0318 11:51:05.752773   13008 round_trippers.go:469] Request Headers:
	I0318 11:51:05.752773   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:51:05.752773   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:51:05.753049   13008 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0318 11:51:05.758101   13008 pod_ready.go:92] pod "etcd-ha-747000" in "kube-system" namespace has status "Ready":"True"
	I0318 11:51:05.758101   13008 pod_ready.go:81] duration metric: took 11.1105ms for pod "etcd-ha-747000" in "kube-system" namespace to be "Ready" ...
	I0318 11:51:05.758101   13008 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-747000-m02" in "kube-system" namespace to be "Ready" ...
	I0318 11:51:05.758246   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/namespaces/kube-system/pods/etcd-ha-747000-m02
	I0318 11:51:05.758246   13008 round_trippers.go:469] Request Headers:
	I0318 11:51:05.758300   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:51:05.758300   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:51:05.762686   13008 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 11:51:05.763353   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/nodes/ha-747000-m02
	I0318 11:51:05.763379   13008 round_trippers.go:469] Request Headers:
	I0318 11:51:05.763379   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:51:05.763379   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:51:05.763943   13008 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0318 11:51:05.766985   13008 pod_ready.go:92] pod "etcd-ha-747000-m02" in "kube-system" namespace has status "Ready":"True"
	I0318 11:51:05.766985   13008 pod_ready.go:81] duration metric: took 8.8842ms for pod "etcd-ha-747000-m02" in "kube-system" namespace to be "Ready" ...
	I0318 11:51:05.766985   13008 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-747000" in "kube-system" namespace to be "Ready" ...
	I0318 11:51:05.903616   13008 request.go:629] Waited for 135.739ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.135.65:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-747000
	I0318 11:51:05.903728   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-747000
	I0318 11:51:05.903728   13008 round_trippers.go:469] Request Headers:
	I0318 11:51:05.903728   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:51:05.903728   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:51:05.910468   13008 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0318 11:51:06.102371   13008 request.go:629] Waited for 190.9299ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.135.65:8443/api/v1/nodes/ha-747000
	I0318 11:51:06.102371   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/nodes/ha-747000
	I0318 11:51:06.102371   13008 round_trippers.go:469] Request Headers:
	I0318 11:51:06.102371   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:51:06.102371   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:51:06.107240   13008 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 11:51:06.107569   13008 pod_ready.go:92] pod "kube-apiserver-ha-747000" in "kube-system" namespace has status "Ready":"True"
	I0318 11:51:06.108092   13008 pod_ready.go:81] duration metric: took 341.1041ms for pod "kube-apiserver-ha-747000" in "kube-system" namespace to be "Ready" ...
	I0318 11:51:06.108092   13008 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-747000-m02" in "kube-system" namespace to be "Ready" ...
	I0318 11:51:06.310953   13008 request.go:629] Waited for 202.3999ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.135.65:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-747000-m02
	I0318 11:51:06.311182   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-747000-m02
	I0318 11:51:06.311255   13008 round_trippers.go:469] Request Headers:
	I0318 11:51:06.311255   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:51:06.311255   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:51:06.312021   13008 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0318 11:51:06.509881   13008 request.go:629] Waited for 193.2234ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.135.65:8443/api/v1/nodes/ha-747000-m02
	I0318 11:51:06.510119   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/nodes/ha-747000-m02
	I0318 11:51:06.510148   13008 round_trippers.go:469] Request Headers:
	I0318 11:51:06.510148   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:51:06.510148   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:51:06.512520   13008 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 11:51:06.516142   13008 pod_ready.go:92] pod "kube-apiserver-ha-747000-m02" in "kube-system" namespace has status "Ready":"True"
	I0318 11:51:06.516240   13008 pod_ready.go:81] duration metric: took 408.145ms for pod "kube-apiserver-ha-747000-m02" in "kube-system" namespace to be "Ready" ...
	I0318 11:51:06.516240   13008 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-747000" in "kube-system" namespace to be "Ready" ...
	I0318 11:51:06.700909   13008 request.go:629] Waited for 184.332ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.135.65:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-747000
	I0318 11:51:06.701018   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-747000
	I0318 11:51:06.701018   13008 round_trippers.go:469] Request Headers:
	I0318 11:51:06.701018   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:51:06.701018   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:51:06.701970   13008 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0318 11:51:06.907496   13008 request.go:629] Waited for 200.1456ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.135.65:8443/api/v1/nodes/ha-747000
	I0318 11:51:06.907496   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/nodes/ha-747000
	I0318 11:51:06.907496   13008 round_trippers.go:469] Request Headers:
	I0318 11:51:06.907496   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:51:06.907496   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:51:06.908276   13008 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0318 11:51:06.913728   13008 pod_ready.go:92] pod "kube-controller-manager-ha-747000" in "kube-system" namespace has status "Ready":"True"
	I0318 11:51:06.913728   13008 pod_ready.go:81] duration metric: took 397.4852ms for pod "kube-controller-manager-ha-747000" in "kube-system" namespace to be "Ready" ...
	I0318 11:51:06.913728   13008 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-747000-m02" in "kube-system" namespace to be "Ready" ...
	I0318 11:51:07.106352   13008 request.go:629] Waited for 191.8377ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.135.65:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-747000-m02
	I0318 11:51:07.106504   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-747000-m02
	I0318 11:51:07.106504   13008 round_trippers.go:469] Request Headers:
	I0318 11:51:07.106504   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:51:07.106504   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:51:07.107219   13008 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0318 11:51:07.302972   13008 request.go:629] Waited for 190.3631ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.135.65:8443/api/v1/nodes/ha-747000-m02
	I0318 11:51:07.303046   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/nodes/ha-747000-m02
	I0318 11:51:07.303046   13008 round_trippers.go:469] Request Headers:
	I0318 11:51:07.303046   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:51:07.303111   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:51:07.303835   13008 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0318 11:51:07.308905   13008 pod_ready.go:92] pod "kube-controller-manager-ha-747000-m02" in "kube-system" namespace has status "Ready":"True"
	I0318 11:51:07.308970   13008 pod_ready.go:81] duration metric: took 395.2384ms for pod "kube-controller-manager-ha-747000-m02" in "kube-system" namespace to be "Ready" ...
	I0318 11:51:07.308970   13008 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-lp986" in "kube-system" namespace to be "Ready" ...
	I0318 11:51:07.503281   13008 request.go:629] Waited for 194.088ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.135.65:8443/api/v1/namespaces/kube-system/pods/kube-proxy-lp986
	I0318 11:51:07.503374   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/namespaces/kube-system/pods/kube-proxy-lp986
	I0318 11:51:07.503483   13008 round_trippers.go:469] Request Headers:
	I0318 11:51:07.503541   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:51:07.503541   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:51:07.503734   13008 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0318 11:51:07.698195   13008 request.go:629] Waited for 188.1492ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.135.65:8443/api/v1/nodes/ha-747000
	I0318 11:51:07.698437   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/nodes/ha-747000
	I0318 11:51:07.698539   13008 round_trippers.go:469] Request Headers:
	I0318 11:51:07.698539   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:51:07.698565   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:51:07.699192   13008 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0318 11:51:07.705153   13008 pod_ready.go:92] pod "kube-proxy-lp986" in "kube-system" namespace has status "Ready":"True"
	I0318 11:51:07.705237   13008 pod_ready.go:81] duration metric: took 396.1805ms for pod "kube-proxy-lp986" in "kube-system" namespace to be "Ready" ...
	I0318 11:51:07.705237   13008 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-zzg5q" in "kube-system" namespace to be "Ready" ...
	I0318 11:51:07.906523   13008 request.go:629] Waited for 200.9409ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.135.65:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zzg5q
	I0318 11:51:07.906712   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zzg5q
	I0318 11:51:07.906712   13008 round_trippers.go:469] Request Headers:
	I0318 11:51:07.906712   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:51:07.906712   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:51:07.906980   13008 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0318 11:51:08.100532   13008 request.go:629] Waited for 187.745ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.135.65:8443/api/v1/nodes/ha-747000-m02
	I0318 11:51:08.100884   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/nodes/ha-747000-m02
	I0318 11:51:08.100938   13008 round_trippers.go:469] Request Headers:
	I0318 11:51:08.100978   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:51:08.100978   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:51:08.101667   13008 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0318 11:51:08.106641   13008 pod_ready.go:92] pod "kube-proxy-zzg5q" in "kube-system" namespace has status "Ready":"True"
	I0318 11:51:08.107270   13008 pod_ready.go:81] duration metric: took 402.0303ms for pod "kube-proxy-zzg5q" in "kube-system" namespace to be "Ready" ...
	I0318 11:51:08.107270   13008 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-747000" in "kube-system" namespace to be "Ready" ...
	I0318 11:51:08.300333   13008 request.go:629] Waited for 192.8372ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.135.65:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-747000
	I0318 11:51:08.300575   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-747000
	I0318 11:51:08.300575   13008 round_trippers.go:469] Request Headers:
	I0318 11:51:08.300701   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:51:08.300701   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:51:08.300963   13008 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0318 11:51:08.508266   13008 request.go:629] Waited for 201.778ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.135.65:8443/api/v1/nodes/ha-747000
	I0318 11:51:08.508357   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/nodes/ha-747000
	I0318 11:51:08.508357   13008 round_trippers.go:469] Request Headers:
	I0318 11:51:08.508357   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:51:08.508357   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:51:08.508579   13008 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0318 11:51:08.513170   13008 pod_ready.go:92] pod "kube-scheduler-ha-747000" in "kube-system" namespace has status "Ready":"True"
	I0318 11:51:08.513170   13008 pod_ready.go:81] duration metric: took 405.8968ms for pod "kube-scheduler-ha-747000" in "kube-system" namespace to be "Ready" ...
	I0318 11:51:08.513170   13008 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-747000-m02" in "kube-system" namespace to be "Ready" ...
	I0318 11:51:08.704091   13008 request.go:629] Waited for 190.7091ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.135.65:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-747000-m02
	I0318 11:51:08.704363   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-747000-m02
	I0318 11:51:08.704460   13008 round_trippers.go:469] Request Headers:
	I0318 11:51:08.704460   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:51:08.704524   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:51:08.709791   13008 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0318 11:51:08.910823   13008 request.go:629] Waited for 200.4155ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.135.65:8443/api/v1/nodes/ha-747000-m02
	I0318 11:51:08.910950   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/nodes/ha-747000-m02
	I0318 11:51:08.911151   13008 round_trippers.go:469] Request Headers:
	I0318 11:51:08.911151   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:51:08.911203   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:51:08.911527   13008 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0318 11:51:08.917011   13008 pod_ready.go:92] pod "kube-scheduler-ha-747000-m02" in "kube-system" namespace has status "Ready":"True"
	I0318 11:51:08.917011   13008 pod_ready.go:81] duration metric: took 403.8382ms for pod "kube-scheduler-ha-747000-m02" in "kube-system" namespace to be "Ready" ...
	I0318 11:51:08.917011   13008 pod_ready.go:38] duration metric: took 3.2139273s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 11:51:08.917011   13008 api_server.go:52] waiting for apiserver process to appear ...
	I0318 11:51:08.932315   13008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 11:51:08.955570   13008 api_server.go:72] duration metric: took 15.700594s to wait for apiserver process to appear ...
	I0318 11:51:08.955570   13008 api_server.go:88] waiting for apiserver healthz status ...
	I0318 11:51:08.955570   13008 api_server.go:253] Checking apiserver healthz at https://172.30.135.65:8443/healthz ...
	I0318 11:51:08.963864   13008 api_server.go:279] https://172.30.135.65:8443/healthz returned 200:
	ok
	I0318 11:51:08.963918   13008 round_trippers.go:463] GET https://172.30.135.65:8443/version
	I0318 11:51:08.963918   13008 round_trippers.go:469] Request Headers:
	I0318 11:51:08.963918   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:51:08.963918   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:51:08.964606   13008 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0318 11:51:08.965945   13008 api_server.go:141] control plane version: v1.28.4
	I0318 11:51:08.966098   13008 api_server.go:131] duration metric: took 10.5281ms to wait for apiserver health ...
	I0318 11:51:08.966098   13008 system_pods.go:43] waiting for kube-system pods to appear ...
	I0318 11:51:09.108095   13008 request.go:629] Waited for 141.7019ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.135.65:8443/api/v1/namespaces/kube-system/pods
	I0318 11:51:09.108281   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/namespaces/kube-system/pods
	I0318 11:51:09.108281   13008 round_trippers.go:469] Request Headers:
	I0318 11:51:09.108281   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:51:09.108381   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:51:09.118382   13008 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0318 11:51:09.125194   13008 system_pods.go:59] 17 kube-system pods found
	I0318 11:51:09.125279   13008 system_pods.go:61] "coredns-5dd5756b68-dhl7r" [26fc5ceb-a1e3-46f5-b148-40f5312b8628] Running
	I0318 11:51:09.125279   13008 system_pods.go:61] "coredns-5dd5756b68-gm84s" [227e2616-d98e-4ca1-aa66-7e2ed6096d33] Running
	I0318 11:51:09.125279   13008 system_pods.go:61] "etcd-ha-747000" [ac7df0ab-4112-40ce-9646-08b806025178] Running
	I0318 11:51:09.125279   13008 system_pods.go:61] "etcd-ha-747000-m02" [0fb29c85-0d84-4aa9-8f40-041b128f3b62] Running
	I0318 11:51:09.125279   13008 system_pods.go:61] "kindnet-czdhw" [b8b10f3a-4cc3-43d0-a192-438de10116b1] Running
	I0318 11:51:09.125279   13008 system_pods.go:61] "kindnet-zt7pd" [76ca473c-8611-410d-9f13-52d1bfcec7f6] Running
	I0318 11:51:09.125279   13008 system_pods.go:61] "kube-apiserver-ha-747000" [acfdbb0b-15ce-407b-a8c9-bab5477bb1ae] Running
	I0318 11:51:09.125279   13008 system_pods.go:61] "kube-apiserver-ha-747000-m02" [f3b62c80-17e7-435a-9ffd-a2b727a702cd] Running
	I0318 11:51:09.125380   13008 system_pods.go:61] "kube-controller-manager-ha-747000" [7a40710c-8acc-4457-93ba-a31a4073be21] Running
	I0318 11:51:09.125380   13008 system_pods.go:61] "kube-controller-manager-ha-747000-m02" [e191ed3e-7566-4f60-baa0-957f4d7522cc] Running
	I0318 11:51:09.125380   13008 system_pods.go:61] "kube-proxy-lp986" [8567319d-191a-47b4-a5b6-f615907650d3] Running
	I0318 11:51:09.125380   13008 system_pods.go:61] "kube-proxy-zzg5q" [c9403c4f-57d4-4a1c-b629-b8c3f53db9c9] Running
	I0318 11:51:09.125380   13008 system_pods.go:61] "kube-scheduler-ha-747000" [fbfffede-3da2-45f0-abbb-28a58072068b] Running
	I0318 11:51:09.125380   13008 system_pods.go:61] "kube-scheduler-ha-747000-m02" [cdc9ff95-abdf-4913-9e30-dbc8f2785f60] Running
	I0318 11:51:09.125449   13008 system_pods.go:61] "kube-vip-ha-747000" [3e440bfd-5dc2-4981-a26d-c0ae17c250d5] Running
	I0318 11:51:09.125449   13008 system_pods.go:61] "kube-vip-ha-747000-m02" [992c820a-4b58-4baa-b6e4-f0c19a65ad85] Running
	I0318 11:51:09.125449   13008 system_pods.go:61] "storage-provisioner" [de0ac48e-e3cb-430d-8dc6-6e52d350a38e] Running
	I0318 11:51:09.125449   13008 system_pods.go:74] duration metric: took 159.3495ms to wait for pod list to return data ...
	I0318 11:51:09.125449   13008 default_sa.go:34] waiting for default service account to be created ...
	I0318 11:51:09.318977   13008 request.go:629] Waited for 193.2357ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.135.65:8443/api/v1/namespaces/default/serviceaccounts
	I0318 11:51:09.319056   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/namespaces/default/serviceaccounts
	I0318 11:51:09.319056   13008 round_trippers.go:469] Request Headers:
	I0318 11:51:09.319056   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:51:09.319125   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:51:09.320086   13008 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0318 11:51:09.324309   13008 default_sa.go:45] found service account: "default"
	I0318 11:51:09.324309   13008 default_sa.go:55] duration metric: took 198.8593ms for default service account to be created ...
	I0318 11:51:09.324309   13008 system_pods.go:116] waiting for k8s-apps to be running ...
	I0318 11:51:09.496705   13008 request.go:629] Waited for 172.3163ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.135.65:8443/api/v1/namespaces/kube-system/pods
	I0318 11:51:09.497058   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/namespaces/kube-system/pods
	I0318 11:51:09.497157   13008 round_trippers.go:469] Request Headers:
	I0318 11:51:09.497157   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:51:09.497157   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:51:09.497921   13008 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0318 11:51:09.513250   13008 system_pods.go:86] 17 kube-system pods found
	I0318 11:51:09.513250   13008 system_pods.go:89] "coredns-5dd5756b68-dhl7r" [26fc5ceb-a1e3-46f5-b148-40f5312b8628] Running
	I0318 11:51:09.513250   13008 system_pods.go:89] "coredns-5dd5756b68-gm84s" [227e2616-d98e-4ca1-aa66-7e2ed6096d33] Running
	I0318 11:51:09.513250   13008 system_pods.go:89] "etcd-ha-747000" [ac7df0ab-4112-40ce-9646-08b806025178] Running
	I0318 11:51:09.513250   13008 system_pods.go:89] "etcd-ha-747000-m02" [0fb29c85-0d84-4aa9-8f40-041b128f3b62] Running
	I0318 11:51:09.513250   13008 system_pods.go:89] "kindnet-czdhw" [b8b10f3a-4cc3-43d0-a192-438de10116b1] Running
	I0318 11:51:09.513250   13008 system_pods.go:89] "kindnet-zt7pd" [76ca473c-8611-410d-9f13-52d1bfcec7f6] Running
	I0318 11:51:09.513250   13008 system_pods.go:89] "kube-apiserver-ha-747000" [acfdbb0b-15ce-407b-a8c9-bab5477bb1ae] Running
	I0318 11:51:09.513250   13008 system_pods.go:89] "kube-apiserver-ha-747000-m02" [f3b62c80-17e7-435a-9ffd-a2b727a702cd] Running
	I0318 11:51:09.513250   13008 system_pods.go:89] "kube-controller-manager-ha-747000" [7a40710c-8acc-4457-93ba-a31a4073be21] Running
	I0318 11:51:09.513250   13008 system_pods.go:89] "kube-controller-manager-ha-747000-m02" [e191ed3e-7566-4f60-baa0-957f4d7522cc] Running
	I0318 11:51:09.513250   13008 system_pods.go:89] "kube-proxy-lp986" [8567319d-191a-47b4-a5b6-f615907650d3] Running
	I0318 11:51:09.513250   13008 system_pods.go:89] "kube-proxy-zzg5q" [c9403c4f-57d4-4a1c-b629-b8c3f53db9c9] Running
	I0318 11:51:09.513250   13008 system_pods.go:89] "kube-scheduler-ha-747000" [fbfffede-3da2-45f0-abbb-28a58072068b] Running
	I0318 11:51:09.513250   13008 system_pods.go:89] "kube-scheduler-ha-747000-m02" [cdc9ff95-abdf-4913-9e30-dbc8f2785f60] Running
	I0318 11:51:09.513250   13008 system_pods.go:89] "kube-vip-ha-747000" [3e440bfd-5dc2-4981-a26d-c0ae17c250d5] Running
	I0318 11:51:09.513250   13008 system_pods.go:89] "kube-vip-ha-747000-m02" [992c820a-4b58-4baa-b6e4-f0c19a65ad85] Running
	I0318 11:51:09.513250   13008 system_pods.go:89] "storage-provisioner" [de0ac48e-e3cb-430d-8dc6-6e52d350a38e] Running
	I0318 11:51:09.513250   13008 system_pods.go:126] duration metric: took 188.9394ms to wait for k8s-apps to be running ...
	I0318 11:51:09.513250   13008 system_svc.go:44] waiting for kubelet service to be running ....
	I0318 11:51:09.524281   13008 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 11:51:09.550062   13008 system_svc.go:56] duration metric: took 36.8113ms WaitForService to wait for kubelet
	I0318 11:51:09.550062   13008 kubeadm.go:576] duration metric: took 16.2950816s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 11:51:09.550062   13008 node_conditions.go:102] verifying NodePressure condition ...
	I0318 11:51:09.707434   13008 request.go:629] Waited for 157.3708ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.135.65:8443/api/v1/nodes
	I0318 11:51:09.707807   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/nodes
	I0318 11:51:09.707807   13008 round_trippers.go:469] Request Headers:
	I0318 11:51:09.707807   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:51:09.707807   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:51:09.708522   13008 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0318 11:51:09.714542   13008 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0318 11:51:09.714604   13008 node_conditions.go:123] node cpu capacity is 2
	I0318 11:51:09.714604   13008 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0318 11:51:09.714694   13008 node_conditions.go:123] node cpu capacity is 2
	I0318 11:51:09.714694   13008 node_conditions.go:105] duration metric: took 164.631ms to run NodePressure ...
	I0318 11:51:09.714694   13008 start.go:240] waiting for startup goroutines ...
	I0318 11:51:09.714740   13008 start.go:254] writing updated cluster config ...
	I0318 11:51:09.719533   13008 out.go:177] 
	I0318 11:51:09.729311   13008 config.go:182] Loaded profile config "ha-747000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 11:51:09.729311   13008 profile.go:142] Saving config to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-747000\config.json ...
	I0318 11:51:09.730176   13008 out.go:177] * Starting "ha-747000-m03" control-plane node in "ha-747000" cluster
	I0318 11:51:09.735466   13008 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0318 11:51:09.735466   13008 cache.go:56] Caching tarball of preloaded images
	I0318 11:51:09.737723   13008 preload.go:173] Found C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0318 11:51:09.738000   13008 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0318 11:51:09.738173   13008 profile.go:142] Saving config to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-747000\config.json ...
	I0318 11:51:09.741236   13008 start.go:360] acquireMachinesLock for ha-747000-m03: {Name:mk88ace50ad3bf72786f3a589a5328076247f3a1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 11:51:09.741236   13008 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-747000-m03"
	I0318 11:51:09.741765   13008 start.go:93] Provisioning new machine with config: &{Name:ha-747000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-747000 Namespace:default APIServerHAVIP:172.30.143.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.30.135.65 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.30.142.66 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 11:51:09.741949   13008 start.go:125] createHost starting for "m03" (driver="hyperv")
	I0318 11:51:09.745525   13008 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0318 11:51:09.745525   13008 start.go:159] libmachine.API.Create for "ha-747000" (driver="hyperv")
	I0318 11:51:09.745525   13008 client.go:168] LocalClient.Create starting
	I0318 11:51:09.746208   13008 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem
	I0318 11:51:09.746208   13008 main.go:141] libmachine: Decoding PEM data...
	I0318 11:51:09.746208   13008 main.go:141] libmachine: Parsing certificate...
	I0318 11:51:09.746208   13008 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem
	I0318 11:51:09.747002   13008 main.go:141] libmachine: Decoding PEM data...
	I0318 11:51:09.747002   13008 main.go:141] libmachine: Parsing certificate...
	I0318 11:51:09.747002   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0318 11:51:11.565058   13008 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0318 11:51:11.565406   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:51:11.565514   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0318 11:51:13.265330   13008 main.go:141] libmachine: [stdout =====>] : False
	
	I0318 11:51:13.265583   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:51:13.265719   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0318 11:51:14.728659   13008 main.go:141] libmachine: [stdout =====>] : True
	
	I0318 11:51:14.728768   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:51:14.728768   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0318 11:51:18.216772   13008 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0318 11:51:18.228236   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:51:18.230116   13008 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube3/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.32.1-1710520390-17991-amd64.iso...
	I0318 11:51:18.632818   13008 main.go:141] libmachine: Creating SSH key...
	I0318 11:51:19.295001   13008 main.go:141] libmachine: Creating VM...
	I0318 11:51:19.295001   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0318 11:51:22.065289   13008 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0318 11:51:22.076345   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:51:22.076459   13008 main.go:141] libmachine: Using switch "Default Switch"
	I0318 11:51:22.076520   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0318 11:51:23.797013   13008 main.go:141] libmachine: [stdout =====>] : True
	
	I0318 11:51:23.804574   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:51:23.804574   13008 main.go:141] libmachine: Creating VHD
	I0318 11:51:23.804684   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-747000-m03\fixed.vhd' -SizeBytes 10MB -Fixed
	I0318 11:51:27.377191   13008 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube3
	Path                    : C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-747000-m03\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : F1A1A529-D500-4573-A5EA-EF5B4F8ED67E
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0318 11:51:27.388348   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:51:27.388348   13008 main.go:141] libmachine: Writing magic tar header
	I0318 11:51:27.388442   13008 main.go:141] libmachine: Writing SSH key tar header
	I0318 11:51:27.399364   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-747000-m03\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-747000-m03\disk.vhd' -VHDType Dynamic -DeleteSource
	I0318 11:51:30.454995   13008 main.go:141] libmachine: [stdout =====>] : 
	I0318 11:51:30.466256   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:51:30.466375   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-747000-m03\disk.vhd' -SizeBytes 20000MB
	I0318 11:51:32.946085   13008 main.go:141] libmachine: [stdout =====>] : 
	I0318 11:51:32.946085   13008 main.go:141] libmachine: [stderr =====>] : 
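The sequence above — `New-VHD -Fixed` at 10MB, "Writing magic tar header" / "Writing SSH key tar header", then `Convert-VHD ... -VHDType Dynamic` and `Resize-VHD` — is how the driver smuggles the SSH key into the data disk: a tar archive is written at the very start of a small fixed-size image, and the guest extracts it on first boot. A minimal sketch of the raw-disk side of that trick (the paths and key material are illustrative; a real fixed VHD also carries a 512-byte footer at the end of the file, which this sketch omits):

```shell
# Sketch: place a tar archive at offset 0 of a fixed-size disk image so
# the guest OS can extract it on first boot. Paths/keys are illustrative.
workdir=$(mktemp -d)
mkdir -p "$workdir/.ssh"
echo "ssh-rsa AAAAexamplekey jenkins@minikube3" > "$workdir/.ssh/id_rsa.pub"

# Archive the payload, create a 10 MB image (cf. New-VHD -SizeBytes 10MB),
# then copy the tar into its first blocks without truncating the image.
tar -C "$workdir" -cf "$workdir/payload.tar" .ssh
truncate -s 10M "$workdir/disk.raw"
dd if="$workdir/payload.tar" of="$workdir/disk.raw" conv=notrunc status=none

# The archive is recoverable straight from the raw image: tar stops at
# the end-of-archive zero blocks, ignoring the zero-filled remainder.
tar -tf "$workdir/disk.raw"
```

Converting to a dynamic VHD afterwards keeps the on-disk file small, since everything past the tar payload is still zeros.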
	I0318 11:51:32.956244   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-747000-m03 -Path 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-747000-m03' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0318 11:51:36.339197   13008 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%!)(MISSING) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	ha-747000-m03 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0318 11:51:36.349762   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:51:36.349762   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-747000-m03 -DynamicMemoryEnabled $false
	I0318 11:51:38.443462   13008 main.go:141] libmachine: [stdout =====>] : 
	I0318 11:51:38.453384   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:51:38.453384   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-747000-m03 -Count 2
	I0318 11:51:40.473649   13008 main.go:141] libmachine: [stdout =====>] : 
	I0318 11:51:40.484119   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:51:40.484119   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-747000-m03 -Path 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-747000-m03\boot2docker.iso'
	I0318 11:51:42.944520   13008 main.go:141] libmachine: [stdout =====>] : 
	I0318 11:51:42.956734   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:51:42.956734   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-747000-m03 -Path 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-747000-m03\disk.vhd'
	I0318 11:51:45.530757   13008 main.go:141] libmachine: [stdout =====>] : 
	I0318 11:51:45.540000   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:51:45.540000   13008 main.go:141] libmachine: Starting VM...
	I0318 11:51:45.540000   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-747000-m03
	I0318 11:51:48.459313   13008 main.go:141] libmachine: [stdout =====>] : 
	I0318 11:51:48.459313   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:51:48.459313   13008 main.go:141] libmachine: Waiting for host to start...
	I0318 11:51:48.459420   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-747000-m03 ).state
	I0318 11:51:50.651297   13008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:51:50.651531   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:51:50.651626   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-747000-m03 ).networkadapters[0]).ipaddresses[0]
	I0318 11:51:53.116543   13008 main.go:141] libmachine: [stdout =====>] : 
	I0318 11:51:53.116543   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:51:54.132151   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-747000-m03 ).state
	I0318 11:51:56.209332   13008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:51:56.209332   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:51:56.221722   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-747000-m03 ).networkadapters[0]).ipaddresses[0]
	I0318 11:51:58.676967   13008 main.go:141] libmachine: [stdout =====>] : 
	I0318 11:51:58.676967   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:51:59.677640   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-747000-m03 ).state
	I0318 11:52:01.795433   13008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:52:01.795433   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:52:01.795637   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-747000-m03 ).networkadapters[0]).ipaddresses[0]
	I0318 11:52:04.196376   13008 main.go:141] libmachine: [stdout =====>] : 
	I0318 11:52:04.196376   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:52:05.203896   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-747000-m03 ).state
	I0318 11:52:07.293353   13008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:52:07.293353   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:52:07.293507   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-747000-m03 ).networkadapters[0]).ipaddresses[0]
	I0318 11:52:09.762097   13008 main.go:141] libmachine: [stdout =====>] : 
	I0318 11:52:09.762097   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:52:10.771756   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-747000-m03 ).state
	I0318 11:52:12.944306   13008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:52:12.944554   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:52:12.944554   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-747000-m03 ).networkadapters[0]).ipaddresses[0]
	I0318 11:52:15.372484   13008 main.go:141] libmachine: [stdout =====>] : 172.30.129.111
	
	I0318 11:52:15.372542   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:52:15.372673   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-747000-m03 ).state
	I0318 11:52:17.380617   13008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:52:17.380617   13008 main.go:141] libmachine: [stderr =====>] : 
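"Waiting for host to start..." above is a simple poll loop: query the VM state, ask its first network adapter for its first IP address, and retry until one appears (five rounds here, succeeding at 11:52:15). The same retry shape, with a stub standing in for the `(Hyper-V\Get-VM ...).networkadapters[0].ipaddresses[0]` query:

```shell
# Poll until the (stubbed) VM reports an IP, mirroring the loop above.
# get_vm_ip is a stand-in for the real Hyper-V query; it deliberately
# returns nothing for the first two polls to exercise the retry path.
attempt=0
ip=""
get_vm_ip() {
  attempt=$((attempt + 1))
  if [ "$attempt" -ge 3 ]; then
    ip="172.30.129.111"
  fi
}

while [ -z "$ip" ] && [ "$attempt" -lt 30 ]; do
  get_vm_ip
  if [ -z "$ip" ]; then
    sleep 0.1   # the real loop waits about a second between polls
  fi
done
echo "VM reachable at $ip after $attempt polls"
```

The empty `[stdout =====>]` lines in the log are exactly these polls returning no address yet: Hyper-V only reports an IP once the guest's integration services are up and DHCP has completed.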
	I0318 11:52:17.380617   13008 machine.go:94] provisionDockerMachine start ...
	I0318 11:52:17.390578   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-747000-m03 ).state
	I0318 11:52:19.405757   13008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:52:19.405757   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:52:19.405757   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-747000-m03 ).networkadapters[0]).ipaddresses[0]
	I0318 11:52:21.811228   13008 main.go:141] libmachine: [stdout =====>] : 172.30.129.111
	
	I0318 11:52:21.822287   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:52:21.828154   13008 main.go:141] libmachine: Using SSH client type: native
	I0318 11:52:21.835255   13008 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa89f80] 0xa8cb60 <nil>  [] 0s} 172.30.129.111 22 <nil> <nil>}
	I0318 11:52:21.835255   13008 main.go:141] libmachine: About to run SSH command:
	hostname
	I0318 11:52:21.980718   13008 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0318 11:52:21.980718   13008 buildroot.go:166] provisioning hostname "ha-747000-m03"
	I0318 11:52:21.980718   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-747000-m03 ).state
	I0318 11:52:23.980745   13008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:52:23.991985   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:52:23.991985   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-747000-m03 ).networkadapters[0]).ipaddresses[0]
	I0318 11:52:26.389031   13008 main.go:141] libmachine: [stdout =====>] : 172.30.129.111
	
	I0318 11:52:26.389031   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:52:26.404880   13008 main.go:141] libmachine: Using SSH client type: native
	I0318 11:52:26.405086   13008 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa89f80] 0xa8cb60 <nil>  [] 0s} 172.30.129.111 22 <nil> <nil>}
	I0318 11:52:26.405086   13008 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-747000-m03 && echo "ha-747000-m03" | sudo tee /etc/hostname
	I0318 11:52:26.561884   13008 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-747000-m03
	
	I0318 11:52:26.561884   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-747000-m03 ).state
	I0318 11:52:28.562982   13008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:52:28.562982   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:52:28.563127   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-747000-m03 ).networkadapters[0]).ipaddresses[0]
	I0318 11:52:30.946179   13008 main.go:141] libmachine: [stdout =====>] : 172.30.129.111
	
	I0318 11:52:30.946179   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:52:30.951244   13008 main.go:141] libmachine: Using SSH client type: native
	I0318 11:52:30.952049   13008 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa89f80] 0xa8cb60 <nil>  [] 0s} 172.30.129.111 22 <nil> <nil>}
	I0318 11:52:30.952049   13008 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-747000-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-747000-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-747000-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0318 11:52:31.098440   13008 main.go:141] libmachine: SSH cmd err, output: <nil>: 
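The SSH command above is the usual idempotent /etc/hosts fixup: do nothing if some line already names the host; otherwise rewrite an existing `127.0.1.1` entry in place, or append one. The same logic run against a scratch file rather than the real /etc/hosts (no sudo needed; the file contents are illustrative):

```shell
# Idempotent hosts-entry update, mirroring the remote command above.
hosts=$(mktemp)
printf '127.0.0.1 localhost\n127.0.1.1 old-name\n' > "$hosts"
name="ha-747000-m03"

# Only touch the file if the name is absent.
if ! grep -q "[[:space:]]$name\$" "$hosts"; then
  if grep -q '^127\.0\.1\.1[[:space:]]' "$hosts"; then
    # Replace the existing 127.0.1.1 entry in place.
    sed -i "s/^127\.0\.1\.1[[:space:]].*/127.0.1.1 $name/" "$hosts"
  else
    # No 127.0.1.1 line yet; append one.
    echo "127.0.1.1 $name" >> "$hosts"
  fi
fi
cat "$hosts"
```

Running it a second time is a no-op, which is why the provisioner can safely re-run it on every `provisionDockerMachine` pass.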
	I0318 11:52:31.098440   13008 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube3\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube3\minikube-integration\.minikube}
	I0318 11:52:31.098977   13008 buildroot.go:174] setting up certificates
	I0318 11:52:31.098977   13008 provision.go:84] configureAuth start
	I0318 11:52:31.099050   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-747000-m03 ).state
	I0318 11:52:33.113600   13008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:52:33.113706   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:52:33.113706   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-747000-m03 ).networkadapters[0]).ipaddresses[0]
	I0318 11:52:35.498593   13008 main.go:141] libmachine: [stdout =====>] : 172.30.129.111
	
	I0318 11:52:35.509200   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:52:35.509261   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-747000-m03 ).state
	I0318 11:52:37.486399   13008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:52:37.489591   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:52:37.489648   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-747000-m03 ).networkadapters[0]).ipaddresses[0]
	I0318 11:52:39.915103   13008 main.go:141] libmachine: [stdout =====>] : 172.30.129.111
	
	I0318 11:52:39.926035   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:52:39.926035   13008 provision.go:143] copyHostCerts
	I0318 11:52:39.926134   13008 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube3\minikube-integration\.minikube/ca.pem
	I0318 11:52:39.926134   13008 exec_runner.go:144] found C:\Users\jenkins.minikube3\minikube-integration\.minikube/ca.pem, removing ...
	I0318 11:52:39.926134   13008 exec_runner.go:203] rm: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.pem
	I0318 11:52:39.926930   13008 exec_runner.go:151] cp: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube3\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0318 11:52:39.928096   13008 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube3\minikube-integration\.minikube/cert.pem
	I0318 11:52:39.928311   13008 exec_runner.go:144] found C:\Users\jenkins.minikube3\minikube-integration\.minikube/cert.pem, removing ...
	I0318 11:52:39.928311   13008 exec_runner.go:203] rm: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cert.pem
	I0318 11:52:39.928311   13008 exec_runner.go:151] cp: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube3\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0318 11:52:39.929707   13008 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube3\minikube-integration\.minikube/key.pem
	I0318 11:52:39.929707   13008 exec_runner.go:144] found C:\Users\jenkins.minikube3\minikube-integration\.minikube/key.pem, removing ...
	I0318 11:52:39.929707   13008 exec_runner.go:203] rm: C:\Users\jenkins.minikube3\minikube-integration\.minikube\key.pem
	I0318 11:52:39.930568   13008 exec_runner.go:151] cp: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube3\minikube-integration\.minikube/key.pem (1679 bytes)
	I0318 11:52:39.931808   13008 provision.go:117] generating server cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-747000-m03 san=[127.0.0.1 172.30.129.111 ha-747000-m03 localhost minikube]
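The `generating server cert` line above mints a server certificate, signed by the minikube CA, whose SANs cover every name the node answers to (`san=[127.0.0.1 172.30.129.111 ha-747000-m03 localhost minikube]`). A rough self-signed stand-in showing the same SAN list (the real flow is CA-signed, not self-signed, and `-addext` needs OpenSSL 1.1.1 or later):

```shell
# Self-signed stand-in for the CA-signed server cert; the SAN entries
# are copied from the san=[...] list logged above.
dir=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout "$dir/server-key.pem" -out "$dir/server.pem" \
  -subj "/O=jenkins.ha-747000-m03/CN=minikube" \
  -addext "subjectAltName=DNS:ha-747000-m03,DNS:localhost,DNS:minikube,IP:127.0.0.1,IP:172.30.129.111" \
  2>/dev/null

# Inspect the SANs baked into the certificate.
openssl x509 -in "$dir/server.pem" -noout -ext subjectAltName
```

Without the VM IP in the SAN list, the `--tlsverify` dockerd started later in this log would reject connections to `tcp://172.30.129.111:2376`.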
	I0318 11:52:40.032655   13008 provision.go:177] copyRemoteCerts
	I0318 11:52:40.052272   13008 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0318 11:52:40.052422   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-747000-m03 ).state
	I0318 11:52:42.068198   13008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:52:42.068198   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:52:42.068198   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-747000-m03 ).networkadapters[0]).ipaddresses[0]
	I0318 11:52:44.443769   13008 main.go:141] libmachine: [stdout =====>] : 172.30.129.111
	
	I0318 11:52:44.454482   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:52:44.454482   13008 sshutil.go:53] new ssh client: &{IP:172.30.129.111 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-747000-m03\id_rsa Username:docker}
	I0318 11:52:44.563353   13008 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.5107569s)
	I0318 11:52:44.563565   13008 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0318 11:52:44.563671   13008 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0318 11:52:44.607181   13008 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0318 11:52:44.607181   13008 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0318 11:52:44.650967   13008 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0318 11:52:44.651395   13008 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0318 11:52:44.692104   13008 provision.go:87] duration metric: took 13.5930257s to configureAuth
	I0318 11:52:44.692104   13008 buildroot.go:189] setting minikube options for container-runtime
	I0318 11:52:44.692860   13008 config.go:182] Loaded profile config "ha-747000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 11:52:44.693003   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-747000-m03 ).state
	I0318 11:52:46.668910   13008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:52:46.669045   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:52:46.669045   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-747000-m03 ).networkadapters[0]).ipaddresses[0]
	I0318 11:52:49.057457   13008 main.go:141] libmachine: [stdout =====>] : 172.30.129.111
	
	I0318 11:52:49.068991   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:52:49.075400   13008 main.go:141] libmachine: Using SSH client type: native
	I0318 11:52:49.075604   13008 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa89f80] 0xa8cb60 <nil>  [] 0s} 172.30.129.111 22 <nil> <nil>}
	I0318 11:52:49.075604   13008 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0318 11:52:49.213300   13008 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0318 11:52:49.213300   13008 buildroot.go:70] root file system type: tmpfs
	I0318 11:52:49.213610   13008 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0318 11:52:49.213806   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-747000-m03 ).state
	I0318 11:52:51.196271   13008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:52:51.207107   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:52:51.207163   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-747000-m03 ).networkadapters[0]).ipaddresses[0]
	I0318 11:52:53.623795   13008 main.go:141] libmachine: [stdout =====>] : 172.30.129.111
	
	I0318 11:52:53.627461   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:52:53.633314   13008 main.go:141] libmachine: Using SSH client type: native
	I0318 11:52:53.634079   13008 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa89f80] 0xa8cb60 <nil>  [] 0s} 172.30.129.111 22 <nil> <nil>}
	I0318 11:52:53.634203   13008 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.30.135.65"
	Environment="NO_PROXY=172.30.135.65,172.30.142.66"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0318 11:52:53.795227   13008 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.30.135.65
	Environment=NO_PROXY=172.30.135.65,172.30.142.66
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0318 11:52:53.795227   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-747000-m03 ).state
	I0318 11:52:55.773077   13008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:52:55.773077   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:52:55.783852   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-747000-m03 ).networkadapters[0]).ipaddresses[0]
	I0318 11:52:58.157321   13008 main.go:141] libmachine: [stdout =====>] : 172.30.129.111
	
	I0318 11:52:58.157403   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:52:58.163442   13008 main.go:141] libmachine: Using SSH client type: native
	I0318 11:52:58.163442   13008 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa89f80] 0xa8cb60 <nil>  [] 0s} 172.30.129.111 22 <nil> <nil>}
	I0318 11:52:58.163442   13008 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0318 11:53:00.239286   13008 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
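The `diff ... || { mv ...; daemon-reload; enable; restart; }` command above is a compare-and-swap install: the freshly rendered unit only replaces the live one (and docker is only restarted) when the two differ. The "can't stat" from `diff` here simply means no unit existed yet, so the first install proceeds. The same pattern sketched with plain files and marker variables in place of systemctl:

```shell
# Compare-and-swap install of a rendered config file: only swap it in
# (and "restart") when it differs from what is already installed.
dir=$(mktemp -d)
new="$dir/docker.service.new"
cur="$dir/docker.service"
render_unit() { printf '[Unit]\nDescription=demo docker unit\n' > "$new"; }

# First pass: nothing installed, diff fails, so we install.
render_unit
first=none
diff -u "$cur" "$new" >/dev/null 2>&1 || { mv "$new" "$cur"; first=installed; }

# Second pass: the rendered unit matches the installed one, so no-op.
render_unit
second=none
diff -u "$cur" "$new" >/dev/null 2>&1 || { mv "$new" "$cur"; second=installed; }

echo "first=$first second=$second"
```

This is what keeps repeated provisioning runs from restarting docker (and killing running containers) when the unit hasn't actually changed.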
	
	I0318 11:53:00.239340   13008 machine.go:97] duration metric: took 42.8584029s to provisionDockerMachine
	I0318 11:53:00.239367   13008 client.go:171] duration metric: took 1m50.4930149s to LocalClient.Create
	I0318 11:53:00.239446   13008 start.go:167] duration metric: took 1m50.4930939s to libmachine.API.Create "ha-747000"
	I0318 11:53:00.239616   13008 start.go:293] postStartSetup for "ha-747000-m03" (driver="hyperv")
	I0318 11:53:00.239663   13008 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0318 11:53:00.251434   13008 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0318 11:53:00.251434   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-747000-m03 ).state
	I0318 11:53:02.207240   13008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:53:02.217885   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:53:02.217885   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-747000-m03 ).networkadapters[0]).ipaddresses[0]
	I0318 11:53:04.658769   13008 main.go:141] libmachine: [stdout =====>] : 172.30.129.111
	
	I0318 11:53:04.658962   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:53:04.659127   13008 sshutil.go:53] new ssh client: &{IP:172.30.129.111 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-747000-m03\id_rsa Username:docker}
	I0318 11:53:04.761438   13008 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.5099705s)
	I0318 11:53:04.774826   13008 ssh_runner.go:195] Run: cat /etc/os-release
	I0318 11:53:04.781473   13008 info.go:137] Remote host: Buildroot 2023.02.9
	I0318 11:53:04.781473   13008 filesync.go:126] Scanning C:\Users\jenkins.minikube3\minikube-integration\.minikube\addons for local assets ...
	I0318 11:53:04.782280   13008 filesync.go:126] Scanning C:\Users\jenkins.minikube3\minikube-integration\.minikube\files for local assets ...
	I0318 11:53:04.783736   13008 filesync.go:149] local asset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\134242.pem -> 134242.pem in /etc/ssl/certs
	I0318 11:53:04.783736   13008 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\134242.pem -> /etc/ssl/certs/134242.pem
	I0318 11:53:04.798308   13008 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0318 11:53:04.816394   13008 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\134242.pem --> /etc/ssl/certs/134242.pem (1708 bytes)
	I0318 11:53:04.862044   13008 start.go:296] duration metric: took 4.6223931s for postStartSetup
	I0318 11:53:04.865306   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-747000-m03 ).state
	I0318 11:53:06.907424   13008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:53:06.907477   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:53:06.907477   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-747000-m03 ).networkadapters[0]).ipaddresses[0]
	I0318 11:53:09.298023   13008 main.go:141] libmachine: [stdout =====>] : 172.30.129.111
	
	I0318 11:53:09.298073   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:53:09.298073   13008 profile.go:142] Saving config to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-747000\config.json ...
	I0318 11:53:09.300712   13008 start.go:128] duration metric: took 1m59.5578674s to createHost
	I0318 11:53:09.300712   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-747000-m03 ).state
	I0318 11:53:11.266169   13008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:53:11.266169   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:53:11.276483   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-747000-m03 ).networkadapters[0]).ipaddresses[0]
	I0318 11:53:13.668227   13008 main.go:141] libmachine: [stdout =====>] : 172.30.129.111
	
	I0318 11:53:13.668375   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:53:13.673619   13008 main.go:141] libmachine: Using SSH client type: native
	I0318 11:53:13.674101   13008 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa89f80] 0xa8cb60 <nil>  [] 0s} 172.30.129.111 22 <nil> <nil>}
	I0318 11:53:13.674176   13008 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0318 11:53:13.816856   13008 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710762793.819586720
	
	I0318 11:53:13.816856   13008 fix.go:216] guest clock: 1710762793.819586720
	I0318 11:53:13.816856   13008 fix.go:229] Guest: 2024-03-18 11:53:13.81958672 +0000 UTC Remote: 2024-03-18 11:53:09.3007123 +0000 UTC m=+542.420693601 (delta=4.51887442s)
	I0318 11:53:13.816856   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-747000-m03 ).state
	I0318 11:53:15.840509   13008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:53:15.840589   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:53:15.840660   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-747000-m03 ).networkadapters[0]).ipaddresses[0]
	I0318 11:53:18.240907   13008 main.go:141] libmachine: [stdout =====>] : 172.30.129.111
	
	I0318 11:53:18.240907   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:53:18.245816   13008 main.go:141] libmachine: Using SSH client type: native
	I0318 11:53:18.246741   13008 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa89f80] 0xa8cb60 <nil>  [] 0s} 172.30.129.111 22 <nil> <nil>}
	I0318 11:53:18.246741   13008 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1710762793
	I0318 11:53:18.388282   13008 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Mar 18 11:53:13 UTC 2024
	
	I0318 11:53:18.388282   13008 fix.go:236] clock set: Mon Mar 18 11:53:13 UTC 2024
	 (err=<nil>)
	I0318 11:53:18.388282   13008 start.go:83] releasing machines lock for "ha-747000-m03", held for 2m8.646083s
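The clock-fix sequence above (read the guest clock with `date +%s.%N`, compare against the host, then reset the guest with `sudo date -s @<epoch>`) can be sketched as follows. The epoch values are illustrative stand-ins for the ones in the log, and no real clock is touched:

```shell
#!/bin/sh
# Illustrative epochs: the guest is a few seconds ahead of the host,
# as in the ~4.5s delta reported by fix.go above.
guest_epoch=1710762793   # seconds part of `date +%s.%N` on the guest
host_epoch=1710762789    # host wall clock at the same moment (hypothetical)
delta=$((guest_epoch - host_epoch))
echo "guest clock delta: ${delta}s"
# minikube then pins the guest to the reference time over SSH:
echo "would run on guest: sudo date -s @${host_epoch}"
```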
	I0318 11:53:18.388282   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-747000-m03 ).state
	I0318 11:53:20.387927   13008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:53:20.399958   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:53:20.400016   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-747000-m03 ).networkadapters[0]).ipaddresses[0]
	I0318 11:53:22.837049   13008 main.go:141] libmachine: [stdout =====>] : 172.30.129.111
	
	I0318 11:53:22.837049   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:53:22.840901   13008 out.go:177] * Found network options:
	I0318 11:53:22.843618   13008 out.go:177]   - NO_PROXY=172.30.135.65,172.30.142.66
	W0318 11:53:22.846083   13008 proxy.go:119] fail to check proxy env: Error ip not in block
	W0318 11:53:22.846148   13008 proxy.go:119] fail to check proxy env: Error ip not in block
	I0318 11:53:22.848854   13008 out.go:177]   - NO_PROXY=172.30.135.65,172.30.142.66
	W0318 11:53:22.851436   13008 proxy.go:119] fail to check proxy env: Error ip not in block
	W0318 11:53:22.851548   13008 proxy.go:119] fail to check proxy env: Error ip not in block
	W0318 11:53:22.852727   13008 proxy.go:119] fail to check proxy env: Error ip not in block
	W0318 11:53:22.852727   13008 proxy.go:119] fail to check proxy env: Error ip not in block
	I0318 11:53:22.854816   13008 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0318 11:53:22.854816   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-747000-m03 ).state
	I0318 11:53:22.864544   13008 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0318 11:53:22.864544   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-747000-m03 ).state
	I0318 11:53:24.997017   13008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:53:24.997017   13008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:53:24.997278   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:53:24.997017   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:53:24.997443   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-747000-m03 ).networkadapters[0]).ipaddresses[0]
	I0318 11:53:24.997443   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-747000-m03 ).networkadapters[0]).ipaddresses[0]
	I0318 11:53:27.540134   13008 main.go:141] libmachine: [stdout =====>] : 172.30.129.111
	
	I0318 11:53:27.551154   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:53:27.551376   13008 sshutil.go:53] new ssh client: &{IP:172.30.129.111 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-747000-m03\id_rsa Username:docker}
	I0318 11:53:27.578099   13008 main.go:141] libmachine: [stdout =====>] : 172.30.129.111
	
	I0318 11:53:27.579462   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:53:27.579493   13008 sshutil.go:53] new ssh client: &{IP:172.30.129.111 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-747000-m03\id_rsa Username:docker}
	I0318 11:53:27.658528   13008 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (4.7939477s)
	W0318 11:53:27.658528   13008 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0318 11:53:27.670007   13008 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0318 11:53:27.804780   13008 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0318 11:53:27.804780   13008 start.go:494] detecting cgroup driver to use...
	I0318 11:53:27.804780   13008 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.949843s)
	I0318 11:53:27.804844   13008 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0318 11:53:27.847143   13008 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0318 11:53:27.879191   13008 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0318 11:53:27.897171   13008 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0318 11:53:27.911209   13008 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0318 11:53:27.941754   13008 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0318 11:53:27.972601   13008 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0318 11:53:28.005444   13008 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0318 11:53:28.036166   13008 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0318 11:53:28.069132   13008 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0318 11:53:28.101302   13008 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0318 11:53:28.132522   13008 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0318 11:53:28.162446   13008 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 11:53:28.350841   13008 ssh_runner.go:195] Run: sudo systemctl restart containerd
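The run of sed commands above rewrites containerd's config.toml so it uses the cgroupfs driver. A minimal sketch of the key substitution, applied to a scratch copy rather than the guest's /etc (the TOML snippet is illustrative, not the full file):

```shell
#!/bin/sh
# Scratch file standing in for /etc/containerd/config.toml on the guest.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
EOF
# Same substitution minikube runs over SSH: disable the systemd cgroup
# driver so containerd falls back to cgroupfs.
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$cfg"
grep 'SystemdCgroup' "$cfg"
```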
	I0318 11:53:28.382442   13008 start.go:494] detecting cgroup driver to use...
	I0318 11:53:28.399314   13008 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0318 11:53:28.434511   13008 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0318 11:53:28.468209   13008 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0318 11:53:28.514760   13008 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0318 11:53:28.549601   13008 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0318 11:53:28.584048   13008 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0318 11:53:28.642122   13008 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0318 11:53:28.666311   13008 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0318 11:53:28.715407   13008 ssh_runner.go:195] Run: which cri-dockerd
	I0318 11:53:28.731981   13008 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0318 11:53:28.748154   13008 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0318 11:53:28.791106   13008 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0318 11:53:28.983435   13008 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0318 11:53:29.149435   13008 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0318 11:53:29.149435   13008 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0318 11:53:29.191365   13008 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 11:53:29.387451   13008 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0318 11:53:31.878051   13008 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.4905816s)
	I0318 11:53:31.889810   13008 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0318 11:53:31.924945   13008 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0318 11:53:31.959564   13008 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0318 11:53:32.143097   13008 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0318 11:53:32.324791   13008 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 11:53:32.502844   13008 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0318 11:53:32.541915   13008 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0318 11:53:32.577694   13008 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 11:53:32.770383   13008 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0318 11:53:32.869973   13008 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0318 11:53:32.882191   13008 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0318 11:53:32.889954   13008 start.go:562] Will wait 60s for crictl version
	I0318 11:53:32.901493   13008 ssh_runner.go:195] Run: which crictl
	I0318 11:53:32.919237   13008 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0318 11:53:32.990422   13008 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  25.0.4
	RuntimeApiVersion:  v1
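The "Will wait 60s for socket path /var/run/cri-dockerd.sock" step above amounts to polling the path with stat until it appears or the deadline passes. A sketch against a scratch path (the background `touch` stands in for cri-dockerd creating its socket):

```shell
#!/bin/sh
# Scratch path standing in for /var/run/cri-dockerd.sock.
sock="$(mktemp -d)/cri-dockerd.sock"
( sleep 1; touch "$sock" ) &   # stand-in for cri-dockerd coming up
deadline=$(( $(date +%s) + 60 ))
status=timeout
while [ "$(date +%s)" -lt "$deadline" ]; do
  # Same probe as the log: stat the path until it exists.
  if stat "$sock" >/dev/null 2>&1; then status=ready; break; fi
  sleep 0.2
done
echo "socket $status"
```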
	I0318 11:53:33.001399   13008 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0318 11:53:33.040941   13008 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0318 11:53:33.078865   13008 out.go:204] * Preparing Kubernetes v1.28.4 on Docker 25.0.4 ...
	I0318 11:53:33.085280   13008 out.go:177]   - env NO_PROXY=172.30.135.65
	I0318 11:53:33.088523   13008 out.go:177]   - env NO_PROXY=172.30.135.65,172.30.142.66
	I0318 11:53:33.090716   13008 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0318 11:53:33.095911   13008 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0318 11:53:33.095911   13008 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0318 11:53:33.095911   13008 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0318 11:53:33.095911   13008 ip.go:207] Found interface: {Index:9 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:93:df:b5 Flags:up|broadcast|multicast|running}
	I0318 11:53:33.099560   13008 ip.go:210] interface addr: fe80::572b:6b1d:9130:e88b/64
	I0318 11:53:33.099560   13008 ip.go:210] interface addr: 172.30.128.1/20
	I0318 11:53:33.110573   13008 ssh_runner.go:195] Run: grep 172.30.128.1	host.minikube.internal$ /etc/hosts
	I0318 11:53:33.117142   13008 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.30.128.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
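The /etc/hosts update above drops any stale `host.minikube.internal` entry and appends the current gateway IP in one pipeline. Sketched here against a scratch file instead of the guest's real /etc/hosts (the stale 10.0.0.5 entry is hypothetical):

```shell
#!/bin/sh
hosts=$(mktemp)
printf '127.0.0.1\tlocalhost\n10.0.0.5\thost.minikube.internal\n' > "$hosts"
ip="172.30.128.1"
tab=$(printf '\t')
# Same grep -v / echo pipeline minikube runs on the guest: strip the old
# mapping, then append the fresh one.
{ grep -v "${tab}host.minikube.internal\$" "$hosts"; printf '%s\thost.minikube.internal\n' "$ip"; } > "$hosts.new"
mv "$hosts.new" "$hosts"
cat "$hosts"
```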
	I0318 11:53:33.135644   13008 mustload.go:65] Loading cluster: ha-747000
	I0318 11:53:33.136250   13008 config.go:182] Loaded profile config "ha-747000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 11:53:33.136589   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-747000 ).state
	I0318 11:53:35.123341   13008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:53:35.123341   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:53:35.123341   13008 host.go:66] Checking if "ha-747000" exists ...
	I0318 11:53:35.134481   13008 certs.go:68] Setting up C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-747000 for IP: 172.30.129.111
	I0318 11:53:35.134481   13008 certs.go:194] generating shared ca certs ...
	I0318 11:53:35.134481   13008 certs.go:226] acquiring lock for ca certs: {Name:mk09ff4ada22228900e1815c250154c7d8d76854 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 11:53:35.135633   13008 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.key
	I0318 11:53:35.135731   13008 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.key
	I0318 11:53:35.135731   13008 certs.go:256] generating profile certs ...
	I0318 11:53:35.136790   13008 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-747000\client.key
	I0318 11:53:35.137043   13008 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-747000\apiserver.key.ebeb1e18
	I0318 11:53:35.137201   13008 crypto.go:68] Generating cert C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-747000\apiserver.crt.ebeb1e18 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.30.135.65 172.30.142.66 172.30.129.111 172.30.143.254]
	I0318 11:53:35.258911   13008 crypto.go:156] Writing cert to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-747000\apiserver.crt.ebeb1e18 ...
	I0318 11:53:35.258911   13008 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-747000\apiserver.crt.ebeb1e18: {Name:mk52d5b7a21083f1d5e9cc742d102dd7f89d63c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 11:53:35.263882   13008 crypto.go:164] Writing key to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-747000\apiserver.key.ebeb1e18 ...
	I0318 11:53:35.263882   13008 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-747000\apiserver.key.ebeb1e18: {Name:mk807ffcb99004c8a1197c4bcac2e074e702d374 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 11:53:35.265155   13008 certs.go:381] copying C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-747000\apiserver.crt.ebeb1e18 -> C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-747000\apiserver.crt
	I0318 11:53:35.269044   13008 certs.go:385] copying C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-747000\apiserver.key.ebeb1e18 -> C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-747000\apiserver.key
	I0318 11:53:35.278381   13008 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-747000\proxy-client.key
	I0318 11:53:35.278381   13008 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0318 11:53:35.279542   13008 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0318 11:53:35.279759   13008 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0318 11:53:35.279759   13008 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0318 11:53:35.279759   13008 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-747000\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0318 11:53:35.280287   13008 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-747000\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0318 11:53:35.280603   13008 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-747000\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0318 11:53:35.280696   13008 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-747000\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0318 11:53:35.281432   13008 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\13424.pem (1338 bytes)
	W0318 11:53:35.281704   13008 certs.go:480] ignoring C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\13424_empty.pem, impossibly tiny 0 bytes
	I0318 11:53:35.281928   13008 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0318 11:53:35.282213   13008 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0318 11:53:35.282453   13008 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0318 11:53:35.282587   13008 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0318 11:53:35.282587   13008 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\134242.pem (1708 bytes)
	I0318 11:53:35.283271   13008 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\134242.pem -> /usr/share/ca-certificates/134242.pem
	I0318 11:53:35.283408   13008 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0318 11:53:35.283491   13008 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\13424.pem -> /usr/share/ca-certificates/13424.pem
	I0318 11:53:35.283802   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-747000 ).state
	I0318 11:53:37.281582   13008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:53:37.281582   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:53:37.281695   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-747000 ).networkadapters[0]).ipaddresses[0]
	I0318 11:53:39.700207   13008 main.go:141] libmachine: [stdout =====>] : 172.30.135.65
	
	I0318 11:53:39.700207   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:53:39.711717   13008 sshutil.go:53] new ssh client: &{IP:172.30.135.65 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-747000\id_rsa Username:docker}
	I0318 11:53:39.811773   13008 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0318 11:53:39.818854   13008 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0318 11:53:39.852019   13008 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0318 11:53:39.859467   13008 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0318 11:53:39.888892   13008 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0318 11:53:39.895195   13008 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0318 11:53:39.923719   13008 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0318 11:53:39.931098   13008 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0318 11:53:39.965720   13008 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0318 11:53:39.973166   13008 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0318 11:53:40.004032   13008 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0318 11:53:40.010159   13008 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0318 11:53:40.029033   13008 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0318 11:53:40.074090   13008 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0318 11:53:40.119652   13008 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0318 11:53:40.167186   13008 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0318 11:53:40.212478   13008 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-747000\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0318 11:53:40.254743   13008 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-747000\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0318 11:53:40.296024   13008 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-747000\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0318 11:53:40.338317   13008 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-747000\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0318 11:53:40.382558   13008 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\134242.pem --> /usr/share/ca-certificates/134242.pem (1708 bytes)
	I0318 11:53:40.423049   13008 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0318 11:53:40.462810   13008 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\13424.pem --> /usr/share/ca-certificates/13424.pem (1338 bytes)
	I0318 11:53:40.503040   13008 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0318 11:53:40.531746   13008 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0318 11:53:40.556686   13008 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0318 11:53:40.595126   13008 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0318 11:53:40.624635   13008 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0318 11:53:40.652628   13008 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0318 11:53:40.683323   13008 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0318 11:53:40.723736   13008 ssh_runner.go:195] Run: openssl version
	I0318 11:53:40.743604   13008 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0318 11:53:40.773281   13008 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0318 11:53:40.779866   13008 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 18 11:07 /usr/share/ca-certificates/minikubeCA.pem
	I0318 11:53:40.792465   13008 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0318 11:53:40.812306   13008 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0318 11:53:40.841284   13008 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13424.pem && ln -fs /usr/share/ca-certificates/13424.pem /etc/ssl/certs/13424.pem"
	I0318 11:53:40.871415   13008 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13424.pem
	I0318 11:53:40.878613   13008 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 18 11:25 /usr/share/ca-certificates/13424.pem
	I0318 11:53:40.892455   13008 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13424.pem
	I0318 11:53:40.911639   13008 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13424.pem /etc/ssl/certs/51391683.0"
	I0318 11:53:40.942757   13008 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/134242.pem && ln -fs /usr/share/ca-certificates/134242.pem /etc/ssl/certs/134242.pem"
	I0318 11:53:40.973725   13008 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/134242.pem
	I0318 11:53:40.976524   13008 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 18 11:25 /usr/share/ca-certificates/134242.pem
	I0318 11:53:40.991869   13008 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/134242.pem
	I0318 11:53:41.011462   13008 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/134242.pem /etc/ssl/certs/3ec20f2e.0"
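The symlink commands above (b5213941.0, 51391683.0, 3ec20f2e.0) come from OpenSSL's trust-directory convention: each CA is looked up by a `<subject-hash>.0` name, where the hash is what `openssl x509 -hash -noout` prints. A sketch with a throwaway self-signed cert, so the hash value is whatever this cert's subject yields, not the values in the log:

```shell
#!/bin/sh
dir=$(mktemp -d)
# Throwaway CA cert; subject name is illustrative.
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=minikubeCA-demo" \
  -keyout "$dir/ca.key" -out "$dir/ca.pem" -days 1 2>/dev/null
hash=$(openssl x509 -hash -noout -in "$dir/ca.pem")
# Trust directories resolve CAs via <subject-hash>.0 symlinks.
ln -fs "$dir/ca.pem" "$dir/$hash.0"
ls "$dir/$hash.0"
```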
	I0318 11:53:41.043154   13008 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0318 11:53:41.048678   13008 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0318 11:53:41.049092   13008 kubeadm.go:928] updating node {m03 172.30.129.111 8443 v1.28.4 docker true true} ...
	I0318 11:53:41.049324   13008 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-747000-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.30.129.111
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:ha-747000 Namespace:default APIServerHAVIP:172.30.143.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0318 11:53:41.049415   13008 kube-vip.go:111] generating kube-vip config ...
	I0318 11:53:41.059575   13008 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0318 11:53:41.083984   13008 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0318 11:53:41.084717   13008 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.30.143.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0318 11:53:41.097118   13008 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0318 11:53:41.112801   13008 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.28.4: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.28.4': No such file or directory
	
	Initiating transfer...
	I0318 11:53:41.125170   13008 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.28.4
	I0318 11:53:41.141666   13008 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubelet.sha256
	I0318 11:53:41.141666   13008 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubeadm.sha256
	I0318 11:53:41.141666   13008 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubeadm -> /var/lib/minikube/binaries/v1.28.4/kubeadm
	I0318 11:53:41.141666   13008 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl.sha256
	I0318 11:53:41.142372   13008 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubectl -> /var/lib/minikube/binaries/v1.28.4/kubectl
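The `?checksum=file:...sha256` suffix on the URLs above means each binary is only trusted if its SHA-256 matches the digest published alongside it. A minimal sketch of that verification step (the real check happens inside minikube's download library; `verify_sha256` is a hypothetical stand-in):

```shell
# Stand-in for the checksum pinning implied by the
# "?checksum=file:...kubelet.sha256" URLs above: accept the
# downloaded file only if its SHA-256 matches the published digest.
verify_sha256() {
  expected="$1"; file="$2"
  actual=$(sha256sum "$file" | awk '{print $1}')
  [ "$expected" = "$actual" ]
}
```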
	I0318 11:53:41.158484   13008 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 11:53:41.159097   13008 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubectl
	I0318 11:53:41.159097   13008 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubeadm
	I0318 11:53:41.177832   13008 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubectl': No such file or directory
	I0318 11:53:41.177832   13008 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubeadm': No such file or directory
	I0318 11:53:41.177832   13008 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubelet -> /var/lib/minikube/binaries/v1.28.4/kubelet
	I0318 11:53:41.179459   13008 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubeadm --> /var/lib/minikube/binaries/v1.28.4/kubeadm (49102848 bytes)
	I0318 11:53:41.179575   13008 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubectl --> /var/lib/minikube/binaries/v1.28.4/kubectl (49885184 bytes)
	I0318 11:53:41.191996   13008 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubelet
	I0318 11:53:41.245087   13008 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubelet': No such file or directory
	I0318 11:53:41.245147   13008 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubelet --> /var/lib/minikube/binaries/v1.28.4/kubelet (110850048 bytes)
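The stat-then-scp sequence above is a cache check: each binary is transferred only when the existence probe exits non-zero. A minimal sketch of the pattern, with local `cp` standing in for the scp-over-ssh transfer and illustrative paths:

```shell
# Copy a binary to its destination only when it is not already there,
# mirroring the log's "existence check ... Process exited with status 1"
# followed by scp. cp stands in for scp; paths are illustrative.
ensure_binary() {
  src="$1"; dst="$2"
  if stat "$dst" >/dev/null 2>&1; then
    echo "cached: $dst"
  else
    mkdir -p "$(dirname "$dst")"
    cp "$src" "$dst"
    echo "transferred: $dst"
  fi
}
```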
	I0318 11:53:42.524669   13008 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0318 11:53:42.541555   13008 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (314 bytes)
	I0318 11:53:42.572244   13008 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0318 11:53:42.600545   13008 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0318 11:53:42.640394   13008 ssh_runner.go:195] Run: grep 172.30.143.254	control-plane.minikube.internal$ /etc/hosts
	I0318 11:53:42.648279   13008 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.30.143.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
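The one-liner above updates /etc/hosts idempotently: `grep -v` drops any stale `control-plane.minikube.internal` entry, the current VIP mapping is appended, and the result replaces the file via a temp copy, so re-running it never duplicates the entry. A sketch against a throwaway file (the helper name and paths are illustrative, not minikube's):

```shell
# Idempotent hosts-file update, as in the log's /bin/bash one-liner:
# strip any old control-plane.minikube.internal line, append the
# current VIP, then swap the file in from a temp copy.
update_hosts() {
  hosts="$1"; ip="$2"
  name="control-plane.minikube.internal"
  tab=$(printf '\t')
  { grep -v "${tab}${name}\$" "$hosts"; printf '%s\t%s\n' "$ip" "$name"; } > "$hosts.new"
  cp "$hosts.new" "$hosts"
}
```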
	I0318 11:53:42.679961   13008 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 11:53:42.876197   13008 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 11:53:42.908285   13008 host.go:66] Checking if "ha-747000" exists ...
	I0318 11:53:42.908446   13008 start.go:316] joinCluster: &{Name:ha-747000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-747000 Namespace:default APIServerHAVIP:172.30.143.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.30.135.65 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.30.142.66 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:172.30.129.111 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 11:53:42.909143   13008 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0318 11:53:42.909255   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-747000 ).state
	I0318 11:53:44.918478   13008 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 11:53:44.918658   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:53:44.918749   13008 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-747000 ).networkadapters[0]).ipaddresses[0]
	I0318 11:53:47.325480   13008 main.go:141] libmachine: [stdout =====>] : 172.30.135.65
	
	I0318 11:53:47.325480   13008 main.go:141] libmachine: [stderr =====>] : 
	I0318 11:53:47.335787   13008 sshutil.go:53] new ssh client: &{IP:172.30.135.65 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-747000\id_rsa Username:docker}
	I0318 11:53:47.530228   13008 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0": (4.6209417s)
	I0318 11:53:47.530341   13008 start.go:342] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:172.30.129.111 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 11:53:47.530403   13008 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token aen08i.8flpo1ytk7tnbiuh --discovery-token-ca-cert-hash sha256:0e85bffcabbe1ecc87b47edfa4897ded335ea6d7a4ea5ae94c3523fb207c8760 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-747000-m03 --control-plane --apiserver-advertise-address=172.30.129.111 --apiserver-bind-port=8443"
	I0318 11:54:31.007118   13008 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token aen08i.8flpo1ytk7tnbiuh --discovery-token-ca-cert-hash sha256:0e85bffcabbe1ecc87b47edfa4897ded335ea6d7a4ea5ae94c3523fb207c8760 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-747000-m03 --control-plane --apiserver-advertise-address=172.30.129.111 --apiserver-bind-port=8443": (43.4763885s)
	I0318 11:54:31.007118   13008 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0318 11:54:31.765063   13008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-747000-m03 minikube.k8s.io/updated_at=2024_03_18T11_54_31_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=16bdcbec856cf730004e5bed78d1b7625f13388a minikube.k8s.io/name=ha-747000 minikube.k8s.io/primary=false
	I0318 11:54:31.930820   13008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-747000-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0318 11:54:32.123067   13008 start.go:318] duration metric: took 49.2142517s to joinCluster
	I0318 11:54:32.123130   13008 start.go:234] Will wait 6m0s for node &{Name:m03 IP:172.30.129.111 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 11:54:32.125981   13008 out.go:177] * Verifying Kubernetes components...
	I0318 11:54:32.124017   13008 config.go:182] Loaded profile config "ha-747000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 11:54:32.141976   13008 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 11:54:32.473595   13008 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 11:54:32.497210   13008 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	I0318 11:54:32.497889   13008 kapi.go:59] client config for ha-747000: &rest.Config{Host:"https://172.30.143.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\ha-747000\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\ha-747000\\client.key", CAFile:"C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1e8b2e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0318 11:54:32.497997   13008 kubeadm.go:477] Overriding stale ClientConfig host https://172.30.143.254:8443 with https://172.30.135.65:8443
	I0318 11:54:32.498642   13008 node_ready.go:35] waiting up to 6m0s for node "ha-747000-m03" to be "Ready" ...
	I0318 11:54:32.498869   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/nodes/ha-747000-m03
	I0318 11:54:32.498869   13008 round_trippers.go:469] Request Headers:
	I0318 11:54:32.498927   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:54:32.498927   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:54:32.514135   13008 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0318 11:54:33.011606   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/nodes/ha-747000-m03
	I0318 11:54:33.011606   13008 round_trippers.go:469] Request Headers:
	I0318 11:54:33.011606   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:54:33.011606   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:54:33.017139   13008 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 11:54:33.503217   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/nodes/ha-747000-m03
	I0318 11:54:33.503217   13008 round_trippers.go:469] Request Headers:
	I0318 11:54:33.503217   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:54:33.503217   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:54:33.509209   13008 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 11:54:34.007071   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/nodes/ha-747000-m03
	I0318 11:54:34.007071   13008 round_trippers.go:469] Request Headers:
	I0318 11:54:34.007071   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:54:34.007331   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:54:34.015287   13008 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0318 11:54:34.513214   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/nodes/ha-747000-m03
	I0318 11:54:34.513214   13008 round_trippers.go:469] Request Headers:
	I0318 11:54:34.513214   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:54:34.513214   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:54:34.523353   13008 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0318 11:54:34.523931   13008 node_ready.go:53] node "ha-747000-m03" has status "Ready":"False"
	I0318 11:54:35.003240   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/nodes/ha-747000-m03
	I0318 11:54:35.003270   13008 round_trippers.go:469] Request Headers:
	I0318 11:54:35.003270   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:54:35.003270   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:54:35.007225   13008 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 11:54:35.511195   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/nodes/ha-747000-m03
	I0318 11:54:35.511195   13008 round_trippers.go:469] Request Headers:
	I0318 11:54:35.511195   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:54:35.511195   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:54:35.513976   13008 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 11:54:36.011131   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/nodes/ha-747000-m03
	I0318 11:54:36.011131   13008 round_trippers.go:469] Request Headers:
	I0318 11:54:36.011131   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:54:36.011220   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:54:36.016886   13008 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0318 11:54:36.505219   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/nodes/ha-747000-m03
	I0318 11:54:36.505219   13008 round_trippers.go:469] Request Headers:
	I0318 11:54:36.505219   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:54:36.505219   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:54:36.505808   13008 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0318 11:54:37.013822   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/nodes/ha-747000-m03
	I0318 11:54:37.013822   13008 round_trippers.go:469] Request Headers:
	I0318 11:54:37.013822   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:54:37.013822   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:54:37.016052   13008 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 11:54:37.023175   13008 node_ready.go:53] node "ha-747000-m03" has status "Ready":"False"
	I0318 11:54:37.514949   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/nodes/ha-747000-m03
	I0318 11:54:37.514949   13008 round_trippers.go:469] Request Headers:
	I0318 11:54:37.514949   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:54:37.514949   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:54:37.532465   13008 round_trippers.go:574] Response Status: 200 OK in 17 milliseconds
	I0318 11:54:38.018225   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/nodes/ha-747000-m03
	I0318 11:54:38.018290   13008 round_trippers.go:469] Request Headers:
	I0318 11:54:38.018322   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:54:38.018322   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:54:38.023731   13008 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0318 11:54:38.509434   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/nodes/ha-747000-m03
	I0318 11:54:38.509434   13008 round_trippers.go:469] Request Headers:
	I0318 11:54:38.509434   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:54:38.509434   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:54:38.516601   13008 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0318 11:54:39.013096   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/nodes/ha-747000-m03
	I0318 11:54:39.013096   13008 round_trippers.go:469] Request Headers:
	I0318 11:54:39.013096   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:54:39.013096   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:54:39.020046   13008 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0318 11:54:39.499191   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/nodes/ha-747000-m03
	I0318 11:54:39.499191   13008 round_trippers.go:469] Request Headers:
	I0318 11:54:39.499191   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:54:39.499191   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:54:39.502380   13008 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 11:54:39.504621   13008 node_ready.go:53] node "ha-747000-m03" has status "Ready":"False"
	I0318 11:54:40.003665   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/nodes/ha-747000-m03
	I0318 11:54:40.003665   13008 round_trippers.go:469] Request Headers:
	I0318 11:54:40.003665   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:54:40.003665   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:54:40.009064   13008 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 11:54:40.500415   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/nodes/ha-747000-m03
	I0318 11:54:40.500591   13008 round_trippers.go:469] Request Headers:
	I0318 11:54:40.500591   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:54:40.500643   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:54:40.510756   13008 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0318 11:54:41.003216   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/nodes/ha-747000-m03
	I0318 11:54:41.003216   13008 round_trippers.go:469] Request Headers:
	I0318 11:54:41.003435   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:54:41.003435   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:54:41.004143   13008 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0318 11:54:41.522919   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/nodes/ha-747000-m03
	I0318 11:54:41.522919   13008 round_trippers.go:469] Request Headers:
	I0318 11:54:41.522919   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:54:41.522919   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:54:41.523519   13008 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0318 11:54:41.528435   13008 node_ready.go:53] node "ha-747000-m03" has status "Ready":"False"
	I0318 11:54:42.017612   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/nodes/ha-747000-m03
	I0318 11:54:42.017869   13008 round_trippers.go:469] Request Headers:
	I0318 11:54:42.017869   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:54:42.017869   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:54:42.022079   13008 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 11:54:42.515495   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/nodes/ha-747000-m03
	I0318 11:54:42.515495   13008 round_trippers.go:469] Request Headers:
	I0318 11:54:42.515495   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:54:42.515495   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:54:42.552471   13008 round_trippers.go:574] Response Status: 200 OK in 36 milliseconds
	I0318 11:54:43.020710   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/nodes/ha-747000-m03
	I0318 11:54:43.020710   13008 round_trippers.go:469] Request Headers:
	I0318 11:54:43.020868   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:54:43.021018   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:54:43.026753   13008 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 11:54:43.027757   13008 node_ready.go:49] node "ha-747000-m03" has status "Ready":"True"
	I0318 11:54:43.027757   13008 node_ready.go:38] duration metric: took 10.528975s for node "ha-747000-m03" to be "Ready" ...
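The loop above is a plain readiness poll: GET the node roughly every 500 ms until its status reports Ready or the 6m0s budget runs out (here it took ~10.5 s). The shape of that wait, with the API call abstracted to any probe command (`wait_ready` and its timings are illustrative, not minikube code):

```shell
# Poll a probe command until it succeeds or a deadline (in seconds)
# passes, mirroring the ~500 ms node-Ready polling loop in the log.
# Usage: wait_ready <budget-seconds> <probe-command> [args...]
wait_ready() {
  budget="$1"; shift
  deadline=$(( $(date +%s) + budget ))
  until "$@"; do
    [ "$(date +%s)" -lt "$deadline" ] || return 1
    sleep 0.5
  done
}
```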
	I0318 11:54:43.027757   13008 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 11:54:43.028606   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/namespaces/kube-system/pods
	I0318 11:54:43.028606   13008 round_trippers.go:469] Request Headers:
	I0318 11:54:43.028606   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:54:43.028606   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:54:43.033857   13008 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 11:54:43.046839   13008 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-dhl7r" in "kube-system" namespace to be "Ready" ...
	I0318 11:54:43.046839   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-dhl7r
	I0318 11:54:43.046839   13008 round_trippers.go:469] Request Headers:
	I0318 11:54:43.046839   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:54:43.046839   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:54:43.051163   13008 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 11:54:43.052204   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/nodes/ha-747000
	I0318 11:54:43.052204   13008 round_trippers.go:469] Request Headers:
	I0318 11:54:43.052204   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:54:43.052204   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:54:43.052480   13008 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0318 11:54:43.056603   13008 pod_ready.go:92] pod "coredns-5dd5756b68-dhl7r" in "kube-system" namespace has status "Ready":"True"
	I0318 11:54:43.056603   13008 pod_ready.go:81] duration metric: took 9.7633ms for pod "coredns-5dd5756b68-dhl7r" in "kube-system" namespace to be "Ready" ...
	I0318 11:54:43.056603   13008 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-gm84s" in "kube-system" namespace to be "Ready" ...
	I0318 11:54:43.057138   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-gm84s
	I0318 11:54:43.057138   13008 round_trippers.go:469] Request Headers:
	I0318 11:54:43.057138   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:54:43.057138   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:54:43.057420   13008 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0318 11:54:43.062335   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/nodes/ha-747000
	I0318 11:54:43.062335   13008 round_trippers.go:469] Request Headers:
	I0318 11:54:43.062335   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:54:43.062335   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:54:43.066952   13008 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 11:54:43.067939   13008 pod_ready.go:92] pod "coredns-5dd5756b68-gm84s" in "kube-system" namespace has status "Ready":"True"
	I0318 11:54:43.067939   13008 pod_ready.go:81] duration metric: took 11.3361ms for pod "coredns-5dd5756b68-gm84s" in "kube-system" namespace to be "Ready" ...
	I0318 11:54:43.067939   13008 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-747000" in "kube-system" namespace to be "Ready" ...
	I0318 11:54:43.067939   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/namespaces/kube-system/pods/etcd-ha-747000
	I0318 11:54:43.067939   13008 round_trippers.go:469] Request Headers:
	I0318 11:54:43.067939   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:54:43.067939   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:54:43.071885   13008 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 11:54:43.072699   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/nodes/ha-747000
	I0318 11:54:43.072784   13008 round_trippers.go:469] Request Headers:
	I0318 11:54:43.072784   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:54:43.072784   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:54:43.090799   13008 round_trippers.go:574] Response Status: 200 OK in 18 milliseconds
	I0318 11:54:43.091913   13008 pod_ready.go:92] pod "etcd-ha-747000" in "kube-system" namespace has status "Ready":"True"
	I0318 11:54:43.091913   13008 pod_ready.go:81] duration metric: took 23.9742ms for pod "etcd-ha-747000" in "kube-system" namespace to be "Ready" ...
	I0318 11:54:43.091913   13008 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-747000-m02" in "kube-system" namespace to be "Ready" ...
	I0318 11:54:43.091913   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/namespaces/kube-system/pods/etcd-ha-747000-m02
	I0318 11:54:43.091913   13008 round_trippers.go:469] Request Headers:
	I0318 11:54:43.091913   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:54:43.091913   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:54:43.092554   13008 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0318 11:54:43.097183   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/nodes/ha-747000-m02
	I0318 11:54:43.097247   13008 round_trippers.go:469] Request Headers:
	I0318 11:54:43.097247   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:54:43.097247   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:54:43.097501   13008 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0318 11:54:43.102040   13008 pod_ready.go:92] pod "etcd-ha-747000-m02" in "kube-system" namespace has status "Ready":"True"
	I0318 11:54:43.102149   13008 pod_ready.go:81] duration metric: took 10.1749ms for pod "etcd-ha-747000-m02" in "kube-system" namespace to be "Ready" ...
	I0318 11:54:43.102149   13008 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-747000-m03" in "kube-system" namespace to be "Ready" ...
	I0318 11:54:43.231312   13008 request.go:629] Waited for 129.1625ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.135.65:8443/api/v1/namespaces/kube-system/pods/etcd-ha-747000-m03
	I0318 11:54:43.231566   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/namespaces/kube-system/pods/etcd-ha-747000-m03
	I0318 11:54:43.231654   13008 round_trippers.go:469] Request Headers:
	I0318 11:54:43.231654   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:54:43.231691   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:54:43.232416   13008 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0318 11:54:43.431865   13008 request.go:629] Waited for 194.4979ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.135.65:8443/api/v1/nodes/ha-747000-m03
	I0318 11:54:43.432173   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/nodes/ha-747000-m03
	I0318 11:54:43.432173   13008 round_trippers.go:469] Request Headers:
	I0318 11:54:43.432173   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:54:43.432173   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:54:43.432913   13008 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0318 11:54:43.624336   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/namespaces/kube-system/pods/etcd-ha-747000-m03
	I0318 11:54:43.624510   13008 round_trippers.go:469] Request Headers:
	I0318 11:54:43.624543   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:54:43.624573   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:54:43.636373   13008 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0318 11:54:43.830872   13008 request.go:629] Waited for 193.5891ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.135.65:8443/api/v1/nodes/ha-747000-m03
	I0318 11:54:43.831130   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/nodes/ha-747000-m03
	I0318 11:54:43.831130   13008 round_trippers.go:469] Request Headers:
	I0318 11:54:43.831130   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:54:43.831130   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:54:43.831870   13008 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0318 11:54:44.103679   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/namespaces/kube-system/pods/etcd-ha-747000-m03
	I0318 11:54:44.103679   13008 round_trippers.go:469] Request Headers:
	I0318 11:54:44.103679   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:54:44.103679   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:54:44.103998   13008 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0318 11:54:44.231286   13008 request.go:629] Waited for 121.1658ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.135.65:8443/api/v1/nodes/ha-747000-m03
	I0318 11:54:44.231469   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/nodes/ha-747000-m03
	I0318 11:54:44.231469   13008 round_trippers.go:469] Request Headers:
	I0318 11:54:44.231469   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:54:44.231469   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:54:44.235214   13008 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0318 11:54:44.611004   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/namespaces/kube-system/pods/etcd-ha-747000-m03
	I0318 11:54:44.611004   13008 round_trippers.go:469] Request Headers:
	I0318 11:54:44.611004   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:54:44.611004   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:54:44.611747   13008 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0318 11:54:44.633673   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/nodes/ha-747000-m03
	I0318 11:54:44.633673   13008 round_trippers.go:469] Request Headers:
	I0318 11:54:44.633673   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:54:44.633673   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:54:44.634406   13008 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0318 11:54:45.114017   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/namespaces/kube-system/pods/etcd-ha-747000-m03
	I0318 11:54:45.114017   13008 round_trippers.go:469] Request Headers:
	I0318 11:54:45.114017   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:54:45.114017   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:54:45.114771   13008 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0318 11:54:45.120782   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/nodes/ha-747000-m03
	I0318 11:54:45.120782   13008 round_trippers.go:469] Request Headers:
	I0318 11:54:45.120782   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:54:45.120782   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:54:45.125619   13008 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 11:54:45.126420   13008 pod_ready.go:92] pod "etcd-ha-747000-m03" in "kube-system" namespace has status "Ready":"True"
	I0318 11:54:45.126509   13008 pod_ready.go:81] duration metric: took 2.0243458s for pod "etcd-ha-747000-m03" in "kube-system" namespace to be "Ready" ...
	I0318 11:54:45.126583   13008 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-747000" in "kube-system" namespace to be "Ready" ...
	I0318 11:54:45.225424   13008 request.go:629] Waited for 98.785ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.135.65:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-747000
	I0318 11:54:45.225598   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-747000
	I0318 11:54:45.225666   13008 round_trippers.go:469] Request Headers:
	I0318 11:54:45.225695   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:54:45.225695   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:54:45.226475   13008 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0318 11:54:45.428979   13008 request.go:629] Waited for 197.166ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.135.65:8443/api/v1/nodes/ha-747000
	I0318 11:54:45.429049   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/nodes/ha-747000
	I0318 11:54:45.429049   13008 round_trippers.go:469] Request Headers:
	I0318 11:54:45.429049   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:54:45.429049   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:54:45.429689   13008 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0318 11:54:45.434872   13008 pod_ready.go:92] pod "kube-apiserver-ha-747000" in "kube-system" namespace has status "Ready":"True"
	I0318 11:54:45.434872   13008 pod_ready.go:81] duration metric: took 308.2866ms for pod "kube-apiserver-ha-747000" in "kube-system" namespace to be "Ready" ...
	I0318 11:54:45.434872   13008 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-747000-m02" in "kube-system" namespace to be "Ready" ...
	I0318 11:54:45.625016   13008 request.go:629] Waited for 189.9507ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.135.65:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-747000-m02
	I0318 11:54:45.625111   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-747000-m02
	I0318 11:54:45.625111   13008 round_trippers.go:469] Request Headers:
	I0318 11:54:45.625111   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:54:45.625111   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:54:45.630614   13008 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0318 11:54:45.829238   13008 request.go:629] Waited for 197.4448ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.135.65:8443/api/v1/nodes/ha-747000-m02
	I0318 11:54:45.829496   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/nodes/ha-747000-m02
	I0318 11:54:45.829547   13008 round_trippers.go:469] Request Headers:
	I0318 11:54:45.829581   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:54:45.829581   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:54:45.837199   13008 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0318 11:54:45.837736   13008 pod_ready.go:92] pod "kube-apiserver-ha-747000-m02" in "kube-system" namespace has status "Ready":"True"
	I0318 11:54:45.838274   13008 pod_ready.go:81] duration metric: took 402.8606ms for pod "kube-apiserver-ha-747000-m02" in "kube-system" namespace to be "Ready" ...
	I0318 11:54:45.838274   13008 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-747000-m03" in "kube-system" namespace to be "Ready" ...
	I0318 11:54:46.027608   13008 request.go:629] Waited for 189.1438ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.135.65:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-747000-m03
	I0318 11:54:46.027698   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-747000-m03
	I0318 11:54:46.027805   13008 round_trippers.go:469] Request Headers:
	I0318 11:54:46.027805   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:54:46.027805   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:54:46.028052   13008 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0318 11:54:46.241720   13008 request.go:629] Waited for 207.8005ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.135.65:8443/api/v1/nodes/ha-747000-m03
	I0318 11:54:46.241720   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/nodes/ha-747000-m03
	I0318 11:54:46.242006   13008 round_trippers.go:469] Request Headers:
	I0318 11:54:46.242063   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:54:46.242130   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:54:46.242880   13008 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0318 11:54:46.427374   13008 request.go:629] Waited for 71.2914ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.135.65:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-747000-m03
	I0318 11:54:46.427374   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-747000-m03
	I0318 11:54:46.427615   13008 round_trippers.go:469] Request Headers:
	I0318 11:54:46.427615   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:54:46.427678   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:54:46.432158   13008 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 11:54:46.635779   13008 request.go:629] Waited for 200.8677ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.135.65:8443/api/v1/nodes/ha-747000-m03
	I0318 11:54:46.635894   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/nodes/ha-747000-m03
	I0318 11:54:46.635894   13008 round_trippers.go:469] Request Headers:
	I0318 11:54:46.635894   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:54:46.635894   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:54:46.636212   13008 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0318 11:54:46.839124   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-747000-m03
	I0318 11:54:46.839124   13008 round_trippers.go:469] Request Headers:
	I0318 11:54:46.839124   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:54:46.839124   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:54:46.840049   13008 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0318 11:54:47.027634   13008 request.go:629] Waited for 182.1083ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.135.65:8443/api/v1/nodes/ha-747000-m03
	I0318 11:54:47.027994   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/nodes/ha-747000-m03
	I0318 11:54:47.027994   13008 round_trippers.go:469] Request Headers:
	I0318 11:54:47.027994   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:54:47.027994   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:54:47.037154   13008 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0318 11:54:47.339437   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-747000-m03
	I0318 11:54:47.339539   13008 round_trippers.go:469] Request Headers:
	I0318 11:54:47.339539   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:54:47.339602   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:54:47.339797   13008 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0318 11:54:47.429991   13008 request.go:629] Waited for 85.0472ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.135.65:8443/api/v1/nodes/ha-747000-m03
	I0318 11:54:47.430069   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/nodes/ha-747000-m03
	I0318 11:54:47.430124   13008 round_trippers.go:469] Request Headers:
	I0318 11:54:47.430150   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:54:47.430150   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:54:47.430337   13008 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0318 11:54:47.851381   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-747000-m03
	I0318 11:54:47.851381   13008 round_trippers.go:469] Request Headers:
	I0318 11:54:47.851482   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:54:47.851482   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:54:47.851700   13008 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0318 11:54:47.857375   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/nodes/ha-747000-m03
	I0318 11:54:47.857375   13008 round_trippers.go:469] Request Headers:
	I0318 11:54:47.857375   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:54:47.857375   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:54:47.861983   13008 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 11:54:47.862886   13008 pod_ready.go:92] pod "kube-apiserver-ha-747000-m03" in "kube-system" namespace has status "Ready":"True"
	I0318 11:54:47.862886   13008 pod_ready.go:81] duration metric: took 2.024597s for pod "kube-apiserver-ha-747000-m03" in "kube-system" namespace to be "Ready" ...
	I0318 11:54:47.862886   13008 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-747000" in "kube-system" namespace to be "Ready" ...
	I0318 11:54:48.024553   13008 request.go:629] Waited for 161.4598ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.135.65:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-747000
	I0318 11:54:48.024636   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-747000
	I0318 11:54:48.024636   13008 round_trippers.go:469] Request Headers:
	I0318 11:54:48.024636   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:54:48.024636   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:54:48.029802   13008 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 11:54:48.234550   13008 request.go:629] Waited for 203.7059ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.135.65:8443/api/v1/nodes/ha-747000
	I0318 11:54:48.234733   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/nodes/ha-747000
	I0318 11:54:48.234733   13008 round_trippers.go:469] Request Headers:
	I0318 11:54:48.234810   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:54:48.234810   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:54:48.235491   13008 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0318 11:54:48.246614   13008 pod_ready.go:92] pod "kube-controller-manager-ha-747000" in "kube-system" namespace has status "Ready":"True"
	I0318 11:54:48.246674   13008 pod_ready.go:81] duration metric: took 383.7846ms for pod "kube-controller-manager-ha-747000" in "kube-system" namespace to be "Ready" ...
	I0318 11:54:48.246747   13008 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-747000-m02" in "kube-system" namespace to be "Ready" ...
	I0318 11:54:48.430554   13008 request.go:629] Waited for 183.6822ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.135.65:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-747000-m02
	I0318 11:54:48.430776   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-747000-m02
	I0318 11:54:48.430776   13008 round_trippers.go:469] Request Headers:
	I0318 11:54:48.430914   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:54:48.430914   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:54:48.431764   13008 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0318 11:54:48.634914   13008 request.go:629] Waited for 196.8372ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.135.65:8443/api/v1/nodes/ha-747000-m02
	I0318 11:54:48.635316   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/nodes/ha-747000-m02
	I0318 11:54:48.635316   13008 round_trippers.go:469] Request Headers:
	I0318 11:54:48.635366   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:54:48.635366   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:54:48.635561   13008 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0318 11:54:48.640674   13008 pod_ready.go:92] pod "kube-controller-manager-ha-747000-m02" in "kube-system" namespace has status "Ready":"True"
	I0318 11:54:48.640674   13008 pod_ready.go:81] duration metric: took 393.8819ms for pod "kube-controller-manager-ha-747000-m02" in "kube-system" namespace to be "Ready" ...
	I0318 11:54:48.640674   13008 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-747000-m03" in "kube-system" namespace to be "Ready" ...
	I0318 11:54:48.828434   13008 request.go:629] Waited for 187.5478ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.135.65:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-747000-m03
	I0318 11:54:48.828646   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-747000-m03
	I0318 11:54:48.828717   13008 round_trippers.go:469] Request Headers:
	I0318 11:54:48.828717   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:54:48.828836   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:54:48.829146   13008 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0318 11:54:49.022530   13008 request.go:629] Waited for 186.0354ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.135.65:8443/api/v1/nodes/ha-747000-m03
	I0318 11:54:49.022613   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/nodes/ha-747000-m03
	I0318 11:54:49.022613   13008 round_trippers.go:469] Request Headers:
	I0318 11:54:49.022613   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:54:49.022613   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:54:49.029435   13008 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0318 11:54:49.029435   13008 pod_ready.go:92] pod "kube-controller-manager-ha-747000-m03" in "kube-system" namespace has status "Ready":"True"
	I0318 11:54:49.030479   13008 pod_ready.go:81] duration metric: took 389.802ms for pod "kube-controller-manager-ha-747000-m03" in "kube-system" namespace to be "Ready" ...
	I0318 11:54:49.030479   13008 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-lp986" in "kube-system" namespace to be "Ready" ...
	I0318 11:54:49.229184   13008 request.go:629] Waited for 198.5176ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.135.65:8443/api/v1/namespaces/kube-system/pods/kube-proxy-lp986
	I0318 11:54:49.229388   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/namespaces/kube-system/pods/kube-proxy-lp986
	I0318 11:54:49.229388   13008 round_trippers.go:469] Request Headers:
	I0318 11:54:49.229388   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:54:49.229388   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:54:49.235671   13008 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0318 11:54:49.428316   13008 request.go:629] Waited for 191.5773ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.135.65:8443/api/v1/nodes/ha-747000
	I0318 11:54:49.428497   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/nodes/ha-747000
	I0318 11:54:49.428612   13008 round_trippers.go:469] Request Headers:
	I0318 11:54:49.428612   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:54:49.428612   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:54:49.429247   13008 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0318 11:54:49.434530   13008 pod_ready.go:92] pod "kube-proxy-lp986" in "kube-system" namespace has status "Ready":"True"
	I0318 11:54:49.434530   13008 pod_ready.go:81] duration metric: took 404.0483ms for pod "kube-proxy-lp986" in "kube-system" namespace to be "Ready" ...
	I0318 11:54:49.434609   13008 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-njpzx" in "kube-system" namespace to be "Ready" ...
	I0318 11:54:49.623221   13008 request.go:629] Waited for 188.3488ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.135.65:8443/api/v1/namespaces/kube-system/pods/kube-proxy-njpzx
	I0318 11:54:49.623410   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/namespaces/kube-system/pods/kube-proxy-njpzx
	I0318 11:54:49.623410   13008 round_trippers.go:469] Request Headers:
	I0318 11:54:49.623410   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:54:49.623410   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:54:49.628534   13008 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 11:54:49.833923   13008 request.go:629] Waited for 203.2399ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.135.65:8443/api/v1/nodes/ha-747000-m03
	I0318 11:54:49.834091   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/nodes/ha-747000-m03
	I0318 11:54:49.834243   13008 round_trippers.go:469] Request Headers:
	I0318 11:54:49.834243   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:54:49.834243   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:54:49.834455   13008 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0318 11:54:49.839013   13008 pod_ready.go:92] pod "kube-proxy-njpzx" in "kube-system" namespace has status "Ready":"True"
	I0318 11:54:49.839013   13008 pod_ready.go:81] duration metric: took 404.4015ms for pod "kube-proxy-njpzx" in "kube-system" namespace to be "Ready" ...
	I0318 11:54:49.839552   13008 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-zzg5q" in "kube-system" namespace to be "Ready" ...
	I0318 11:54:50.028405   13008 request.go:629] Waited for 188.5068ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.135.65:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zzg5q
	I0318 11:54:50.028510   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zzg5q
	I0318 11:54:50.028510   13008 round_trippers.go:469] Request Headers:
	I0318 11:54:50.028510   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:54:50.028510   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:54:50.029035   13008 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0318 11:54:50.226949   13008 request.go:629] Waited for 191.8899ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.135.65:8443/api/v1/nodes/ha-747000-m02
	I0318 11:54:50.227463   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/nodes/ha-747000-m02
	I0318 11:54:50.227463   13008 round_trippers.go:469] Request Headers:
	I0318 11:54:50.227463   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:54:50.227463   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:54:50.233480   13008 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 11:54:50.234253   13008 pod_ready.go:92] pod "kube-proxy-zzg5q" in "kube-system" namespace has status "Ready":"True"
	I0318 11:54:50.234253   13008 pod_ready.go:81] duration metric: took 394.6973ms for pod "kube-proxy-zzg5q" in "kube-system" namespace to be "Ready" ...
	I0318 11:54:50.234253   13008 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-747000" in "kube-system" namespace to be "Ready" ...
	I0318 11:54:50.429273   13008 request.go:629] Waited for 195.0194ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.135.65:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-747000
	I0318 11:54:50.429497   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-747000
	I0318 11:54:50.429497   13008 round_trippers.go:469] Request Headers:
	I0318 11:54:50.429497   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:54:50.429571   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:54:50.434623   13008 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 11:54:50.633159   13008 request.go:629] Waited for 197.8852ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.135.65:8443/api/v1/nodes/ha-747000
	I0318 11:54:50.633159   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/nodes/ha-747000
	I0318 11:54:50.633159   13008 round_trippers.go:469] Request Headers:
	I0318 11:54:50.633159   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:54:50.633159   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:54:50.635090   13008 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0318 11:54:50.639392   13008 pod_ready.go:92] pod "kube-scheduler-ha-747000" in "kube-system" namespace has status "Ready":"True"
	I0318 11:54:50.639460   13008 pod_ready.go:81] duration metric: took 405.204ms for pod "kube-scheduler-ha-747000" in "kube-system" namespace to be "Ready" ...
	I0318 11:54:50.639460   13008 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-747000-m02" in "kube-system" namespace to be "Ready" ...
	I0318 11:54:50.832363   13008 request.go:629] Waited for 192.4963ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.135.65:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-747000-m02
	I0318 11:54:50.832592   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-747000-m02
	I0318 11:54:50.832630   13008 round_trippers.go:469] Request Headers:
	I0318 11:54:50.832630   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:54:50.832666   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:54:50.835450   13008 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 11:54:51.028759   13008 request.go:629] Waited for 190.9432ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.135.65:8443/api/v1/nodes/ha-747000-m02
	I0318 11:54:51.029051   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/nodes/ha-747000-m02
	I0318 11:54:51.029159   13008 round_trippers.go:469] Request Headers:
	I0318 11:54:51.029159   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:54:51.029159   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:54:51.029915   13008 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0318 11:54:51.035684   13008 pod_ready.go:92] pod "kube-scheduler-ha-747000-m02" in "kube-system" namespace has status "Ready":"True"
	I0318 11:54:51.035737   13008 pod_ready.go:81] duration metric: took 396.2748ms for pod "kube-scheduler-ha-747000-m02" in "kube-system" namespace to be "Ready" ...
	I0318 11:54:51.035737   13008 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-747000-m03" in "kube-system" namespace to be "Ready" ...
	I0318 11:54:51.232476   13008 request.go:629] Waited for 196.6628ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.135.65:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-747000-m03
	I0318 11:54:51.232861   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-747000-m03
	I0318 11:54:51.232861   13008 round_trippers.go:469] Request Headers:
	I0318 11:54:51.232861   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:54:51.232861   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:54:51.233602   13008 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0318 11:54:51.422228   13008 request.go:629] Waited for 182.7479ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.135.65:8443/api/v1/nodes/ha-747000-m03
	I0318 11:54:51.422367   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/nodes/ha-747000-m03
	I0318 11:54:51.422367   13008 round_trippers.go:469] Request Headers:
	I0318 11:54:51.422418   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:54:51.422418   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:54:51.429432   13008 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0318 11:54:51.429954   13008 pod_ready.go:92] pod "kube-scheduler-ha-747000-m03" in "kube-system" namespace has status "Ready":"True"
	I0318 11:54:51.429954   13008 pod_ready.go:81] duration metric: took 394.2142ms for pod "kube-scheduler-ha-747000-m03" in "kube-system" namespace to be "Ready" ...
	I0318 11:54:51.429954   13008 pod_ready.go:38] duration metric: took 8.402135s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 11:54:51.429954   13008 api_server.go:52] waiting for apiserver process to appear ...
	I0318 11:54:51.442874   13008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 11:54:51.469704   13008 api_server.go:72] duration metric: took 19.3464309s to wait for apiserver process to appear ...
	I0318 11:54:51.469704   13008 api_server.go:88] waiting for apiserver healthz status ...
	I0318 11:54:51.469849   13008 api_server.go:253] Checking apiserver healthz at https://172.30.135.65:8443/healthz ...
	I0318 11:54:51.477537   13008 api_server.go:279] https://172.30.135.65:8443/healthz returned 200:
	ok
	I0318 11:54:51.480560   13008 round_trippers.go:463] GET https://172.30.135.65:8443/version
	I0318 11:54:51.480560   13008 round_trippers.go:469] Request Headers:
	I0318 11:54:51.480560   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:54:51.480560   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:54:51.481253   13008 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0318 11:54:51.482581   13008 api_server.go:141] control plane version: v1.28.4
	I0318 11:54:51.482617   13008 api_server.go:131] duration metric: took 12.7676ms to wait for apiserver health ...
	I0318 11:54:51.482664   13008 system_pods.go:43] waiting for kube-system pods to appear ...
	I0318 11:54:51.632100   13008 request.go:629] Waited for 148.9934ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.135.65:8443/api/v1/namespaces/kube-system/pods
	I0318 11:54:51.632184   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/namespaces/kube-system/pods
	I0318 11:54:51.632184   13008 round_trippers.go:469] Request Headers:
	I0318 11:54:51.632184   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:54:51.632184   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:54:51.637067   13008 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 11:54:51.651102   13008 system_pods.go:59] 24 kube-system pods found
	I0318 11:54:51.651102   13008 system_pods.go:61] "coredns-5dd5756b68-dhl7r" [26fc5ceb-a1e3-46f5-b148-40f5312b8628] Running
	I0318 11:54:51.651102   13008 system_pods.go:61] "coredns-5dd5756b68-gm84s" [227e2616-d98e-4ca1-aa66-7e2ed6096d33] Running
	I0318 11:54:51.651102   13008 system_pods.go:61] "etcd-ha-747000" [ac7df0ab-4112-40ce-9646-08b806025178] Running
	I0318 11:54:51.651102   13008 system_pods.go:61] "etcd-ha-747000-m02" [0fb29c85-0d84-4aa9-8f40-041b128f3b62] Running
	I0318 11:54:51.651102   13008 system_pods.go:61] "etcd-ha-747000-m03" [71309fb0-67e2-4098-a267-e1677a6b6a5b] Running
	I0318 11:54:51.651102   13008 system_pods.go:61] "kindnet-82v6x" [7f008b78-2eb1-434c-9867-ccd5216a7ed5] Running
	I0318 11:54:51.651102   13008 system_pods.go:61] "kindnet-czdhw" [b8b10f3a-4cc3-43d0-a192-438de10116b1] Running
	I0318 11:54:51.651102   13008 system_pods.go:61] "kindnet-zt7pd" [76ca473c-8611-410d-9f13-52d1bfcec7f6] Running
	I0318 11:54:51.651102   13008 system_pods.go:61] "kube-apiserver-ha-747000" [acfdbb0b-15ce-407b-a8c9-bab5477bb1ae] Running
	I0318 11:54:51.651102   13008 system_pods.go:61] "kube-apiserver-ha-747000-m02" [f3b62c80-17e7-435a-9ffd-a2b727a702cd] Running
	I0318 11:54:51.651102   13008 system_pods.go:61] "kube-apiserver-ha-747000-m03" [88bb5439-b862-4458-b098-b91a6ea2b487] Running
	I0318 11:54:51.651102   13008 system_pods.go:61] "kube-controller-manager-ha-747000" [7a40710c-8acc-4457-93ba-a31a4073be21] Running
	I0318 11:54:51.651102   13008 system_pods.go:61] "kube-controller-manager-ha-747000-m02" [e191ed3e-7566-4f60-baa0-957f4d7522cc] Running
	I0318 11:54:51.651102   13008 system_pods.go:61] "kube-controller-manager-ha-747000-m03" [07072b27-c1da-40fb-9391-90b7f1513dc8] Running
	I0318 11:54:51.651102   13008 system_pods.go:61] "kube-proxy-lp986" [8567319d-191a-47b4-a5b6-f615907650d3] Running
	I0318 11:54:51.651102   13008 system_pods.go:61] "kube-proxy-njpzx" [7787e24a-9c16-408d-9c81-c8d15d5d66d1] Running
	I0318 11:54:51.651102   13008 system_pods.go:61] "kube-proxy-zzg5q" [c9403c4f-57d4-4a1c-b629-b8c3f53db9c9] Running
	I0318 11:54:51.651102   13008 system_pods.go:61] "kube-scheduler-ha-747000" [fbfffede-3da2-45f0-abbb-28a58072068b] Running
	I0318 11:54:51.651102   13008 system_pods.go:61] "kube-scheduler-ha-747000-m02" [cdc9ff95-abdf-4913-9e30-dbc8f2785f60] Running
	I0318 11:54:51.651102   13008 system_pods.go:61] "kube-scheduler-ha-747000-m03" [bececf64-6a59-4c26-b1d4-98f6c4fe1cdc] Running
	I0318 11:54:51.651102   13008 system_pods.go:61] "kube-vip-ha-747000" [3e440bfd-5dc2-4981-a26d-c0ae17c250d5] Running
	I0318 11:54:51.651102   13008 system_pods.go:61] "kube-vip-ha-747000-m02" [992c820a-4b58-4baa-b6e4-f0c19a65ad85] Running
	I0318 11:54:51.651102   13008 system_pods.go:61] "kube-vip-ha-747000-m03" [e8f91767-6cfb-4de1-a362-c29c4560f9d1] Running
	I0318 11:54:51.651102   13008 system_pods.go:61] "storage-provisioner" [de0ac48e-e3cb-430d-8dc6-6e52d350a38e] Running
	I0318 11:54:51.651102   13008 system_pods.go:74] duration metric: took 168.437ms to wait for pod list to return data ...
	I0318 11:54:51.651102   13008 default_sa.go:34] waiting for default service account to be created ...
	I0318 11:54:51.836017   13008 request.go:629] Waited for 184.6178ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.135.65:8443/api/v1/namespaces/default/serviceaccounts
	I0318 11:54:51.836017   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/namespaces/default/serviceaccounts
	I0318 11:54:51.836017   13008 round_trippers.go:469] Request Headers:
	I0318 11:54:51.836017   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:54:51.836017   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:54:51.836652   13008 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0318 11:54:51.841506   13008 default_sa.go:45] found service account: "default"
	I0318 11:54:51.841506   13008 default_sa.go:55] duration metric: took 190.4028ms for default service account to be created ...
	I0318 11:54:51.841506   13008 system_pods.go:116] waiting for k8s-apps to be running ...
	I0318 11:54:52.029994   13008 request.go:629] Waited for 188.4861ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.135.65:8443/api/v1/namespaces/kube-system/pods
	I0318 11:54:52.030136   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/namespaces/kube-system/pods
	I0318 11:54:52.030136   13008 round_trippers.go:469] Request Headers:
	I0318 11:54:52.030136   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:54:52.030136   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:54:52.038250   13008 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0318 11:54:52.052253   13008 system_pods.go:86] 24 kube-system pods found
	I0318 11:54:52.052253   13008 system_pods.go:89] "coredns-5dd5756b68-dhl7r" [26fc5ceb-a1e3-46f5-b148-40f5312b8628] Running
	I0318 11:54:52.052253   13008 system_pods.go:89] "coredns-5dd5756b68-gm84s" [227e2616-d98e-4ca1-aa66-7e2ed6096d33] Running
	I0318 11:54:52.052253   13008 system_pods.go:89] "etcd-ha-747000" [ac7df0ab-4112-40ce-9646-08b806025178] Running
	I0318 11:54:52.052253   13008 system_pods.go:89] "etcd-ha-747000-m02" [0fb29c85-0d84-4aa9-8f40-041b128f3b62] Running
	I0318 11:54:52.052253   13008 system_pods.go:89] "etcd-ha-747000-m03" [71309fb0-67e2-4098-a267-e1677a6b6a5b] Running
	I0318 11:54:52.052253   13008 system_pods.go:89] "kindnet-82v6x" [7f008b78-2eb1-434c-9867-ccd5216a7ed5] Running
	I0318 11:54:52.052253   13008 system_pods.go:89] "kindnet-czdhw" [b8b10f3a-4cc3-43d0-a192-438de10116b1] Running
	I0318 11:54:52.052781   13008 system_pods.go:89] "kindnet-zt7pd" [76ca473c-8611-410d-9f13-52d1bfcec7f6] Running
	I0318 11:54:52.052980   13008 system_pods.go:89] "kube-apiserver-ha-747000" [acfdbb0b-15ce-407b-a8c9-bab5477bb1ae] Running
	I0318 11:54:52.053790   13008 system_pods.go:89] "kube-apiserver-ha-747000-m02" [f3b62c80-17e7-435a-9ffd-a2b727a702cd] Running
	I0318 11:54:52.053790   13008 system_pods.go:89] "kube-apiserver-ha-747000-m03" [88bb5439-b862-4458-b098-b91a6ea2b487] Running
	I0318 11:54:52.053790   13008 system_pods.go:89] "kube-controller-manager-ha-747000" [7a40710c-8acc-4457-93ba-a31a4073be21] Running
	I0318 11:54:52.053790   13008 system_pods.go:89] "kube-controller-manager-ha-747000-m02" [e191ed3e-7566-4f60-baa0-957f4d7522cc] Running
	I0318 11:54:52.053790   13008 system_pods.go:89] "kube-controller-manager-ha-747000-m03" [07072b27-c1da-40fb-9391-90b7f1513dc8] Running
	I0318 11:54:52.053790   13008 system_pods.go:89] "kube-proxy-lp986" [8567319d-191a-47b4-a5b6-f615907650d3] Running
	I0318 11:54:52.053790   13008 system_pods.go:89] "kube-proxy-njpzx" [7787e24a-9c16-408d-9c81-c8d15d5d66d1] Running
	I0318 11:54:52.053936   13008 system_pods.go:89] "kube-proxy-zzg5q" [c9403c4f-57d4-4a1c-b629-b8c3f53db9c9] Running
	I0318 11:54:52.053936   13008 system_pods.go:89] "kube-scheduler-ha-747000" [fbfffede-3da2-45f0-abbb-28a58072068b] Running
	I0318 11:54:52.053936   13008 system_pods.go:89] "kube-scheduler-ha-747000-m02" [cdc9ff95-abdf-4913-9e30-dbc8f2785f60] Running
	I0318 11:54:52.053936   13008 system_pods.go:89] "kube-scheduler-ha-747000-m03" [bececf64-6a59-4c26-b1d4-98f6c4fe1cdc] Running
	I0318 11:54:52.053936   13008 system_pods.go:89] "kube-vip-ha-747000" [3e440bfd-5dc2-4981-a26d-c0ae17c250d5] Running
	I0318 11:54:52.053936   13008 system_pods.go:89] "kube-vip-ha-747000-m02" [992c820a-4b58-4baa-b6e4-f0c19a65ad85] Running
	I0318 11:54:52.053936   13008 system_pods.go:89] "kube-vip-ha-747000-m03" [e8f91767-6cfb-4de1-a362-c29c4560f9d1] Running
	I0318 11:54:52.053936   13008 system_pods.go:89] "storage-provisioner" [de0ac48e-e3cb-430d-8dc6-6e52d350a38e] Running
	I0318 11:54:52.053936   13008 system_pods.go:126] duration metric: took 212.4278ms to wait for k8s-apps to be running ...
	I0318 11:54:52.053936   13008 system_svc.go:44] waiting for kubelet service to be running ....
	I0318 11:54:52.062979   13008 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 11:54:52.090649   13008 system_svc.go:56] duration metric: took 36.7133ms WaitForService to wait for kubelet
	I0318 11:54:52.090649   13008 kubeadm.go:576] duration metric: took 19.9673718s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 11:54:52.090649   13008 node_conditions.go:102] verifying NodePressure condition ...
	I0318 11:54:52.236107   13008 request.go:629] Waited for 145.1601ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.135.65:8443/api/v1/nodes
	I0318 11:54:52.236107   13008 round_trippers.go:463] GET https://172.30.135.65:8443/api/v1/nodes
	I0318 11:54:52.236107   13008 round_trippers.go:469] Request Headers:
	I0318 11:54:52.236107   13008 round_trippers.go:473]     Accept: application/json, */*
	I0318 11:54:52.236107   13008 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 11:54:52.242653   13008 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0318 11:54:52.244307   13008 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0318 11:54:52.244385   13008 node_conditions.go:123] node cpu capacity is 2
	I0318 11:54:52.244385   13008 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0318 11:54:52.244385   13008 node_conditions.go:123] node cpu capacity is 2
	I0318 11:54:52.244385   13008 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0318 11:54:52.244385   13008 node_conditions.go:123] node cpu capacity is 2
	I0318 11:54:52.244385   13008 node_conditions.go:105] duration metric: took 153.7341ms to run NodePressure ...
	I0318 11:54:52.244385   13008 start.go:240] waiting for startup goroutines ...
	I0318 11:54:52.244492   13008 start.go:254] writing updated cluster config ...
	I0318 11:54:52.254598   13008 ssh_runner.go:195] Run: rm -f paused
	I0318 11:54:52.396312   13008 start.go:600] kubectl: 1.29.3, cluster: 1.28.4 (minor skew: 1)
	I0318 11:54:52.401836   13008 out.go:177] * Done! kubectl is now configured to use "ha-747000" cluster and "default" namespace by default
	
	
	==> Docker <==
	Mar 18 11:50:46 ha-747000 dockerd[1337]: time="2024-03-18T11:50:46.547127411Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 18 11:50:46 ha-747000 dockerd[1337]: time="2024-03-18T11:50:46.547285511Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 18 11:50:46 ha-747000 dockerd[1337]: time="2024-03-18T11:50:46.547709111Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 18 11:50:46 ha-747000 dockerd[1337]: time="2024-03-18T11:50:46.579932965Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 18 11:50:46 ha-747000 dockerd[1337]: time="2024-03-18T11:50:46.580076865Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 18 11:50:46 ha-747000 dockerd[1337]: time="2024-03-18T11:50:46.580244566Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 18 11:50:46 ha-747000 dockerd[1337]: time="2024-03-18T11:50:46.580455766Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 18 11:55:30 ha-747000 dockerd[1337]: time="2024-03-18T11:55:30.020047218Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 18 11:55:30 ha-747000 dockerd[1337]: time="2024-03-18T11:55:30.020121919Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 18 11:55:30 ha-747000 dockerd[1337]: time="2024-03-18T11:55:30.020139019Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 18 11:55:30 ha-747000 dockerd[1337]: time="2024-03-18T11:55:30.020247021Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 18 11:55:30 ha-747000 cri-dockerd[1225]: time="2024-03-18T11:55:30Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/280c7276a55904857afc7a24b65f72bde51b05280c8d6a2201ccac07f937ec74/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Mar 18 11:55:31 ha-747000 cri-dockerd[1225]: time="2024-03-18T11:55:31Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	Mar 18 11:55:31 ha-747000 dockerd[1337]: time="2024-03-18T11:55:31.510729238Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 18 11:55:31 ha-747000 dockerd[1337]: time="2024-03-18T11:55:31.511029839Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 18 11:55:31 ha-747000 dockerd[1337]: time="2024-03-18T11:55:31.511051640Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 18 11:55:31 ha-747000 dockerd[1337]: time="2024-03-18T11:55:31.511194340Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 18 11:56:32 ha-747000 dockerd[1331]: 2024/03/18 11:56:32 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Mar 18 11:56:32 ha-747000 dockerd[1331]: 2024/03/18 11:56:32 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Mar 18 11:56:33 ha-747000 dockerd[1331]: 2024/03/18 11:56:33 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Mar 18 11:56:33 ha-747000 dockerd[1331]: 2024/03/18 11:56:33 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Mar 18 11:56:33 ha-747000 dockerd[1331]: 2024/03/18 11:56:33 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Mar 18 11:56:33 ha-747000 dockerd[1331]: 2024/03/18 11:56:33 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Mar 18 11:56:33 ha-747000 dockerd[1331]: 2024/03/18 11:56:33 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Mar 18 11:56:33 ha-747000 dockerd[1331]: 2024/03/18 11:56:33 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	a6e52cd033bfb       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   19 minutes ago      Running             busybox                   0                   280c7276a5590       busybox-5b5d89c9d6-qvfgv
	887a467712d1e       22aaebb38f4a9                                                                                         24 minutes ago      Running             kube-vip                  1                   8ad9f7d4fa0ca       kube-vip-ha-747000
	e31c573c030a3       6e38f40d628db                                                                                         24 minutes ago      Running             storage-provisioner       1                   1d4fddfcca420       storage-provisioner
	18e55de0cc622       ead0a4a53df89                                                                                         27 minutes ago      Running             coredns                   0                   6407e6bfc72be       coredns-5dd5756b68-gm84s
	7589c531bb385       ead0a4a53df89                                                                                         27 minutes ago      Running             coredns                   0                   4bcfc9c3e4160       coredns-5dd5756b68-dhl7r
	18bee778eecc2       6e38f40d628db                                                                                         27 minutes ago      Exited              storage-provisioner       0                   1d4fddfcca420       storage-provisioner
	822e0ce0b2821       kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988              27 minutes ago      Running             kindnet-cni               0                   c7be2c9084768       kindnet-zt7pd
	c1abc9fd4e5d4       83f6cc407eed8                                                                                         28 minutes ago      Running             kube-proxy                0                   840eb62f8951a       kube-proxy-lp986
	7dcbde4d0d9af       ghcr.io/kube-vip/kube-vip@sha256:82698885b3b5f926cd940b7000549f3d43850cb6565a708162900c1475a83016     28 minutes ago      Exited              kube-vip                  0                   8ad9f7d4fa0ca       kube-vip-ha-747000
	ca099f2ea7c45       e3db313c6dbc0                                                                                         28 minutes ago      Running             kube-scheduler            0                   fe2e158c46033       kube-scheduler-ha-747000
	74283b1900542       73deb9a3f7025                                                                                         28 minutes ago      Running             etcd                      0                   012c4fffb8fe5       etcd-ha-747000
	4aadeddfd7048       d058aa5ab969c                                                                                         28 minutes ago      Running             kube-controller-manager   0                   7ef6371bab55e       kube-controller-manager-ha-747000
	baa1747a03bf1       7fe0e6f37db33                                                                                         28 minutes ago      Running             kube-apiserver            0                   1c0df59a4be65       kube-apiserver-ha-747000
	
	
	==> coredns [18e55de0cc62] <==
	[INFO] 10.244.2.2:53469 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd 60 0.001489907s
	[INFO] 10.244.2.2:39012 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 1.028446013s
	[INFO] 10.244.0.4:40182 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000214601s
	[INFO] 10.244.0.4:59798 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 140 0.0001281s
	[INFO] 10.244.0.4:33527 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd 60 0.0000537s
	[INFO] 10.244.1.2:35039 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.039480666s
	[INFO] 10.244.1.2:55177 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000396802s
	[INFO] 10.244.2.2:50563 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001105s
	[INFO] 10.244.2.2:33724 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000217601s
	[INFO] 10.244.2.2:56024 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0000881s
	[INFO] 10.244.2.2:52705 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000135401s
	[INFO] 10.244.0.4:43180 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000197501s
	[INFO] 10.244.0.4:36535 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.0001286s
	[INFO] 10.244.0.4:50426 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000139101s
	[INFO] 10.244.0.4:53622 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000070601s
	[INFO] 10.244.1.2:51913 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0000728s
	[INFO] 10.244.2.2:34323 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000149601s
	[INFO] 10.244.2.2:53501 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000125901s
	[INFO] 10.244.0.4:48286 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0001447s
	[INFO] 10.244.0.4:53461 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0000669s
	[INFO] 10.244.1.2:49508 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000203701s
	[INFO] 10.244.1.2:60838 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000324002s
	[INFO] 10.244.2.2:39711 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000256301s
	[INFO] 10.244.0.4:35515 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001131s
	[INFO] 10.244.0.4:32994 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000055401s
	
	
	==> coredns [7589c531bb38] <==
	[INFO] 10.244.1.2:45379 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000265501s
	[INFO] 10.244.1.2:49211 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0001102s
	[INFO] 10.244.1.2:57661 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0001046s
	[INFO] 10.244.2.2:57466 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.012422452s
	[INFO] 10.244.2.2:38205 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000252401s
	[INFO] 10.244.2.2:51817 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000052s
	[INFO] 10.244.2.2:56865 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000109101s
	[INFO] 10.244.0.4:42080 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00008s
	[INFO] 10.244.0.4:43666 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000077s
	[INFO] 10.244.0.4:39236 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0000482s
	[INFO] 10.244.0.4:49871 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0001124s
	[INFO] 10.244.1.2:42135 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000196901s
	[INFO] 10.244.1.2:56767 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000110901s
	[INFO] 10.244.1.2:57401 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000464802s
	[INFO] 10.244.2.2:40875 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000118201s
	[INFO] 10.244.2.2:38615 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000194901s
	[INFO] 10.244.0.4:58517 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001862s
	[INFO] 10.244.0.4:44049 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000187801s
	[INFO] 10.244.1.2:36544 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000198501s
	[INFO] 10.244.1.2:48948 - 5 "PTR IN 1.128.30.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000175501s
	[INFO] 10.244.2.2:57433 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000162501s
	[INFO] 10.244.2.2:34928 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000357502s
	[INFO] 10.244.2.2:34698 - 5 "PTR IN 1.128.30.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000187101s
	[INFO] 10.244.0.4:37314 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000353002s
	[INFO] 10.244.0.4:37416 - 5 "PTR IN 1.128.30.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.0000521s
	
	
	==> describe nodes <==
	Name:               ha-747000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-747000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=16bdcbec856cf730004e5bed78d1b7625f13388a
	                    minikube.k8s.io/name=ha-747000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_18T11_46_59_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 18 Mar 2024 11:46:56 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-747000
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 18 Mar 2024 12:15:15 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 18 Mar 2024 12:11:18 +0000   Mon, 18 Mar 2024 11:46:56 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 18 Mar 2024 12:11:18 +0000   Mon, 18 Mar 2024 11:46:56 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 18 Mar 2024 12:11:18 +0000   Mon, 18 Mar 2024 11:46:56 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 18 Mar 2024 12:11:18 +0000   Mon, 18 Mar 2024 11:47:22 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.30.135.65
	  Hostname:    ha-747000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164268Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164268Ki
	  pods:               110
	System Info:
	  Machine ID:                 ce08ddef158b4d849587d82dc7581986
	  System UUID:                54549e36-468b-da42-bfd5-c574fa96660d
	  Boot ID:                    5a3e26a4-5428-4a7d-b86d-7b15eb0c4de5
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://25.0.4
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-qvfgv             0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 coredns-5dd5756b68-dhl7r             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     28m
	  kube-system                 coredns-5dd5756b68-gm84s             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     28m
	  kube-system                 etcd-ha-747000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         28m
	  kube-system                 kindnet-zt7pd                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      28m
	  kube-system                 kube-apiserver-ha-747000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-controller-manager-ha-747000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-proxy-lp986                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-scheduler-ha-747000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-vip-ha-747000                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 28m                kube-proxy       
	  Normal  Starting                 28m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  28m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     28m (x7 over 28m)  kubelet          Node ha-747000 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  28m (x8 over 28m)  kubelet          Node ha-747000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    28m (x8 over 28m)  kubelet          Node ha-747000 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 28m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  28m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  28m                kubelet          Node ha-747000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    28m                kubelet          Node ha-747000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     28m                kubelet          Node ha-747000 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           28m                node-controller  Node ha-747000 event: Registered Node ha-747000 in Controller
	  Normal  NodeReady                27m                kubelet          Node ha-747000 status is now: NodeReady
	  Normal  RegisteredNode           24m                node-controller  Node ha-747000 event: Registered Node ha-747000 in Controller
	  Normal  RegisteredNode           20m                node-controller  Node ha-747000 event: Registered Node ha-747000 in Controller
	
	
	Name:               ha-747000-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-747000-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=16bdcbec856cf730004e5bed78d1b7625f13388a
	                    minikube.k8s.io/name=ha-747000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_03_18T11_50_52_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 18 Mar 2024 11:50:34 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-747000-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 18 Mar 2024 12:11:50 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 18 Mar 2024 12:10:57 +0000   Mon, 18 Mar 2024 12:12:31 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 18 Mar 2024 12:10:57 +0000   Mon, 18 Mar 2024 12:12:31 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 18 Mar 2024 12:10:57 +0000   Mon, 18 Mar 2024 12:12:31 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 18 Mar 2024 12:10:57 +0000   Mon, 18 Mar 2024 12:12:31 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  172.30.142.66
	  Hostname:    ha-747000-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164268Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164268Ki
	  pods:               110
	System Info:
	  Machine ID:                 cc53cac9e9e543ff962af5ea480cd266
	  System UUID:                141d4ba1-8fb9-834e-80ae-b06d45ab9958
	  Boot ID:                    9fd7c119-1d1f-4e05-950a-3108b91a27d5
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://25.0.4
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-bfx2x                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 etcd-ha-747000-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         24m
	  kube-system                 kindnet-czdhw                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      24m
	  kube-system                 kube-apiserver-ha-747000-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         24m
	  kube-system                 kube-controller-manager-ha-747000-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         24m
	  kube-system                 kube-proxy-zzg5q                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
	  kube-system                 kube-scheduler-ha-747000-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         24m
	  kube-system                 kube-vip-ha-747000-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age    From             Message
	  ----    ------          ----   ----             -------
	  Normal  Starting        24m    kube-proxy       
	  Normal  RegisteredNode  24m    node-controller  Node ha-747000-m02 event: Registered Node ha-747000-m02 in Controller
	  Normal  RegisteredNode  24m    node-controller  Node ha-747000-m02 event: Registered Node ha-747000-m02 in Controller
	  Normal  RegisteredNode  20m    node-controller  Node ha-747000-m02 event: Registered Node ha-747000-m02 in Controller
	  Normal  NodeNotReady    2m45s  node-controller  Node ha-747000-m02 status is now: NodeNotReady
	
	
	Name:               ha-747000-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-747000-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=16bdcbec856cf730004e5bed78d1b7625f13388a
	                    minikube.k8s.io/name=ha-747000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_03_18T11_54_31_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 18 Mar 2024 11:54:27 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-747000-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 18 Mar 2024 12:15:14 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 18 Mar 2024 12:11:18 +0000   Mon, 18 Mar 2024 11:54:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 18 Mar 2024 12:11:18 +0000   Mon, 18 Mar 2024 11:54:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 18 Mar 2024 12:11:18 +0000   Mon, 18 Mar 2024 11:54:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 18 Mar 2024 12:11:18 +0000   Mon, 18 Mar 2024 11:54:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.30.129.111
	  Hostname:    ha-747000-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164268Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164268Ki
	  pods:               110
	System Info:
	  Machine ID:                 ee0a2b1c16414108b8b6a800e4aecfaf
	  System UUID:                701517aa-f6cd-ee4b-9a77-af927eb87fa0
	  Boot ID:                    05d6600a-6be2-4a25-9a1f-b167d44b23a4
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://25.0.4
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-ln6sd                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 etcd-ha-747000-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         20m
	  kube-system                 kindnet-82v6x                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      20m
	  kube-system                 kube-apiserver-ha-747000-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 kube-controller-manager-ha-747000-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 kube-proxy-njpzx                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 kube-scheduler-ha-747000-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 kube-vip-ha-747000-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  Starting        20m   kube-proxy       
	  Normal  RegisteredNode  20m   node-controller  Node ha-747000-m03 event: Registered Node ha-747000-m03 in Controller
	  Normal  RegisteredNode  20m   node-controller  Node ha-747000-m03 event: Registered Node ha-747000-m03 in Controller
	  Normal  RegisteredNode  20m   node-controller  Node ha-747000-m03 event: Registered Node ha-747000-m03 in Controller
	
	
	Name:               ha-747000-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-747000-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=16bdcbec856cf730004e5bed78d1b7625f13388a
	                    minikube.k8s.io/name=ha-747000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_03_18T11_59_36_0700
	                    minikube.k8s.io/version=v1.32.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 18 Mar 2024 11:59:36 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-747000-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 18 Mar 2024 12:15:16 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 18 Mar 2024 12:10:20 +0000   Mon, 18 Mar 2024 11:59:36 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 18 Mar 2024 12:10:20 +0000   Mon, 18 Mar 2024 11:59:36 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 18 Mar 2024 12:10:20 +0000   Mon, 18 Mar 2024 11:59:36 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 18 Mar 2024 12:10:20 +0000   Mon, 18 Mar 2024 11:59:54 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.30.128.97
	  Hostname:    ha-747000-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164268Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164268Ki
	  pods:               110
	System Info:
	  Machine ID:                 4ef4017fce6d4704ad0cdf91f525732c
	  System UUID:                8e4ab2b0-6679-7a4f-a7ad-b3ac1af22faf
	  Boot ID:                    9bac9ba4-b546-4220-8b51-019fefd226d2
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://25.0.4
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-n69v8       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      15m
	  kube-system                 kube-proxy-45q7l    0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 15m                kube-proxy       
	  Normal  RegisteredNode           15m                node-controller  Node ha-747000-m04 event: Registered Node ha-747000-m04 in Controller
	  Normal  NodeHasSufficientMemory  15m (x5 over 15m)  kubelet          Node ha-747000-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m (x5 over 15m)  kubelet          Node ha-747000-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m (x5 over 15m)  kubelet          Node ha-747000-m04 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           15m                node-controller  Node ha-747000-m04 event: Registered Node ha-747000-m04 in Controller
	  Normal  RegisteredNode           15m                node-controller  Node ha-747000-m04 event: Registered Node ha-747000-m04 in Controller
	  Normal  NodeReady                15m                kubelet          Node ha-747000-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +45.057208] systemd-fstab-generator[643]: Ignoring "noauto" option for root device
	[  +0.141858] systemd-fstab-generator[655]: Ignoring "noauto" option for root device
	[Mar18 11:46] systemd-fstab-generator[943]: Ignoring "noauto" option for root device
	[  +0.083583] kauditd_printk_skb: 59 callbacks suppressed
	[  +0.499633] systemd-fstab-generator[983]: Ignoring "noauto" option for root device
	[  +0.179648] systemd-fstab-generator[995]: Ignoring "noauto" option for root device
	[  +0.185540] systemd-fstab-generator[1009]: Ignoring "noauto" option for root device
	[  +2.694503] systemd-fstab-generator[1178]: Ignoring "noauto" option for root device
	[  +0.171024] systemd-fstab-generator[1190]: Ignoring "noauto" option for root device
	[  +0.163794] systemd-fstab-generator[1202]: Ignoring "noauto" option for root device
	[  +0.242185] systemd-fstab-generator[1217]: Ignoring "noauto" option for root device
	[ +13.323504] systemd-fstab-generator[1323]: Ignoring "noauto" option for root device
	[  +0.099373] kauditd_printk_skb: 205 callbacks suppressed
	[  +3.534558] systemd-fstab-generator[1523]: Ignoring "noauto" option for root device
	[  +5.653589] systemd-fstab-generator[1783]: Ignoring "noauto" option for root device
	[  +0.088517] kauditd_printk_skb: 73 callbacks suppressed
	[  +8.878414] systemd-fstab-generator[2555]: Ignoring "noauto" option for root device
	[  +0.112813] kauditd_printk_skb: 72 callbacks suppressed
	[Mar18 11:47] kauditd_printk_skb: 12 callbacks suppressed
	[  +6.152543] kauditd_printk_skb: 29 callbacks suppressed
	[  +5.070809] kauditd_printk_skb: 14 callbacks suppressed
	[Mar18 11:50] hrtimer: interrupt took 3154204 ns
	[  +8.797389] kauditd_printk_skb: 9 callbacks suppressed
	[  +8.263528] kauditd_printk_skb: 2 callbacks suppressed
	
	
	==> etcd [74283b190054] <==
	{"level":"warn","ts":"2024-03-18T12:15:16.586449Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7580ae0aa539a180","from":"7580ae0aa539a180","remote-peer-id":"4a37fc735a7b221a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-18T12:15:16.68597Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7580ae0aa539a180","from":"7580ae0aa539a180","remote-peer-id":"4a37fc735a7b221a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-18T12:15:16.712944Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7580ae0aa539a180","from":"7580ae0aa539a180","remote-peer-id":"4a37fc735a7b221a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-18T12:15:16.7421Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7580ae0aa539a180","from":"7580ae0aa539a180","remote-peer-id":"4a37fc735a7b221a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-18T12:15:16.749174Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7580ae0aa539a180","from":"7580ae0aa539a180","remote-peer-id":"4a37fc735a7b221a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-18T12:15:16.760136Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7580ae0aa539a180","from":"7580ae0aa539a180","remote-peer-id":"4a37fc735a7b221a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-18T12:15:16.764358Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7580ae0aa539a180","from":"7580ae0aa539a180","remote-peer-id":"4a37fc735a7b221a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-18T12:15:16.770064Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7580ae0aa539a180","from":"7580ae0aa539a180","remote-peer-id":"4a37fc735a7b221a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-18T12:15:16.785547Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7580ae0aa539a180","from":"7580ae0aa539a180","remote-peer-id":"4a37fc735a7b221a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-18T12:15:16.785879Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7580ae0aa539a180","from":"7580ae0aa539a180","remote-peer-id":"4a37fc735a7b221a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-18T12:15:16.796489Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7580ae0aa539a180","from":"7580ae0aa539a180","remote-peer-id":"4a37fc735a7b221a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-18T12:15:16.810019Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7580ae0aa539a180","from":"7580ae0aa539a180","remote-peer-id":"4a37fc735a7b221a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-18T12:15:16.8151Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7580ae0aa539a180","from":"7580ae0aa539a180","remote-peer-id":"4a37fc735a7b221a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-18T12:15:16.819913Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7580ae0aa539a180","from":"7580ae0aa539a180","remote-peer-id":"4a37fc735a7b221a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-18T12:15:16.846439Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7580ae0aa539a180","from":"7580ae0aa539a180","remote-peer-id":"4a37fc735a7b221a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-18T12:15:16.855624Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7580ae0aa539a180","from":"7580ae0aa539a180","remote-peer-id":"4a37fc735a7b221a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-18T12:15:16.865469Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7580ae0aa539a180","from":"7580ae0aa539a180","remote-peer-id":"4a37fc735a7b221a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-18T12:15:16.874315Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7580ae0aa539a180","from":"7580ae0aa539a180","remote-peer-id":"4a37fc735a7b221a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-18T12:15:16.880071Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7580ae0aa539a180","from":"7580ae0aa539a180","remote-peer-id":"4a37fc735a7b221a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-18T12:15:16.885931Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7580ae0aa539a180","from":"7580ae0aa539a180","remote-peer-id":"4a37fc735a7b221a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-18T12:15:16.892494Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7580ae0aa539a180","from":"7580ae0aa539a180","remote-peer-id":"4a37fc735a7b221a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-18T12:15:16.904492Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7580ae0aa539a180","from":"7580ae0aa539a180","remote-peer-id":"4a37fc735a7b221a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-18T12:15:16.917502Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7580ae0aa539a180","from":"7580ae0aa539a180","remote-peer-id":"4a37fc735a7b221a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-18T12:15:16.923255Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7580ae0aa539a180","from":"7580ae0aa539a180","remote-peer-id":"4a37fc735a7b221a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-18T12:15:16.985774Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7580ae0aa539a180","from":"7580ae0aa539a180","remote-peer-id":"4a37fc735a7b221a","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 12:15:17 up 30 min,  0 users,  load average: 0.44, 0.34, 0.37
	Linux ha-747000 5.10.207 #1 SMP Fri Mar 15 21:13:47 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [822e0ce0b282] <==
	I0318 12:14:42.197266       1 main.go:250] Node ha-747000-m04 has CIDR [10.244.3.0/24] 
	I0318 12:14:52.215446       1 main.go:223] Handling node with IPs: map[172.30.135.65:{}]
	I0318 12:14:52.215498       1 main.go:227] handling current node
	I0318 12:14:52.215511       1 main.go:223] Handling node with IPs: map[172.30.142.66:{}]
	I0318 12:14:52.215519       1 main.go:250] Node ha-747000-m02 has CIDR [10.244.1.0/24] 
	I0318 12:14:52.215923       1 main.go:223] Handling node with IPs: map[172.30.129.111:{}]
	I0318 12:14:52.216035       1 main.go:250] Node ha-747000-m03 has CIDR [10.244.2.0/24] 
	I0318 12:14:52.216377       1 main.go:223] Handling node with IPs: map[172.30.128.97:{}]
	I0318 12:14:52.216419       1 main.go:250] Node ha-747000-m04 has CIDR [10.244.3.0/24] 
	I0318 12:15:02.227042       1 main.go:223] Handling node with IPs: map[172.30.135.65:{}]
	I0318 12:15:02.227081       1 main.go:227] handling current node
	I0318 12:15:02.227094       1 main.go:223] Handling node with IPs: map[172.30.142.66:{}]
	I0318 12:15:02.227104       1 main.go:250] Node ha-747000-m02 has CIDR [10.244.1.0/24] 
	I0318 12:15:02.227236       1 main.go:223] Handling node with IPs: map[172.30.129.111:{}]
	I0318 12:15:02.227332       1 main.go:250] Node ha-747000-m03 has CIDR [10.244.2.0/24] 
	I0318 12:15:02.227537       1 main.go:223] Handling node with IPs: map[172.30.128.97:{}]
	I0318 12:15:02.227564       1 main.go:250] Node ha-747000-m04 has CIDR [10.244.3.0/24] 
	I0318 12:15:12.246247       1 main.go:223] Handling node with IPs: map[172.30.135.65:{}]
	I0318 12:15:12.246291       1 main.go:227] handling current node
	I0318 12:15:12.246308       1 main.go:223] Handling node with IPs: map[172.30.142.66:{}]
	I0318 12:15:12.246315       1 main.go:250] Node ha-747000-m02 has CIDR [10.244.1.0/24] 
	I0318 12:15:12.246691       1 main.go:223] Handling node with IPs: map[172.30.129.111:{}]
	I0318 12:15:12.246707       1 main.go:250] Node ha-747000-m03 has CIDR [10.244.2.0/24] 
	I0318 12:15:12.246925       1 main.go:223] Handling node with IPs: map[172.30.128.97:{}]
	I0318 12:15:12.247071       1 main.go:250] Node ha-747000-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [baa1747a03bf] <==
	I0318 11:59:39.878865       1 trace.go:236] Trace[118550798]: "GuaranteedUpdate etcd3" audit-id:,key:/masterleases/172.30.135.65,type:*v1.Endpoints,resource:apiServerIPInfo (18-Mar-2024 11:59:39.274) (total time: 604ms):
	Trace[118550798]: ---"Transaction prepared" 86ms (11:59:39.362)
	Trace[118550798]: ---"Txn call completed" 516ms (11:59:39.878)
	Trace[118550798]: [604.712464ms] [604.712464ms] END
	I0318 12:12:19.861152       1 trace.go:236] Trace[583156444]: "GuaranteedUpdate etcd3" audit-id:,key:/masterleases/172.30.135.65,type:*v1.Endpoints,resource:apiServerIPInfo (18-Mar-2024 12:12:19.321) (total time: 539ms):
	Trace[583156444]: ---"initial value restored" 93ms (12:12:19.415)
	Trace[583156444]: ---"Txn call completed" 433ms (12:12:19.861)
	Trace[583156444]: [539.523932ms] [539.523932ms] END
	I0318 12:12:20.475009       1 trace.go:236] Trace[576243501]: "Update" accept:application/json, */*,audit-id:30af02ff-0cce-4886-a54c-ac45c13b4f3a,client:172.30.135.65,protocol:HTTP/2.0,resource:endpoints,scope:resource,url:/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath,user-agent:storage-provisioner/v0.0.0 (linux/amd64) kubernetes/$Format,verb:PUT (18-Mar-2024 12:12:19.864) (total time: 610ms):
	Trace[576243501]: ["GuaranteedUpdate etcd3" audit-id:30af02ff-0cce-4886-a54c-ac45c13b4f3a,key:/services/endpoints/kube-system/k8s.io-minikube-hostpath,type:*core.Endpoints,resource:endpoints 609ms (12:12:19.865)
	Trace[576243501]:  ---"Txn call completed" 608ms (12:12:20.474)]
	Trace[576243501]: [610.223668ms] [610.223668ms] END
	I0318 12:12:20.475192       1 trace.go:236] Trace[1087605812]: "List(recursive=true) etcd3" audit-id:,key:/masterleases/,resourceVersion:0,resourceVersionMatch:NotOlderThan,limit:0,continue: (18-Mar-2024 12:12:19.866) (total time: 609ms):
	Trace[1087605812]: [609.099162ms] [609.099162ms] END
	I0318 12:12:20.828135       1 trace.go:236] Trace[22965728]: "Update" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:7ce59ded-bd37-45b9-9b17-149395603994,client:172.30.129.111,protocol:HTTP/2.0,resource:leases,scope:resource,url:/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-747000-m03,user-agent:kubelet/v1.28.4 (linux/amd64) kubernetes/bae2c62,verb:PUT (18-Mar-2024 12:12:20.159) (total time: 668ms):
	Trace[22965728]: ["GuaranteedUpdate etcd3" audit-id:7ce59ded-bd37-45b9-9b17-149395603994,key:/leases/kube-node-lease/ha-747000-m03,type:*coordination.Lease,resource:leases.coordination.k8s.io 668ms (12:12:20.159)
	Trace[22965728]:  ---"Txn call completed" 667ms (12:12:20.827)]
	Trace[22965728]: [668.83973ms] [668.83973ms] END
	I0318 12:12:20.831989       1 trace.go:236] Trace[509799681]: "Get" accept:application/json, */*,audit-id:916f0a14-7634-430b-bb7c-807484f0ff8e,client:127.0.0.1,protocol:HTTP/2.0,resource:leases,scope:resource,url:/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/plndr-cp-lock,user-agent:kube-vip/v0.0.0 (linux/amd64) kubernetes/$Format,verb:GET (18-Mar-2024 12:12:20.170) (total time: 660ms):
	Trace[509799681]: ---"About to write a response" 660ms (12:12:20.831)
	Trace[509799681]: [660.967983ms] [660.967983ms] END
	I0318 12:12:25.629511       1 trace.go:236] Trace[1964165248]: "Update" accept:application/json, */*,audit-id:df7e0d57-8383-4a36-82a5-793e0bfbd2c6,client:172.30.135.65,protocol:HTTP/2.0,resource:endpoints,scope:resource,url:/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath,user-agent:storage-provisioner/v0.0.0 (linux/amd64) kubernetes/$Format,verb:PUT (18-Mar-2024 12:12:25.122) (total time: 507ms):
	Trace[1964165248]: ["GuaranteedUpdate etcd3" audit-id:df7e0d57-8383-4a36-82a5-793e0bfbd2c6,key:/services/endpoints/kube-system/k8s.io-minikube-hostpath,type:*core.Endpoints,resource:endpoints 507ms (12:12:25.122)
	Trace[1964165248]:  ---"Txn call completed" 505ms (12:12:25.629)]
	Trace[1964165248]: [507.460736ms] [507.460736ms] END
	
	
	==> kube-controller-manager [4aadeddfd704] <==
	I0318 11:55:32.355805       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="21.327989ms"
	I0318 11:55:32.358391       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="2.45611ms"
	I0318 11:59:36.154225       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-747000-m04\" does not exist"
	I0318 11:59:36.244895       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-45q7l"
	I0318 11:59:36.245102       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-n69v8"
	I0318 11:59:36.270008       1 range_allocator.go:380] "Set node PodCIDR" node="ha-747000-m04" podCIDRs=["10.244.3.0/24"]
	I0318 11:59:36.392521       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: kindnet-r2rqf"
	I0318 11:59:36.464208       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: kube-proxy-4w5j2"
	I0318 11:59:36.540052       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: kube-proxy-gqpdw"
	I0318 11:59:36.584000       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: kindnet-qkb9l"
	I0318 11:59:41.149359       1 event.go:307] "Event occurred" object="ha-747000-m04" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node ha-747000-m04 event: Registered Node ha-747000-m04 in Controller"
	I0318 11:59:41.170393       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="ha-747000-m04"
	I0318 11:59:54.114377       1 topologycache.go:237] "Can't get CPU or zone information for node" node="ha-747000-m04"
	I0318 12:12:31.353547       1 topologycache.go:237] "Can't get CPU or zone information for node" node="ha-747000-m04"
	I0318 12:12:31.357443       1 event.go:307] "Event occurred" object="ha-747000-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node ha-747000-m02 status is now: NodeNotReady"
	I0318 12:12:31.380150       1 event.go:307] "Event occurred" object="kube-system/kube-vip-ha-747000-m02" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0318 12:12:31.399602       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6-bfx2x" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0318 12:12:31.426342       1 event.go:307] "Event occurred" object="kube-system/kindnet-czdhw" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0318 12:12:31.462275       1 event.go:307] "Event occurred" object="kube-system/kube-proxy-zzg5q" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0318 12:12:31.488158       1 event.go:307] "Event occurred" object="kube-system/etcd-ha-747000-m02" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0318 12:12:31.493519       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="85.915331ms"
	I0318 12:12:31.494477       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="120.001µs"
	I0318 12:12:31.522812       1 event.go:307] "Event occurred" object="kube-system/kube-apiserver-ha-747000-m02" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0318 12:12:31.557474       1 event.go:307] "Event occurred" object="kube-system/kube-scheduler-ha-747000-m02" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0318 12:12:31.580450       1 event.go:307] "Event occurred" object="kube-system/kube-controller-manager-ha-747000-m02" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	
	
	==> kube-proxy [c1abc9fd4e5d] <==
	I0318 11:47:12.874229       1 server_others.go:69] "Using iptables proxy"
	I0318 11:47:12.891619       1 node.go:141] Successfully retrieved node IP: 172.30.135.65
	I0318 11:47:12.982620       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0318 11:47:12.982724       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0318 11:47:12.987269       1 server_others.go:152] "Using iptables Proxier"
	I0318 11:47:12.987438       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0318 11:47:12.988545       1 server.go:846] "Version info" version="v1.28.4"
	I0318 11:47:12.988631       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 11:47:12.991538       1 config.go:188] "Starting service config controller"
	I0318 11:47:12.991735       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0318 11:47:12.991842       1 config.go:97] "Starting endpoint slice config controller"
	I0318 11:47:12.991919       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0318 11:47:12.993004       1 config.go:315] "Starting node config controller"
	I0318 11:47:12.993176       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0318 11:47:13.092953       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0318 11:47:13.092961       1 shared_informer.go:318] Caches are synced for service config
	I0318 11:47:13.093649       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [ca099f2ea7c4] <==
	W0318 11:46:55.980584       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0318 11:46:55.980611       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0318 11:46:56.002422       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0318 11:46:56.002552       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0318 11:46:56.019052       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0318 11:46:56.019092       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0318 11:46:56.101868       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0318 11:46:56.101940       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0318 11:46:56.123495       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0318 11:46:56.123634       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0318 11:46:56.328059       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0318 11:46:56.328212       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0318 11:46:56.336959       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0318 11:46:56.337015       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0318 11:46:56.370944       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0318 11:46:56.370987       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0318 11:46:56.378864       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0318 11:46:56.379183       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0318 11:46:56.393905       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0318 11:46:56.394084       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0318 11:46:58.092942       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0318 11:55:28.041529       1 framework.go:1206] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-5b5d89c9d6-bfx2x\": pod busybox-5b5d89c9d6-bfx2x is already assigned to node \"ha-747000-m02\"" plugin="DefaultBinder" pod="default/busybox-5b5d89c9d6-bfx2x" node="ha-747000-m02"
	E0318 11:55:28.043346       1 schedule_one.go:319] "scheduler cache ForgetPod failed" err="pod f64d5cfe-1d7c-41d7-8dd8-779eee53eaf2(default/busybox-5b5d89c9d6-bfx2x) wasn't assumed so cannot be forgotten"
	E0318 11:55:28.045884       1 schedule_one.go:989] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-5b5d89c9d6-bfx2x\": pod busybox-5b5d89c9d6-bfx2x is already assigned to node \"ha-747000-m02\"" pod="default/busybox-5b5d89c9d6-bfx2x"
	I0318 11:55:28.047121       1 schedule_one.go:1002] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-5b5d89c9d6-bfx2x" node="ha-747000-m02"
	
	
	==> kubelet <==
	Mar 18 12:10:58 ha-747000 kubelet[2576]: E0318 12:10:58.884901    2576 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 18 12:10:58 ha-747000 kubelet[2576]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 18 12:10:58 ha-747000 kubelet[2576]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 18 12:10:58 ha-747000 kubelet[2576]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 18 12:10:58 ha-747000 kubelet[2576]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 18 12:11:58 ha-747000 kubelet[2576]: E0318 12:11:58.893369    2576 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 18 12:11:58 ha-747000 kubelet[2576]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 18 12:11:58 ha-747000 kubelet[2576]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 18 12:11:58 ha-747000 kubelet[2576]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 18 12:11:58 ha-747000 kubelet[2576]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 18 12:12:58 ha-747000 kubelet[2576]: E0318 12:12:58.882288    2576 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 18 12:12:58 ha-747000 kubelet[2576]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 18 12:12:58 ha-747000 kubelet[2576]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 18 12:12:58 ha-747000 kubelet[2576]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 18 12:12:58 ha-747000 kubelet[2576]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 18 12:13:58 ha-747000 kubelet[2576]: E0318 12:13:58.881132    2576 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 18 12:13:58 ha-747000 kubelet[2576]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 18 12:13:58 ha-747000 kubelet[2576]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 18 12:13:58 ha-747000 kubelet[2576]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 18 12:13:58 ha-747000 kubelet[2576]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 18 12:14:58 ha-747000 kubelet[2576]: E0318 12:14:58.888475    2576 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 18 12:14:58 ha-747000 kubelet[2576]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 18 12:14:58 ha-747000 kubelet[2576]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 18 12:14:58 ha-747000 kubelet[2576]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 18 12:14:58 ha-747000 kubelet[2576]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0318 12:15:08.978255    4996 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-747000 -n ha-747000
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-747000 -n ha-747000: (12.0052441s)
helpers_test.go:261: (dbg) Run:  kubectl --context ha-747000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (139.37s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (55.71s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-894400 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-894400 -- exec busybox-5b5d89c9d6-8btgf -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-894400 -- exec busybox-5b5d89c9d6-8btgf -- sh -c "ping -c 1 172.30.128.1"
multinode_test.go:583: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p multinode-894400 -- exec busybox-5b5d89c9d6-8btgf -- sh -c "ping -c 1 172.30.128.1": exit status 1 (10.4720757s)

                                                
                                                
-- stdout --
	PING 172.30.128.1 (172.30.128.1): 56 data bytes
	
	--- 172.30.128.1 ping statistics ---
	1 packets transmitted, 0 packets received, 100% packet loss

                                                
                                                
-- /stdout --
** stderr ** 
	W0318 12:51:27.395894   11696 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:584: Failed to ping host (172.30.128.1) from pod (busybox-5b5d89c9d6-8btgf): exit status 1
multinode_test.go:572: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-894400 -- exec busybox-5b5d89c9d6-c2997 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-894400 -- exec busybox-5b5d89c9d6-c2997 -- sh -c "ping -c 1 172.30.128.1"
multinode_test.go:583: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p multinode-894400 -- exec busybox-5b5d89c9d6-c2997 -- sh -c "ping -c 1 172.30.128.1": exit status 1 (10.4748666s)

                                                
                                                
-- stdout --
	PING 172.30.128.1 (172.30.128.1): 56 data bytes
	
	--- 172.30.128.1 ping statistics ---
	1 packets transmitted, 0 packets received, 100% packet loss

                                                
                                                
-- /stdout --
** stderr ** 
	W0318 12:51:38.363948   13784 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:584: Failed to ping host (172.30.128.1) from pod (busybox-5b5d89c9d6-c2997): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-894400 -n multinode-894400
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-894400 -n multinode-894400: (11.7047195s)
helpers_test.go:244: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-894400 logs -n 25
E0318 12:52:05.314748   13424 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-611000\client.crt: The system cannot find the path specified.
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-894400 logs -n 25: (8.2679378s)
helpers_test.go:252: TestMultiNode/serial/PingHostFrom2Pods logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| Command |                       Args                        |       Profile        |       User        | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| ssh     | mount-start-2-611000 ssh -- ls                    | mount-start-2-611000 | minikube3\jenkins | v1.32.0 | 18 Mar 24 12:40 UTC | 18 Mar 24 12:41 UTC |
	|         | /minikube-host                                    |                      |                   |         |                     |                     |
	| delete  | -p mount-start-1-611000                           | mount-start-1-611000 | minikube3\jenkins | v1.32.0 | 18 Mar 24 12:41 UTC | 18 Mar 24 12:41 UTC |
	|         | --alsologtostderr -v=5                            |                      |                   |         |                     |                     |
	| ssh     | mount-start-2-611000 ssh -- ls                    | mount-start-2-611000 | minikube3\jenkins | v1.32.0 | 18 Mar 24 12:41 UTC | 18 Mar 24 12:41 UTC |
	|         | /minikube-host                                    |                      |                   |         |                     |                     |
	| stop    | -p mount-start-2-611000                           | mount-start-2-611000 | minikube3\jenkins | v1.32.0 | 18 Mar 24 12:41 UTC | 18 Mar 24 12:42 UTC |
	| start   | -p mount-start-2-611000                           | mount-start-2-611000 | minikube3\jenkins | v1.32.0 | 18 Mar 24 12:42 UTC | 18 Mar 24 12:43 UTC |
	| mount   | C:\Users\jenkins.minikube3:/minikube-host         | mount-start-2-611000 | minikube3\jenkins | v1.32.0 | 18 Mar 24 12:43 UTC |                     |
	|         | --profile mount-start-2-611000 --v 0              |                      |                   |         |                     |                     |
	|         | --9p-version 9p2000.L --gid 0 --ip                |                      |                   |         |                     |                     |
	|         | --msize 6543 --port 46465 --type 9p --uid         |                      |                   |         |                     |                     |
	|         |                                                 0 |                      |                   |         |                     |                     |
	| ssh     | mount-start-2-611000 ssh -- ls                    | mount-start-2-611000 | minikube3\jenkins | v1.32.0 | 18 Mar 24 12:43 UTC | 18 Mar 24 12:44 UTC |
	|         | /minikube-host                                    |                      |                   |         |                     |                     |
	| delete  | -p mount-start-2-611000                           | mount-start-2-611000 | minikube3\jenkins | v1.32.0 | 18 Mar 24 12:44 UTC | 18 Mar 24 12:44 UTC |
	| delete  | -p mount-start-1-611000                           | mount-start-1-611000 | minikube3\jenkins | v1.32.0 | 18 Mar 24 12:44 UTC | 18 Mar 24 12:44 UTC |
	| start   | -p multinode-894400                               | multinode-894400     | minikube3\jenkins | v1.32.0 | 18 Mar 24 12:44 UTC | 18 Mar 24 12:50 UTC |
	|         | --wait=true --memory=2200                         |                      |                   |         |                     |                     |
	|         | --nodes=2 -v=8                                    |                      |                   |         |                     |                     |
	|         | --alsologtostderr                                 |                      |                   |         |                     |                     |
	|         | --driver=hyperv                                   |                      |                   |         |                     |                     |
	| kubectl | -p multinode-894400 -- apply -f                   | multinode-894400     | minikube3\jenkins | v1.32.0 | 18 Mar 24 12:51 UTC | 18 Mar 24 12:51 UTC |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                      |                   |         |                     |                     |
	| kubectl | -p multinode-894400 -- rollout                    | multinode-894400     | minikube3\jenkins | v1.32.0 | 18 Mar 24 12:51 UTC | 18 Mar 24 12:51 UTC |
	|         | status deployment/busybox                         |                      |                   |         |                     |                     |
	| kubectl | -p multinode-894400 -- get pods -o                | multinode-894400     | minikube3\jenkins | v1.32.0 | 18 Mar 24 12:51 UTC | 18 Mar 24 12:51 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |                   |         |                     |                     |
	| kubectl | -p multinode-894400 -- get pods -o                | multinode-894400     | minikube3\jenkins | v1.32.0 | 18 Mar 24 12:51 UTC | 18 Mar 24 12:51 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |                   |         |                     |                     |
	| kubectl | -p multinode-894400 -- exec                       | multinode-894400     | minikube3\jenkins | v1.32.0 | 18 Mar 24 12:51 UTC | 18 Mar 24 12:51 UTC |
	|         | busybox-5b5d89c9d6-8btgf --                       |                      |                   |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |                   |         |                     |                     |
	| kubectl | -p multinode-894400 -- exec                       | multinode-894400     | minikube3\jenkins | v1.32.0 | 18 Mar 24 12:51 UTC | 18 Mar 24 12:51 UTC |
	|         | busybox-5b5d89c9d6-c2997 --                       |                      |                   |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |                   |         |                     |                     |
	| kubectl | -p multinode-894400 -- exec                       | multinode-894400     | minikube3\jenkins | v1.32.0 | 18 Mar 24 12:51 UTC | 18 Mar 24 12:51 UTC |
	|         | busybox-5b5d89c9d6-8btgf --                       |                      |                   |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |                   |         |                     |                     |
	| kubectl | -p multinode-894400 -- exec                       | multinode-894400     | minikube3\jenkins | v1.32.0 | 18 Mar 24 12:51 UTC | 18 Mar 24 12:51 UTC |
	|         | busybox-5b5d89c9d6-c2997 --                       |                      |                   |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |                   |         |                     |                     |
	| kubectl | -p multinode-894400 -- exec                       | multinode-894400     | minikube3\jenkins | v1.32.0 | 18 Mar 24 12:51 UTC | 18 Mar 24 12:51 UTC |
	|         | busybox-5b5d89c9d6-8btgf -- nslookup              |                      |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |                   |         |                     |                     |
	| kubectl | -p multinode-894400 -- exec                       | multinode-894400     | minikube3\jenkins | v1.32.0 | 18 Mar 24 12:51 UTC | 18 Mar 24 12:51 UTC |
	|         | busybox-5b5d89c9d6-c2997 -- nslookup              |                      |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |                   |         |                     |                     |
	| kubectl | -p multinode-894400 -- get pods -o                | multinode-894400     | minikube3\jenkins | v1.32.0 | 18 Mar 24 12:51 UTC | 18 Mar 24 12:51 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |                   |         |                     |                     |
	| kubectl | -p multinode-894400 -- exec                       | multinode-894400     | minikube3\jenkins | v1.32.0 | 18 Mar 24 12:51 UTC | 18 Mar 24 12:51 UTC |
	|         | busybox-5b5d89c9d6-8btgf                          |                      |                   |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |                   |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |                   |         |                     |                     |
	| kubectl | -p multinode-894400 -- exec                       | multinode-894400     | minikube3\jenkins | v1.32.0 | 18 Mar 24 12:51 UTC |                     |
	|         | busybox-5b5d89c9d6-8btgf -- sh                    |                      |                   |         |                     |                     |
	|         | -c ping -c 1 172.30.128.1                         |                      |                   |         |                     |                     |
	| kubectl | -p multinode-894400 -- exec                       | multinode-894400     | minikube3\jenkins | v1.32.0 | 18 Mar 24 12:51 UTC | 18 Mar 24 12:51 UTC |
	|         | busybox-5b5d89c9d6-c2997                          |                      |                   |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |                   |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |                   |         |                     |                     |
	| kubectl | -p multinode-894400 -- exec                       | multinode-894400     | minikube3\jenkins | v1.32.0 | 18 Mar 24 12:51 UTC |                     |
	|         | busybox-5b5d89c9d6-c2997 -- sh                    |                      |                   |         |                     |                     |
	|         | -c ping -c 1 172.30.128.1                         |                      |                   |         |                     |                     |
	|---------|---------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/18 12:44:31
	Running on machine: minikube3
	Binary: Built with gc go1.22.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0318 12:44:31.243068   11340 out.go:291] Setting OutFile to fd 920 ...
	I0318 12:44:31.244290   11340 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 12:44:31.244353   11340 out.go:304] Setting ErrFile to fd 608...
	I0318 12:44:31.244353   11340 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 12:44:31.265956   11340 out.go:298] Setting JSON to false
	I0318 12:44:31.269206   11340 start.go:129] hostinfo: {"hostname":"minikube3","uptime":314448,"bootTime":1710451423,"procs":191,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4046 Build 19045.4046","kernelVersion":"10.0.19045.4046 Build 19045.4046","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"a0f355d5-8b6e-4346-9071-73232725d096"}
	W0318 12:44:31.269206   11340 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0318 12:44:31.277872   11340 out.go:177] * [multinode-894400] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4046 Build 19045.4046
	I0318 12:44:31.280967   11340 notify.go:220] Checking for updates...
	I0318 12:44:31.283123   11340 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	I0318 12:44:31.286012   11340 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 12:44:31.288976   11340 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube3\minikube-integration\.minikube
	I0318 12:44:31.293598   11340 out.go:177]   - MINIKUBE_LOCATION=18429
	I0318 12:44:31.297230   11340 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 12:44:31.300935   11340 config.go:182] Loaded profile config "ha-747000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 12:44:31.301076   11340 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 12:44:36.288649   11340 out.go:177] * Using the hyperv driver based on user configuration
	I0318 12:44:36.292566   11340 start.go:297] selected driver: hyperv
	I0318 12:44:36.292566   11340 start.go:901] validating driver "hyperv" against <nil>
	I0318 12:44:36.292566   11340 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 12:44:36.339711   11340 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0318 12:44:36.340912   11340 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 12:44:36.341433   11340 cni.go:84] Creating CNI manager for ""
	I0318 12:44:36.341534   11340 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0318 12:44:36.341534   11340 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0318 12:44:36.341534   11340 start.go:340] cluster config:
	{Name:multinode-894400 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-894400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 12:44:36.341534   11340 iso.go:125] acquiring lock: {Name:mkefa7a887b9de67c22e0418da52288a5b488924 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 12:44:36.346698   11340 out.go:177] * Starting "multinode-894400" primary control-plane node in "multinode-894400" cluster
	I0318 12:44:36.349192   11340 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0318 12:44:36.349192   11340 preload.go:147] Found local preload: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I0318 12:44:36.349192   11340 cache.go:56] Caching tarball of preloaded images
	I0318 12:44:36.349848   11340 preload.go:173] Found C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0318 12:44:36.349848   11340 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0318 12:44:36.349848   11340 profile.go:142] Saving config to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-894400\config.json ...
	I0318 12:44:36.350439   11340 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-894400\config.json: {Name:mkea21cd75895cc7d1652b3151a2133cc0bfe56c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 12:44:36.351725   11340 start.go:360] acquireMachinesLock for multinode-894400: {Name:mk88ace50ad3bf72786f3a589a5328076247f3a1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 12:44:36.351725   11340 start.go:364] duration metric: took 0s to acquireMachinesLock for "multinode-894400"
	I0318 12:44:36.351725   11340 start.go:93] Provisioning new machine with config: &{Name:multinode-894400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-894400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 12:44:36.352365   11340 start.go:125] createHost starting for "" (driver="hyperv")
	I0318 12:44:36.355531   11340 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0318 12:44:36.356186   11340 start.go:159] libmachine.API.Create for "multinode-894400" (driver="hyperv")
	I0318 12:44:36.356186   11340 client.go:168] LocalClient.Create starting
	I0318 12:44:36.356873   11340 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem
	I0318 12:44:36.356873   11340 main.go:141] libmachine: Decoding PEM data...
	I0318 12:44:36.356873   11340 main.go:141] libmachine: Parsing certificate...
	I0318 12:44:36.357471   11340 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem
	I0318 12:44:36.357471   11340 main.go:141] libmachine: Decoding PEM data...
	I0318 12:44:36.357471   11340 main.go:141] libmachine: Parsing certificate...
	I0318 12:44:36.357471   11340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0318 12:44:38.273550   11340 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0318 12:44:38.273550   11340 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:44:38.274573   11340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0318 12:44:39.866275   11340 main.go:141] libmachine: [stdout =====>] : False
	
	I0318 12:44:39.867021   11340 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:44:39.867021   11340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0318 12:44:41.273912   11340 main.go:141] libmachine: [stdout =====>] : True
	
	I0318 12:44:41.274078   11340 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:44:41.274078   11340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0318 12:44:44.656054   11340 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0318 12:44:44.656054   11340 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:44:44.659415   11340 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube3/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.32.1-1710520390-17991-amd64.iso...
	I0318 12:44:45.065826   11340 main.go:141] libmachine: Creating SSH key...
	I0318 12:44:45.318960   11340 main.go:141] libmachine: Creating VM...
	I0318 12:44:45.318960   11340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0318 12:44:48.067064   11340 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0318 12:44:48.067064   11340 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:44:48.067299   11340 main.go:141] libmachine: Using switch "Default Switch"
	I0318 12:44:48.067425   11340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0318 12:44:49.724306   11340 main.go:141] libmachine: [stdout =====>] : True
	
	I0318 12:44:49.724306   11340 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:44:49.724439   11340 main.go:141] libmachine: Creating VHD
	I0318 12:44:49.724439   11340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\multinode-894400\fixed.vhd' -SizeBytes 10MB -Fixed
	I0318 12:44:53.244489   11340 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube3
	Path                    : C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\multinode-894400\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 22A052FD-75B5-48E2-ACF5-195947F2E6F6
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0318 12:44:53.244558   11340 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:44:53.244558   11340 main.go:141] libmachine: Writing magic tar header
	I0318 12:44:53.244653   11340 main.go:141] libmachine: Writing SSH key tar header
	I0318 12:44:53.253622   11340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\multinode-894400\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\multinode-894400\disk.vhd' -VHDType Dynamic -DeleteSource
	I0318 12:44:56.300170   11340 main.go:141] libmachine: [stdout =====>] : 
	I0318 12:44:56.300245   11340 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:44:56.300347   11340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\multinode-894400\disk.vhd' -SizeBytes 20000MB
	I0318 12:44:58.875880   11340 main.go:141] libmachine: [stdout =====>] : 
	I0318 12:44:58.875880   11340 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:44:58.876125   11340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM multinode-894400 -Path 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\multinode-894400' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0318 12:45:02.395440   11340 main.go:141] libmachine: [stdout =====>] : 
	Name             State CPUUsage(%!)(MISSING) MemoryAssigned(M) Uptime   Status             Version
	----             ----- ----------- ----------------- ------   ------             -------
	multinode-894400 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0318 12:45:02.395440   11340 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:45:02.395534   11340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName multinode-894400 -DynamicMemoryEnabled $false
	I0318 12:45:04.579048   11340 main.go:141] libmachine: [stdout =====>] : 
	I0318 12:45:04.579048   11340 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:45:04.579290   11340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor multinode-894400 -Count 2
	I0318 12:45:06.647619   11340 main.go:141] libmachine: [stdout =====>] : 
	I0318 12:45:06.647838   11340 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:45:06.647914   11340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName multinode-894400 -Path 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\multinode-894400\boot2docker.iso'
	I0318 12:45:09.078697   11340 main.go:141] libmachine: [stdout =====>] : 
	I0318 12:45:09.078697   11340 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:45:09.078697   11340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName multinode-894400 -Path 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\multinode-894400\disk.vhd'
	I0318 12:45:11.562899   11340 main.go:141] libmachine: [stdout =====>] : 
	I0318 12:45:11.562899   11340 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:45:11.562899   11340 main.go:141] libmachine: Starting VM...
	I0318 12:45:11.563343   11340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-894400
	I0318 12:45:14.474521   11340 main.go:141] libmachine: [stdout =====>] : 
	I0318 12:45:14.474521   11340 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:45:14.474521   11340 main.go:141] libmachine: Waiting for host to start...
	I0318 12:45:14.474769   11340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-894400 ).state
	I0318 12:45:16.616669   11340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 12:45:16.617091   11340 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:45:16.617168   11340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-894400 ).networkadapters[0]).ipaddresses[0]
	I0318 12:45:18.981154   11340 main.go:141] libmachine: [stdout =====>] : 
	I0318 12:45:18.981208   11340 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:45:19.987486   11340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-894400 ).state
	I0318 12:45:22.127040   11340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 12:45:22.127040   11340 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:45:22.127388   11340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-894400 ).networkadapters[0]).ipaddresses[0]
	I0318 12:45:24.568658   11340 main.go:141] libmachine: [stdout =====>] : 
	I0318 12:45:24.568658   11340 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:45:25.574794   11340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-894400 ).state
	I0318 12:45:27.676013   11340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 12:45:27.676013   11340 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:45:27.676943   11340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-894400 ).networkadapters[0]).ipaddresses[0]
	I0318 12:45:30.075239   11340 main.go:141] libmachine: [stdout =====>] : 
	I0318 12:45:30.075239   11340 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:45:31.077961   11340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-894400 ).state
	I0318 12:45:33.141284   11340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 12:45:33.141982   11340 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:45:33.142104   11340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-894400 ).networkadapters[0]).ipaddresses[0]
	I0318 12:45:35.568731   11340 main.go:141] libmachine: [stdout =====>] : 
	I0318 12:45:35.568939   11340 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:45:36.578260   11340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-894400 ).state
	I0318 12:45:38.687294   11340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 12:45:38.687294   11340 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:45:38.687294   11340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-894400 ).networkadapters[0]).ipaddresses[0]
	I0318 12:45:41.047721   11340 main.go:141] libmachine: [stdout =====>] : 172.30.129.141
	
	I0318 12:45:41.047757   11340 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:45:41.047757   11340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-894400 ).state
	I0318 12:45:42.991926   11340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 12:45:42.991926   11340 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:45:42.991926   11340 machine.go:94] provisionDockerMachine start ...
	I0318 12:45:42.992604   11340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-894400 ).state
	I0318 12:45:44.959328   11340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 12:45:44.959390   11340 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:45:44.959390   11340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-894400 ).networkadapters[0]).ipaddresses[0]
	I0318 12:45:47.322686   11340 main.go:141] libmachine: [stdout =====>] : 172.30.129.141
	
	I0318 12:45:47.322686   11340 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:45:47.329234   11340 main.go:141] libmachine: Using SSH client type: native
	I0318 12:45:47.338023   11340 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa89f80] 0xa8cb60 <nil>  [] 0s} 172.30.129.141 22 <nil> <nil>}
	I0318 12:45:47.338023   11340 main.go:141] libmachine: About to run SSH command:
	hostname
	I0318 12:45:47.449014   11340 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0318 12:45:47.449014   11340 buildroot.go:166] provisioning hostname "multinode-894400"
	I0318 12:45:47.449014   11340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-894400 ).state
	I0318 12:45:49.429311   11340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 12:45:49.429693   11340 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:45:49.429767   11340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-894400 ).networkadapters[0]).ipaddresses[0]
	I0318 12:45:51.811471   11340 main.go:141] libmachine: [stdout =====>] : 172.30.129.141
	
	I0318 12:45:51.811883   11340 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:45:51.817127   11340 main.go:141] libmachine: Using SSH client type: native
	I0318 12:45:51.817866   11340 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa89f80] 0xa8cb60 <nil>  [] 0s} 172.30.129.141 22 <nil> <nil>}
	I0318 12:45:51.817866   11340 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-894400 && echo "multinode-894400" | sudo tee /etc/hostname
	I0318 12:45:51.953743   11340 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-894400
	
	I0318 12:45:51.953897   11340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-894400 ).state
	I0318 12:45:53.924719   11340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 12:45:53.924719   11340 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:45:53.925678   11340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-894400 ).networkadapters[0]).ipaddresses[0]
	I0318 12:45:56.332803   11340 main.go:141] libmachine: [stdout =====>] : 172.30.129.141
	
	I0318 12:45:56.333074   11340 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:45:56.337959   11340 main.go:141] libmachine: Using SSH client type: native
	I0318 12:45:56.338687   11340 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa89f80] 0xa8cb60 <nil>  [] 0s} 172.30.129.141 22 <nil> <nil>}
	I0318 12:45:56.338687   11340 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-894400' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-894400/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-894400' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0318 12:45:56.475691   11340 main.go:141] libmachine: SSH cmd err, output: <nil>: 
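The `/etc/hosts` script the provisioner just ran can be reproduced as a standalone sketch. This operates on a scratch copy instead of the real `/etc/hosts` (so no `sudo` is needed); the hostname `multinode-894400` is taken from the log, and the pre-seeded file contents are invented for illustration:

```shell
#!/bin/sh
# Sketch of the idempotent /etc/hosts update seen above: if the hostname
# is absent, either rewrite an existing 127.0.1.1 entry or append one.
HOSTS=$(mktemp)
printf '127.0.0.1 localhost\n127.0.1.1 old-name\n' > "$HOSTS"  # illustrative seed
NAME=multinode-894400
if ! grep -q "[[:space:]]$NAME\$" "$HOSTS"; then
  if grep -q '^127\.0\.1\.1[[:space:]]' "$HOSTS"; then
    # An entry exists for another name: rewrite it in place.
    sed -i "s/^127\.0\.1\.1[[:space:]].*/127.0.1.1 $NAME/" "$HOSTS"
  else
    # No 127.0.1.1 entry yet: append one.
    echo "127.0.1.1 $NAME" >> "$HOSTS"
  fi
fi
cat "$HOSTS"
```

Running the script twice leaves the file unchanged the second time, which is why the provisioner can re-run it safely on every boot.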
	I0318 12:45:56.475691   11340 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube3\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube3\minikube-integration\.minikube}
	I0318 12:45:56.475691   11340 buildroot.go:174] setting up certificates
	I0318 12:45:56.475691   11340 provision.go:84] configureAuth start
	I0318 12:45:56.475691   11340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-894400 ).state
	I0318 12:45:58.491150   11340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 12:45:58.491790   11340 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:45:58.491790   11340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-894400 ).networkadapters[0]).ipaddresses[0]
	I0318 12:46:00.891276   11340 main.go:141] libmachine: [stdout =====>] : 172.30.129.141
	
	I0318 12:46:00.891276   11340 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:46:00.891939   11340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-894400 ).state
	I0318 12:46:02.892537   11340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 12:46:02.893433   11340 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:46:02.893702   11340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-894400 ).networkadapters[0]).ipaddresses[0]
	I0318 12:46:05.284484   11340 main.go:141] libmachine: [stdout =====>] : 172.30.129.141
	
	I0318 12:46:05.284484   11340 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:46:05.284715   11340 provision.go:143] copyHostCerts
	I0318 12:46:05.284715   11340 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube3\minikube-integration\.minikube/ca.pem
	I0318 12:46:05.284715   11340 exec_runner.go:144] found C:\Users\jenkins.minikube3\minikube-integration\.minikube/ca.pem, removing ...
	I0318 12:46:05.284715   11340 exec_runner.go:203] rm: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.pem
	I0318 12:46:05.285471   11340 exec_runner.go:151] cp: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube3\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0318 12:46:05.286906   11340 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube3\minikube-integration\.minikube/cert.pem
	I0318 12:46:05.287568   11340 exec_runner.go:144] found C:\Users\jenkins.minikube3\minikube-integration\.minikube/cert.pem, removing ...
	I0318 12:46:05.287622   11340 exec_runner.go:203] rm: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cert.pem
	I0318 12:46:05.287622   11340 exec_runner.go:151] cp: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube3\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0318 12:46:05.288884   11340 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube3\minikube-integration\.minikube/key.pem
	I0318 12:46:05.289587   11340 exec_runner.go:144] found C:\Users\jenkins.minikube3\minikube-integration\.minikube/key.pem, removing ...
	I0318 12:46:05.289587   11340 exec_runner.go:203] rm: C:\Users\jenkins.minikube3\minikube-integration\.minikube\key.pem
	I0318 12:46:05.289587   11340 exec_runner.go:151] cp: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube3\minikube-integration\.minikube/key.pem (1679 bytes)
	I0318 12:46:05.291057   11340 provision.go:117] generating server cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-894400 san=[127.0.0.1 172.30.129.141 localhost minikube multinode-894400]
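minikube generates the server certificate in Go, but the same SAN list from the log line above (loopback, the VM IP, and the machine names) can be produced with `openssl` as a hedged, illustrative equivalent; the paths and validity period here are hypothetical:

```shell
#!/bin/sh
# Illustrative stand-in for provision.go's server-cert step (minikube
# itself uses Go crypto, not openssl): self-signed cert with the SANs
# reported in the log, written to a scratch directory.
DIR=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout "$DIR/server-key.pem" -out "$DIR/server.pem" \
  -subj "/O=jenkins.multinode-894400" \
  -addext "subjectAltName=IP:127.0.0.1,IP:172.30.129.141,DNS:localhost,DNS:minikube,DNS:multinode-894400"
# Show the SAN extension that Docker's TLS verification will check.
openssl x509 -in "$DIR/server.pem" -noout -ext subjectAltName
```

The `-addext` flag requires OpenSSL 1.1.1 or newer.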
	I0318 12:46:05.511957   11340 provision.go:177] copyRemoteCerts
	I0318 12:46:05.522957   11340 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0318 12:46:05.522957   11340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-894400 ).state
	I0318 12:46:07.490638   11340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 12:46:07.491141   11340 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:46:07.491289   11340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-894400 ).networkadapters[0]).ipaddresses[0]
	I0318 12:46:09.837142   11340 main.go:141] libmachine: [stdout =====>] : 172.30.129.141
	
	I0318 12:46:09.837142   11340 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:46:09.837625   11340 sshutil.go:53] new ssh client: &{IP:172.30.129.141 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\multinode-894400\id_rsa Username:docker}
	I0318 12:46:09.942697   11340 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.4196619s)
	I0318 12:46:09.942756   11340 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0318 12:46:09.942904   11340 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0318 12:46:09.986099   11340 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0318 12:46:09.986517   11340 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1216 bytes)
	I0318 12:46:10.026982   11340 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0318 12:46:10.027201   11340 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0318 12:46:10.070306   11340 provision.go:87] duration metric: took 13.5945127s to configureAuth
	I0318 12:46:10.070306   11340 buildroot.go:189] setting minikube options for container-runtime
	I0318 12:46:10.071309   11340 config.go:182] Loaded profile config "multinode-894400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 12:46:10.071309   11340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-894400 ).state
	I0318 12:46:12.068927   11340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 12:46:12.069239   11340 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:46:12.069239   11340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-894400 ).networkadapters[0]).ipaddresses[0]
	I0318 12:46:14.420373   11340 main.go:141] libmachine: [stdout =====>] : 172.30.129.141
	
	I0318 12:46:14.421536   11340 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:46:14.426557   11340 main.go:141] libmachine: Using SSH client type: native
	I0318 12:46:14.427339   11340 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa89f80] 0xa8cb60 <nil>  [] 0s} 172.30.129.141 22 <nil> <nil>}
	I0318 12:46:14.427339   11340 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0318 12:46:14.542499   11340 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0318 12:46:14.542499   11340 buildroot.go:70] root file system type: tmpfs
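The rootfs probe above is a one-liner worth isolating; on the Buildroot guest it prints `tmpfs`, which (presumably) is how minikube learns the root filesystem is non-persistent and the Docker unit must be rewritten on each provision rather than relied upon to survive reboots:

```shell
#!/bin/sh
# Same probe as the SSH command in the log: report the filesystem type
# of / only (df --output=fstype prints a header line, hence tail -n 1).
FSTYPE=$(df --output=fstype / | tail -n 1)
echo "$FSTYPE"
```

`--output` is a GNU coreutils extension; the printed type depends on the host this runs on (`tmpfs` on the guest in the log, typically `ext4` or similar elsewhere).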
	I0318 12:46:14.542729   11340 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0318 12:46:14.542729   11340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-894400 ).state
	I0318 12:46:16.510259   11340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 12:46:16.510259   11340 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:46:16.510259   11340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-894400 ).networkadapters[0]).ipaddresses[0]
	I0318 12:46:18.975935   11340 main.go:141] libmachine: [stdout =====>] : 172.30.129.141
	
	I0318 12:46:18.976485   11340 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:46:18.982089   11340 main.go:141] libmachine: Using SSH client type: native
	I0318 12:46:18.982806   11340 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa89f80] 0xa8cb60 <nil>  [] 0s} 172.30.129.141 22 <nil> <nil>}
	I0318 12:46:18.982806   11340 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0318 12:46:19.120654   11340 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0318 12:46:19.120797   11340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-894400 ).state
	I0318 12:46:21.166130   11340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 12:46:21.166724   11340 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:46:21.166824   11340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-894400 ).networkadapters[0]).ipaddresses[0]
	I0318 12:46:23.517324   11340 main.go:141] libmachine: [stdout =====>] : 172.30.129.141
	
	I0318 12:46:23.517666   11340 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:46:23.523939   11340 main.go:141] libmachine: Using SSH client type: native
	I0318 12:46:23.524540   11340 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa89f80] 0xa8cb60 <nil>  [] 0s} 172.30.129.141 22 <nil> <nil>}
	I0318 12:46:23.524540   11340 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0318 12:46:25.571278   11340 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
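The empty `ExecStart=` line in the unit written above is the standard systemd idiom for replacing an inherited command: the blank directive clears any `ExecStart=` from a base unit, and the following one sets the replacement. Without the clearing line, systemd rejects the unit with "Service has more than one ExecStart= setting", exactly as the unit's own comment warns. A minimal illustrative drop-in (path and command hypothetical):

```ini
# /etc/systemd/system/docker.service.d/override.conf (illustrative)
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock
```

A drop-in like this is applied with `sudo systemctl daemon-reload && sudo systemctl restart docker`, mirroring the `daemon-reload`/`restart` sequence in the log below.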
	
	I0318 12:46:25.571278   11340 machine.go:97] duration metric: took 42.5790333s to provisionDockerMachine
	I0318 12:46:25.571373   11340 client.go:171] duration metric: took 1m49.2143672s to LocalClient.Create
	I0318 12:46:25.571373   11340 start.go:167] duration metric: took 1m49.2143672s to libmachine.API.Create "multinode-894400"
	I0318 12:46:25.571373   11340 start.go:293] postStartSetup for "multinode-894400" (driver="hyperv")
	I0318 12:46:25.571465   11340 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0318 12:46:25.584876   11340 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0318 12:46:25.584876   11340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-894400 ).state
	I0318 12:46:27.605508   11340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 12:46:27.605508   11340 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:46:27.606057   11340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-894400 ).networkadapters[0]).ipaddresses[0]
	I0318 12:46:30.008319   11340 main.go:141] libmachine: [stdout =====>] : 172.30.129.141
	
	I0318 12:46:30.008499   11340 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:46:30.008724   11340 sshutil.go:53] new ssh client: &{IP:172.30.129.141 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\multinode-894400\id_rsa Username:docker}
	I0318 12:46:30.107117   11340 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.5221428s)
	I0318 12:46:30.119349   11340 ssh_runner.go:195] Run: cat /etc/os-release
	I0318 12:46:30.127562   11340 command_runner.go:130] > NAME=Buildroot
	I0318 12:46:30.127829   11340 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0318 12:46:30.127829   11340 command_runner.go:130] > ID=buildroot
	I0318 12:46:30.127829   11340 command_runner.go:130] > VERSION_ID=2023.02.9
	I0318 12:46:30.127829   11340 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0318 12:46:30.127829   11340 info.go:137] Remote host: Buildroot 2023.02.9
	I0318 12:46:30.127961   11340 filesync.go:126] Scanning C:\Users\jenkins.minikube3\minikube-integration\.minikube\addons for local assets ...
	I0318 12:46:30.128385   11340 filesync.go:126] Scanning C:\Users\jenkins.minikube3\minikube-integration\.minikube\files for local assets ...
	I0318 12:46:30.129223   11340 filesync.go:149] local asset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\134242.pem -> 134242.pem in /etc/ssl/certs
	I0318 12:46:30.129505   11340 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\134242.pem -> /etc/ssl/certs/134242.pem
	I0318 12:46:30.140736   11340 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0318 12:46:30.157157   11340 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\134242.pem --> /etc/ssl/certs/134242.pem (1708 bytes)
	I0318 12:46:30.205619   11340 start.go:296] duration metric: took 4.6341196s for postStartSetup
	I0318 12:46:30.208716   11340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-894400 ).state
	I0318 12:46:32.193118   11340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 12:46:32.193118   11340 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:46:32.193309   11340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-894400 ).networkadapters[0]).ipaddresses[0]
	I0318 12:46:34.558076   11340 main.go:141] libmachine: [stdout =====>] : 172.30.129.141
	
	I0318 12:46:34.558557   11340 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:46:34.558557   11340 profile.go:142] Saving config to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-894400\config.json ...
	I0318 12:46:34.561937   11340 start.go:128] duration metric: took 1m58.2086852s to createHost
	I0318 12:46:34.561937   11340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-894400 ).state
	I0318 12:46:36.531232   11340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 12:46:36.531371   11340 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:46:36.531371   11340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-894400 ).networkadapters[0]).ipaddresses[0]
	I0318 12:46:38.887959   11340 main.go:141] libmachine: [stdout =====>] : 172.30.129.141
	
	I0318 12:46:38.888448   11340 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:46:38.893619   11340 main.go:141] libmachine: Using SSH client type: native
	I0318 12:46:38.894286   11340 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa89f80] 0xa8cb60 <nil>  [] 0s} 172.30.129.141 22 <nil> <nil>}
	I0318 12:46:38.894286   11340 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0318 12:46:39.011486   11340 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710765999.006359255
	
	I0318 12:46:39.011486   11340 fix.go:216] guest clock: 1710765999.006359255
	I0318 12:46:39.011604   11340 fix.go:229] Guest: 2024-03-18 12:46:39.006359255 +0000 UTC Remote: 2024-03-18 12:46:34.561937 +0000 UTC m=+123.488229501 (delta=4.444422255s)
	I0318 12:46:39.011681   11340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-894400 ).state
	I0318 12:46:41.021715   11340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 12:46:41.022268   11340 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:46:41.022384   11340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-894400 ).networkadapters[0]).ipaddresses[0]
	I0318 12:46:43.472622   11340 main.go:141] libmachine: [stdout =====>] : 172.30.129.141
	
	I0318 12:46:43.473053   11340 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:46:43.479465   11340 main.go:141] libmachine: Using SSH client type: native
	I0318 12:46:43.479815   11340 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa89f80] 0xa8cb60 <nil>  [] 0s} 172.30.129.141 22 <nil> <nil>}
	I0318 12:46:43.479815   11340 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1710765999
	I0318 12:46:43.608338   11340 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Mar 18 12:46:39 UTC 2024
	
	I0318 12:46:43.608338   11340 fix.go:236] clock set: Mon Mar 18 12:46:39 UTC 2024
	 (err=<nil>)
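The clock-sync exchange above follows a simple pattern: the host reads the guest clock over SSH with `date +%s.%N`, computes the delta against its own clock (4.44s here), and resets the guest with `date -s @<epoch-seconds>`. A sketch using the epoch value from the log (read-only; it does not set any clock):

```shell
#!/bin/sh
# Decode the epoch the provisioner pushed to the guest; the result
# should match the guest's "date" output recorded in the log.
EPOCH=1710765999   # value taken from the log above
GUEST=$(date -u -d "@$EPOCH" '+%a %b %e %T UTC %Y')
echo "$GUEST"
```

`-d "@<seconds>"` is GNU date syntax; BSD date would need `-r <seconds>` instead.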
	I0318 12:46:43.608338   11340 start.go:83] releasing machines lock for "multinode-894400", held for 2m7.2556586s
	I0318 12:46:43.608338   11340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-894400 ).state
	I0318 12:46:45.613507   11340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 12:46:45.614033   11340 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:46:45.614033   11340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-894400 ).networkadapters[0]).ipaddresses[0]
	I0318 12:46:47.990353   11340 main.go:141] libmachine: [stdout =====>] : 172.30.129.141
	
	I0318 12:46:47.990353   11340 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:46:47.994393   11340 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0318 12:46:47.994608   11340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-894400 ).state
	I0318 12:46:48.005994   11340 ssh_runner.go:195] Run: cat /version.json
	I0318 12:46:48.005994   11340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-894400 ).state
	I0318 12:46:50.063238   11340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 12:46:50.063238   11340 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:46:50.063238   11340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-894400 ).networkadapters[0]).ipaddresses[0]
	I0318 12:46:50.069417   11340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 12:46:50.069417   11340 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:46:50.069417   11340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-894400 ).networkadapters[0]).ipaddresses[0]
	I0318 12:46:52.579786   11340 main.go:141] libmachine: [stdout =====>] : 172.30.129.141
	
	I0318 12:46:52.579786   11340 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:46:52.579786   11340 sshutil.go:53] new ssh client: &{IP:172.30.129.141 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\multinode-894400\id_rsa Username:docker}
	I0318 12:46:52.598732   11340 main.go:141] libmachine: [stdout =====>] : 172.30.129.141
	
	I0318 12:46:52.598732   11340 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:46:52.599358   11340 sshutil.go:53] new ssh client: &{IP:172.30.129.141 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\multinode-894400\id_rsa Username:docker}
	I0318 12:46:52.751593   11340 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0318 12:46:52.751706   11340 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.7570891s)
	I0318 12:46:52.751706   11340 command_runner.go:130] > {"iso_version": "v1.32.1-1710520390-17991", "kicbase_version": "v0.0.42-1710284843-18375", "minikube_version": "v1.32.0", "commit": "3dd306d082737a9ddf335108b42c9fcb2ad84298"}
	I0318 12:46:52.751824   11340 ssh_runner.go:235] Completed: cat /version.json: (4.7456772s)
	I0318 12:46:52.768373   11340 ssh_runner.go:195] Run: systemctl --version
	I0318 12:46:52.778197   11340 command_runner.go:130] > systemd 252 (252)
	I0318 12:46:52.778364   11340 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0318 12:46:52.789834   11340 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0318 12:46:52.798334   11340 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0318 12:46:52.798670   11340 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0318 12:46:52.809191   11340 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0318 12:46:52.838371   11340 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0318 12:46:52.838371   11340 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0318 12:46:52.838371   11340 start.go:494] detecting cgroup driver to use...
	I0318 12:46:52.838729   11340 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0318 12:46:52.873034   11340 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0318 12:46:52.884274   11340 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0318 12:46:52.915267   11340 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0318 12:46:52.934743   11340 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0318 12:46:52.946702   11340 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0318 12:46:52.975744   11340 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0318 12:46:53.007145   11340 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0318 12:46:53.036346   11340 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0318 12:46:53.066676   11340 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0318 12:46:53.095965   11340 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0318 12:46:53.123885   11340 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0318 12:46:53.141596   11340 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0318 12:46:53.152909   11340 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0318 12:46:53.181149   11340 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 12:46:53.397961   11340 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0318 12:46:53.425808   11340 start.go:494] detecting cgroup driver to use...
	I0318 12:46:53.437256   11340 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0318 12:46:53.455308   11340 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0318 12:46:53.455308   11340 command_runner.go:130] > [Unit]
	I0318 12:46:53.455308   11340 command_runner.go:130] > Description=Docker Application Container Engine
	I0318 12:46:53.455308   11340 command_runner.go:130] > Documentation=https://docs.docker.com
	I0318 12:46:53.455308   11340 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0318 12:46:53.455308   11340 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0318 12:46:53.455308   11340 command_runner.go:130] > StartLimitBurst=3
	I0318 12:46:53.455308   11340 command_runner.go:130] > StartLimitIntervalSec=60
	I0318 12:46:53.455308   11340 command_runner.go:130] > [Service]
	I0318 12:46:53.455308   11340 command_runner.go:130] > Type=notify
	I0318 12:46:53.455308   11340 command_runner.go:130] > Restart=on-failure
	I0318 12:46:53.455308   11340 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0318 12:46:53.455308   11340 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0318 12:46:53.455308   11340 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0318 12:46:53.455308   11340 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0318 12:46:53.455308   11340 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0318 12:46:53.455308   11340 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0318 12:46:53.455308   11340 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0318 12:46:53.455308   11340 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0318 12:46:53.455308   11340 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0318 12:46:53.456089   11340 command_runner.go:130] > ExecStart=
	I0318 12:46:53.456089   11340 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0318 12:46:53.456089   11340 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0318 12:46:53.456089   11340 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0318 12:46:53.456089   11340 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0318 12:46:53.456232   11340 command_runner.go:130] > LimitNOFILE=infinity
	I0318 12:46:53.456232   11340 command_runner.go:130] > LimitNPROC=infinity
	I0318 12:46:53.456232   11340 command_runner.go:130] > LimitCORE=infinity
	I0318 12:46:53.456232   11340 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0318 12:46:53.456232   11340 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0318 12:46:53.456347   11340 command_runner.go:130] > TasksMax=infinity
	I0318 12:46:53.456347   11340 command_runner.go:130] > TimeoutStartSec=0
	I0318 12:46:53.456347   11340 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0318 12:46:53.456347   11340 command_runner.go:130] > Delegate=yes
	I0318 12:46:53.456347   11340 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0318 12:46:53.456465   11340 command_runner.go:130] > KillMode=process
	I0318 12:46:53.456465   11340 command_runner.go:130] > [Install]
	I0318 12:46:53.456465   11340 command_runner.go:130] > WantedBy=multi-user.target
	I0318 12:46:53.470412   11340 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0318 12:46:53.498909   11340 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0318 12:46:53.538367   11340 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0318 12:46:53.570315   11340 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0318 12:46:53.603000   11340 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0318 12:46:53.664138   11340 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0318 12:46:53.683548   11340 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0318 12:46:53.713617   11340 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0318 12:46:53.725196   11340 ssh_runner.go:195] Run: which cri-dockerd
	I0318 12:46:53.730406   11340 command_runner.go:130] > /usr/bin/cri-dockerd
	I0318 12:46:53.752121   11340 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0318 12:46:53.770815   11340 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0318 12:46:53.812194   11340 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0318 12:46:53.986618   11340 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0318 12:46:54.157754   11340 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0318 12:46:54.157754   11340 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0318 12:46:54.198394   11340 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 12:46:54.386454   11340 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0318 12:46:56.856177   11340 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.4697045s)
	I0318 12:46:56.869349   11340 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0318 12:46:56.900734   11340 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0318 12:46:56.932158   11340 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0318 12:46:57.112730   11340 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0318 12:46:57.294131   11340 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 12:46:57.456518   11340 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0318 12:46:57.494863   11340 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0318 12:46:57.526791   11340 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 12:46:57.698746   11340 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0318 12:46:57.790797   11340 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0318 12:46:57.802967   11340 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0318 12:46:57.810261   11340 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0318 12:46:57.810361   11340 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0318 12:46:57.810361   11340 command_runner.go:130] > Device: 0,22	Inode: 879         Links: 1
	I0318 12:46:57.810361   11340 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0318 12:46:57.810361   11340 command_runner.go:130] > Access: 2024-03-18 12:46:57.723397267 +0000
	I0318 12:46:57.810466   11340 command_runner.go:130] > Modify: 2024-03-18 12:46:57.723397267 +0000
	I0318 12:46:57.810466   11340 command_runner.go:130] > Change: 2024-03-18 12:46:57.726397247 +0000
	I0318 12:46:57.810466   11340 command_runner.go:130] >  Birth: -
	I0318 12:46:57.810544   11340 start.go:562] Will wait 60s for crictl version
	I0318 12:46:57.822829   11340 ssh_runner.go:195] Run: which crictl
	I0318 12:46:57.828465   11340 command_runner.go:130] > /usr/bin/crictl
	I0318 12:46:57.839767   11340 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0318 12:46:57.904146   11340 command_runner.go:130] > Version:  0.1.0
	I0318 12:46:57.904585   11340 command_runner.go:130] > RuntimeName:  docker
	I0318 12:46:57.904585   11340 command_runner.go:130] > RuntimeVersion:  25.0.4
	I0318 12:46:57.904585   11340 command_runner.go:130] > RuntimeApiVersion:  v1
	I0318 12:46:57.904638   11340 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  25.0.4
	RuntimeApiVersion:  v1
	I0318 12:46:57.913584   11340 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0318 12:46:57.949823   11340 command_runner.go:130] > 25.0.4
	I0318 12:46:57.957293   11340 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0318 12:46:57.985881   11340 command_runner.go:130] > 25.0.4
	I0318 12:46:57.991470   11340 out.go:204] * Preparing Kubernetes v1.28.4 on Docker 25.0.4 ...
	I0318 12:46:57.991470   11340 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0318 12:46:57.995389   11340 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0318 12:46:57.995389   11340 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0318 12:46:57.995389   11340 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0318 12:46:57.995389   11340 ip.go:207] Found interface: {Index:9 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:93:df:b5 Flags:up|broadcast|multicast|running}
	I0318 12:46:57.998952   11340 ip.go:210] interface addr: fe80::572b:6b1d:9130:e88b/64
	I0318 12:46:57.998952   11340 ip.go:210] interface addr: 172.30.128.1/20
	I0318 12:46:58.008761   11340 ssh_runner.go:195] Run: grep 172.30.128.1	host.minikube.internal$ /etc/hosts
	I0318 12:46:58.015795   11340 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.30.128.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 12:46:58.034114   11340 kubeadm.go:877] updating cluster {Name:multinode-894400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.28.4 ClusterName:multinode-894400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.30.129.141 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptio
ns:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0318 12:46:58.034316   11340 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0318 12:46:58.042499   11340 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0318 12:46:58.067328   11340 docker.go:685] Got preloaded images: 
	I0318 12:46:58.067328   11340 docker.go:691] registry.k8s.io/kube-apiserver:v1.28.4 wasn't preloaded
	I0318 12:46:58.077941   11340 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0318 12:46:58.093067   11340 command_runner.go:139] > {"Repositories":{}}
	I0318 12:46:58.106451   11340 ssh_runner.go:195] Run: which lz4
	I0318 12:46:58.111190   11340 command_runner.go:130] > /usr/bin/lz4
	I0318 12:46:58.111190   11340 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0318 12:46:58.121297   11340 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0318 12:46:58.127456   11340 command_runner.go:130] ! stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0318 12:46:58.128814   11340 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0318 12:46:58.129423   11340 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (423165415 bytes)
	I0318 12:47:00.163617   11340 docker.go:649] duration metric: took 2.0512773s to copy over tarball
	I0318 12:47:00.178755   11340 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0318 12:47:10.621948   11340 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (10.443025s)
	I0318 12:47:10.622005   11340 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0318 12:47:10.690225   11340 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0318 12:47:10.708438   11340 command_runner.go:139] > {"Repositories":{"gcr.io/k8s-minikube/storage-provisioner":{"gcr.io/k8s-minikube/storage-provisioner:v5":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562"},"registry.k8s.io/coredns/coredns":{"registry.k8s.io/coredns/coredns:v1.10.1":"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e":"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc"},"registry.k8s.io/etcd":{"registry.k8s.io/etcd:3.5.9-0":"sha256:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9","registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3":"sha256:73deb9a3f702532592a4167455f8
bf2e5f5d900bcc959ba2fd2d35c321de1af9"},"registry.k8s.io/kube-apiserver":{"registry.k8s.io/kube-apiserver:v1.28.4":"sha256:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257","registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb":"sha256:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257"},"registry.k8s.io/kube-controller-manager":{"registry.k8s.io/kube-controller-manager:v1.28.4":"sha256:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591","registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c":"sha256:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591"},"registry.k8s.io/kube-proxy":{"registry.k8s.io/kube-proxy:v1.28.4":"sha256:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e","registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532":"sha256:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021
a3a2899304398e"},"registry.k8s.io/kube-scheduler":{"registry.k8s.io/kube-scheduler:v1.28.4":"sha256:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1","registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba":"sha256:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1"},"registry.k8s.io/pause":{"registry.k8s.io/pause:3.9":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c"}}}
	I0318 12:47:10.708438   11340 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2629 bytes)
	I0318 12:47:10.750101   11340 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 12:47:10.958422   11340 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0318 12:47:13.821124   11340 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.862681s)
	I0318 12:47:13.830497   11340 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0318 12:47:13.857555   11340 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.28.4
	I0318 12:47:13.857606   11340 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.28.4
	I0318 12:47:13.857606   11340 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.28.4
	I0318 12:47:13.857661   11340 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.28.4
	I0318 12:47:13.857661   11340 command_runner.go:130] > registry.k8s.io/etcd:3.5.9-0
	I0318 12:47:13.857661   11340 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.10.1
	I0318 12:47:13.857661   11340 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0318 12:47:13.857705   11340 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 12:47:13.857757   11340 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.4
	registry.k8s.io/kube-scheduler:v1.28.4
	registry.k8s.io/kube-proxy:v1.28.4
	registry.k8s.io/kube-controller-manager:v1.28.4
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0318 12:47:13.857757   11340 cache_images.go:84] Images are preloaded, skipping loading
	I0318 12:47:13.857854   11340 kubeadm.go:928] updating node { 172.30.129.141 8443 v1.28.4 docker true true} ...
	I0318 12:47:13.857881   11340 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-894400 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.30.129.141
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-894400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0318 12:47:13.867105   11340 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0318 12:47:13.899842   11340 command_runner.go:130] > cgroupfs
	I0318 12:47:13.900766   11340 cni.go:84] Creating CNI manager for ""
	I0318 12:47:13.900766   11340 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0318 12:47:13.900766   11340 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0318 12:47:13.900766   11340 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.30.129.141 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-894400 NodeName:multinode-894400 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.30.129.141"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.30.129.141 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPat
h:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0318 12:47:13.901514   11340 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.30.129.141
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-894400"
	  kubeletExtraArgs:
	    node-ip: 172.30.129.141
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.30.129.141"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0318 12:47:13.913904   11340 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0318 12:47:13.930805   11340 command_runner.go:130] > kubeadm
	I0318 12:47:13.930830   11340 command_runner.go:130] > kubectl
	I0318 12:47:13.930830   11340 command_runner.go:130] > kubelet
	I0318 12:47:13.930830   11340 binaries.go:44] Found k8s binaries, skipping transfer
	I0318 12:47:13.942904   11340 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0318 12:47:13.962238   11340 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0318 12:47:13.995788   11340 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0318 12:47:14.025938   11340 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2164 bytes)
	I0318 12:47:14.071193   11340 ssh_runner.go:195] Run: grep 172.30.129.141	control-plane.minikube.internal$ /etc/hosts
	I0318 12:47:14.077652   11340 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.30.129.141	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 12:47:14.109867   11340 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 12:47:14.289020   11340 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 12:47:14.317158   11340 certs.go:68] Setting up C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-894400 for IP: 172.30.129.141
	I0318 12:47:14.317237   11340 certs.go:194] generating shared ca certs ...
	I0318 12:47:14.317237   11340 certs.go:226] acquiring lock for ca certs: {Name:mk09ff4ada22228900e1815c250154c7d8d76854 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 12:47:14.317883   11340 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.key
	I0318 12:47:14.318454   11340 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.key
	I0318 12:47:14.318454   11340 certs.go:256] generating profile certs ...
	I0318 12:47:14.319095   11340 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-894400\client.key
	I0318 12:47:14.319893   11340 crypto.go:68] Generating cert C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-894400\client.crt with IP's: []
	I0318 12:47:14.742367   11340 crypto.go:156] Writing cert to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-894400\client.crt ...
	I0318 12:47:14.742367   11340 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-894400\client.crt: {Name:mkebe46bb40e6589e282f1cd373fb6aba15e9575 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 12:47:14.744382   11340 crypto.go:164] Writing key to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-894400\client.key ...
	I0318 12:47:14.744382   11340 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-894400\client.key: {Name:mk2ac9748fffa04f4f0357fe297e6b9a7cc9c4ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 12:47:14.745377   11340 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-894400\apiserver.key.4779b5b0
	I0318 12:47:14.746110   11340 crypto.go:68] Generating cert C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-894400\apiserver.crt.4779b5b0 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.30.129.141]
	I0318 12:47:15.010844   11340 crypto.go:156] Writing cert to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-894400\apiserver.crt.4779b5b0 ...
	I0318 12:47:15.010844   11340 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-894400\apiserver.crt.4779b5b0: {Name:mkc2d5e4722c4f577ec022e286cc9117a572e366 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 12:47:15.012524   11340 crypto.go:164] Writing key to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-894400\apiserver.key.4779b5b0 ...
	I0318 12:47:15.012524   11340 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-894400\apiserver.key.4779b5b0: {Name:mkc91e1f6ff63efd75ac60ded3a90c47d4c103f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 12:47:15.013519   11340 certs.go:381] copying C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-894400\apiserver.crt.4779b5b0 -> C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-894400\apiserver.crt
	I0318 12:47:15.022554   11340 certs.go:385] copying C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-894400\apiserver.key.4779b5b0 -> C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-894400\apiserver.key
	I0318 12:47:15.024521   11340 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-894400\proxy-client.key
	I0318 12:47:15.024521   11340 crypto.go:68] Generating cert C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-894400\proxy-client.crt with IP's: []
	I0318 12:47:15.157620   11340 crypto.go:156] Writing cert to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-894400\proxy-client.crt ...
	I0318 12:47:15.157620   11340 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-894400\proxy-client.crt: {Name:mk97287e145c7a1d9dbdfff416527cb7279e4d5c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 12:47:15.159837   11340 crypto.go:164] Writing key to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-894400\proxy-client.key ...
	I0318 12:47:15.159837   11340 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-894400\proxy-client.key: {Name:mk3c8021ebc0b0c9f2dfd1536801b0ef5b378694 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 12:47:15.160086   11340 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0318 12:47:15.161272   11340 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0318 12:47:15.161487   11340 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0318 12:47:15.161487   11340 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0318 12:47:15.161743   11340 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-894400\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0318 12:47:15.161943   11340 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-894400\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0318 12:47:15.162067   11340 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-894400\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0318 12:47:15.169030   11340 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-894400\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0318 12:47:15.169982   11340 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\13424.pem (1338 bytes)
	W0318 12:47:15.170235   11340 certs.go:480] ignoring C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\13424_empty.pem, impossibly tiny 0 bytes
	I0318 12:47:15.170433   11340 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0318 12:47:15.170433   11340 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0318 12:47:15.170433   11340 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0318 12:47:15.171097   11340 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0318 12:47:15.171376   11340 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\134242.pem (1708 bytes)
	I0318 12:47:15.171881   11340 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\134242.pem -> /usr/share/ca-certificates/134242.pem
	I0318 12:47:15.172075   11340 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0318 12:47:15.172245   11340 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\13424.pem -> /usr/share/ca-certificates/13424.pem
	I0318 12:47:15.172895   11340 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0318 12:47:15.216521   11340 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0318 12:47:15.258399   11340 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0318 12:47:15.307500   11340 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0318 12:47:15.358372   11340 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-894400\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0318 12:47:15.400848   11340 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-894400\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0318 12:47:15.442913   11340 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-894400\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0318 12:47:15.484096   11340 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-894400\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0318 12:47:15.527736   11340 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\134242.pem --> /usr/share/ca-certificates/134242.pem (1708 bytes)
	I0318 12:47:15.570534   11340 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0318 12:47:15.611278   11340 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\13424.pem --> /usr/share/ca-certificates/13424.pem (1338 bytes)
	I0318 12:47:15.652500   11340 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0318 12:47:15.692602   11340 ssh_runner.go:195] Run: openssl version
	I0318 12:47:15.701560   11340 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0318 12:47:15.713448   11340 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0318 12:47:15.742956   11340 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0318 12:47:15.749759   11340 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Mar 18 11:07 /usr/share/ca-certificates/minikubeCA.pem
	I0318 12:47:15.749759   11340 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 18 11:07 /usr/share/ca-certificates/minikubeCA.pem
	I0318 12:47:15.759682   11340 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0318 12:47:15.767375   11340 command_runner.go:130] > b5213941
	I0318 12:47:15.781007   11340 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0318 12:47:15.811635   11340 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13424.pem && ln -fs /usr/share/ca-certificates/13424.pem /etc/ssl/certs/13424.pem"
	I0318 12:47:15.843321   11340 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13424.pem
	I0318 12:47:15.849031   11340 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Mar 18 11:25 /usr/share/ca-certificates/13424.pem
	I0318 12:47:15.849156   11340 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 18 11:25 /usr/share/ca-certificates/13424.pem
	I0318 12:47:15.861023   11340 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13424.pem
	I0318 12:47:15.868667   11340 command_runner.go:130] > 51391683
	I0318 12:47:15.880380   11340 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13424.pem /etc/ssl/certs/51391683.0"
	I0318 12:47:15.910284   11340 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/134242.pem && ln -fs /usr/share/ca-certificates/134242.pem /etc/ssl/certs/134242.pem"
	I0318 12:47:15.941426   11340 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/134242.pem
	I0318 12:47:15.947529   11340 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Mar 18 11:25 /usr/share/ca-certificates/134242.pem
	I0318 12:47:15.947529   11340 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 18 11:25 /usr/share/ca-certificates/134242.pem
	I0318 12:47:15.958514   11340 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/134242.pem
	I0318 12:47:15.967494   11340 command_runner.go:130] > 3ec20f2e
	I0318 12:47:15.979754   11340 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/134242.pem /etc/ssl/certs/3ec20f2e.0"
	I0318 12:47:16.008241   11340 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0318 12:47:16.013234   11340 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0318 12:47:16.014046   11340 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0318 12:47:16.014046   11340 kubeadm.go:391] StartCluster: {Name:multinode-894400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-894400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.30.129.141 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 12:47:16.021179   11340 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0318 12:47:16.056422   11340 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0318 12:47:16.074292   11340 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I0318 12:47:16.074292   11340 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I0318 12:47:16.074292   11340 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I0318 12:47:16.086206   11340 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0318 12:47:16.114272   11340 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 12:47:16.129493   11340 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0318 12:47:16.129555   11340 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0318 12:47:16.129555   11340 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0318 12:47:16.129555   11340 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 12:47:16.129555   11340 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 12:47:16.129555   11340 kubeadm.go:156] found existing configuration files:
	
	I0318 12:47:16.142306   11340 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0318 12:47:16.156263   11340 command_runner.go:130] ! grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 12:47:16.156316   11340 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 12:47:16.169539   11340 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 12:47:16.196140   11340 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0318 12:47:16.213525   11340 command_runner.go:130] ! grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 12:47:16.213525   11340 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 12:47:16.226620   11340 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 12:47:16.255018   11340 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0318 12:47:16.269908   11340 command_runner.go:130] ! grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 12:47:16.269908   11340 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 12:47:16.281752   11340 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 12:47:16.312208   11340 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0318 12:47:16.327891   11340 command_runner.go:130] ! grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 12:47:16.327891   11340 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 12:47:16.340509   11340 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0318 12:47:16.356297   11340 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0318 12:47:16.626720   11340 kubeadm.go:309] [init] Using Kubernetes version: v1.28.4
	I0318 12:47:16.626720   11340 command_runner.go:130] > [init] Using Kubernetes version: v1.28.4
	I0318 12:47:16.626938   11340 command_runner.go:130] > [preflight] Running pre-flight checks
	I0318 12:47:16.626986   11340 kubeadm.go:309] [preflight] Running pre-flight checks
	I0318 12:47:16.812247   11340 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0318 12:47:16.812247   11340 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I0318 12:47:16.812247   11340 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0318 12:47:16.812247   11340 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0318 12:47:16.812810   11340 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0318 12:47:16.812899   11340 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0318 12:47:17.182255   11340 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0318 12:47:17.185119   11340 out.go:204]   - Generating certificates and keys ...
	I0318 12:47:17.182255   11340 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0318 12:47:17.185436   11340 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0318 12:47:17.185436   11340 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0318 12:47:17.185436   11340 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0318 12:47:17.185436   11340 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0318 12:47:17.374104   11340 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0318 12:47:17.374104   11340 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I0318 12:47:17.665804   11340 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0318 12:47:17.666353   11340 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I0318 12:47:17.826846   11340 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0318 12:47:17.826846   11340 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I0318 12:47:18.004775   11340 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0318 12:47:18.005032   11340 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I0318 12:47:18.163359   11340 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I0318 12:47:18.163359   11340 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0318 12:47:18.163643   11340 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-894400] and IPs [172.30.129.141 127.0.0.1 ::1]
	I0318 12:47:18.163643   11340 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-894400] and IPs [172.30.129.141 127.0.0.1 ::1]
	I0318 12:47:18.247081   11340 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0318 12:47:18.247081   11340 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I0318 12:47:18.247081   11340 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-894400] and IPs [172.30.129.141 127.0.0.1 ::1]
	I0318 12:47:18.247081   11340 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-894400] and IPs [172.30.129.141 127.0.0.1 ::1]
	I0318 12:47:18.351876   11340 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0318 12:47:18.351876   11340 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I0318 12:47:18.560564   11340 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0318 12:47:18.560613   11340 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I0318 12:47:18.617252   11340 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0318 12:47:18.617281   11340 command_runner.go:130] > [certs] Generating "sa" key and public key
	I0318 12:47:18.617281   11340 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0318 12:47:18.617281   11340 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0318 12:47:18.743352   11340 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0318 12:47:18.743352   11340 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0318 12:47:19.055559   11340 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0318 12:47:19.055618   11340 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0318 12:47:19.251481   11340 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0318 12:47:19.251481   11340 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0318 12:47:19.532177   11340 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0318 12:47:19.532177   11340 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0318 12:47:19.533708   11340 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0318 12:47:19.533708   11340 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0318 12:47:19.543031   11340 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0318 12:47:19.543089   11340 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0318 12:47:19.594607   11340 out.go:204]   - Booting up control plane ...
	I0318 12:47:19.595236   11340 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0318 12:47:19.595236   11340 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0318 12:47:19.595236   11340 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0318 12:47:19.595236   11340 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0318 12:47:19.595763   11340 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0318 12:47:19.595885   11340 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0318 12:47:19.595953   11340 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0318 12:47:19.595953   11340 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0318 12:47:19.595953   11340 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0318 12:47:19.595953   11340 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0318 12:47:19.596488   11340 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0318 12:47:19.596605   11340 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0318 12:47:19.769817   11340 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0318 12:47:19.769817   11340 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0318 12:47:26.776054   11340 kubeadm.go:309] [apiclient] All control plane components are healthy after 7.005827 seconds
	I0318 12:47:26.776481   11340 command_runner.go:130] > [apiclient] All control plane components are healthy after 7.005827 seconds
	I0318 12:47:26.776758   11340 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0318 12:47:26.776858   11340 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0318 12:47:26.812639   11340 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0318 12:47:26.812639   11340 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0318 12:47:27.360525   11340 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0318 12:47:27.360525   11340 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I0318 12:47:27.361068   11340 kubeadm.go:309] [mark-control-plane] Marking the node multinode-894400 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0318 12:47:27.361068   11340 command_runner.go:130] > [mark-control-plane] Marking the node multinode-894400 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0318 12:47:27.879792   11340 command_runner.go:130] > [bootstrap-token] Using token: tg5zki.v6efz5o7xdz8lg1d
	I0318 12:47:27.879792   11340 kubeadm.go:309] [bootstrap-token] Using token: tg5zki.v6efz5o7xdz8lg1d
	I0318 12:47:27.884601   11340 out.go:204]   - Configuring RBAC rules ...
	I0318 12:47:27.884865   11340 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0318 12:47:27.884933   11340 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0318 12:47:27.898443   11340 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0318 12:47:27.898505   11340 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0318 12:47:27.913585   11340 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0318 12:47:27.913585   11340 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0318 12:47:27.923029   11340 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0318 12:47:27.923131   11340 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0318 12:47:27.930681   11340 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0318 12:47:27.930681   11340 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0318 12:47:27.939314   11340 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0318 12:47:27.939605   11340 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0318 12:47:27.971501   11340 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0318 12:47:27.971501   11340 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0318 12:47:28.318660   11340 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0318 12:47:28.318660   11340 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0318 12:47:28.379064   11340 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0318 12:47:28.379132   11340 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0318 12:47:28.380480   11340 kubeadm.go:309] 
	I0318 12:47:28.380902   11340 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I0318 12:47:28.380902   11340 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0318 12:47:28.380965   11340 kubeadm.go:309] 
	I0318 12:47:28.381209   11340 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0318 12:47:28.381267   11340 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I0318 12:47:28.381324   11340 kubeadm.go:309] 
	I0318 12:47:28.381379   11340 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0318 12:47:28.381433   11340 command_runner.go:130] >   mkdir -p $HOME/.kube
	I0318 12:47:28.381549   11340 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0318 12:47:28.381608   11340 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0318 12:47:28.381769   11340 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0318 12:47:28.381769   11340 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0318 12:47:28.381769   11340 kubeadm.go:309] 
	I0318 12:47:28.382090   11340 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I0318 12:47:28.382090   11340 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0318 12:47:28.382149   11340 kubeadm.go:309] 
	I0318 12:47:28.382312   11340 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0318 12:47:28.382312   11340 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0318 12:47:28.382421   11340 kubeadm.go:309] 
	I0318 12:47:28.382527   11340 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0318 12:47:28.382527   11340 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I0318 12:47:28.382738   11340 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0318 12:47:28.382738   11340 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0318 12:47:28.382841   11340 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0318 12:47:28.382841   11340 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0318 12:47:28.382949   11340 kubeadm.go:309] 
	I0318 12:47:28.383052   11340 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0318 12:47:28.383052   11340 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I0318 12:47:28.383263   11340 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I0318 12:47:28.383263   11340 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0318 12:47:28.383263   11340 kubeadm.go:309] 
	I0318 12:47:28.383470   11340 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token tg5zki.v6efz5o7xdz8lg1d \
	I0318 12:47:28.383572   11340 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token tg5zki.v6efz5o7xdz8lg1d \
	I0318 12:47:28.383774   11340 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:0e85bffcabbe1ecc87b47edfa4897ded335ea6d7a4ea5ae94c3523fb207c8760 \
	I0318 12:47:28.383774   11340 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:0e85bffcabbe1ecc87b47edfa4897ded335ea6d7a4ea5ae94c3523fb207c8760 \
	I0318 12:47:28.383875   11340 command_runner.go:130] > 	--control-plane 
	I0318 12:47:28.383875   11340 kubeadm.go:309] 	--control-plane 
	I0318 12:47:28.383875   11340 kubeadm.go:309] 
	I0318 12:47:28.384080   11340 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I0318 12:47:28.384080   11340 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0318 12:47:28.384080   11340 kubeadm.go:309] 
	I0318 12:47:28.384318   11340 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token tg5zki.v6efz5o7xdz8lg1d \
	I0318 12:47:28.384318   11340 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token tg5zki.v6efz5o7xdz8lg1d \
	I0318 12:47:28.384520   11340 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:0e85bffcabbe1ecc87b47edfa4897ded335ea6d7a4ea5ae94c3523fb207c8760 
	I0318 12:47:28.384623   11340 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:0e85bffcabbe1ecc87b47edfa4897ded335ea6d7a4ea5ae94c3523fb207c8760 
	I0318 12:47:28.388637   11340 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0318 12:47:28.388637   11340 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0318 12:47:28.388637   11340 cni.go:84] Creating CNI manager for ""
	I0318 12:47:28.388637   11340 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0318 12:47:28.391243   11340 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0318 12:47:28.407624   11340 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0318 12:47:28.417631   11340 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0318 12:47:28.417631   11340 command_runner.go:130] >   Size: 2694104   	Blocks: 5264       IO Block: 4096   regular file
	I0318 12:47:28.417631   11340 command_runner.go:130] > Device: 0,17	Inode: 3497        Links: 1
	I0318 12:47:28.417631   11340 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0318 12:47:28.417631   11340 command_runner.go:130] > Access: 2024-03-18 12:45:37.469109000 +0000
	I0318 12:47:28.417631   11340 command_runner.go:130] > Modify: 2024-03-15 22:00:10.000000000 +0000
	I0318 12:47:28.417631   11340 command_runner.go:130] > Change: 2024-03-18 12:45:29.058000000 +0000
	I0318 12:47:28.417631   11340 command_runner.go:130] >  Birth: -
	I0318 12:47:28.417631   11340 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0318 12:47:28.417631   11340 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0318 12:47:28.492796   11340 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0318 12:47:29.871346   11340 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I0318 12:47:29.871402   11340 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I0318 12:47:29.871402   11340 command_runner.go:130] > serviceaccount/kindnet created
	I0318 12:47:29.871454   11340 command_runner.go:130] > daemonset.apps/kindnet created
	I0318 12:47:29.871454   11340 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.3786476s)
	I0318 12:47:29.871454   11340 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0318 12:47:29.885895   11340 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-894400 minikube.k8s.io/updated_at=2024_03_18T12_47_29_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=16bdcbec856cf730004e5bed78d1b7625f13388a minikube.k8s.io/name=multinode-894400 minikube.k8s.io/primary=true
	I0318 12:47:29.886060   11340 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 12:47:29.893910   11340 command_runner.go:130] > -16
	I0318 12:47:29.894768   11340 ops.go:34] apiserver oom_adj: -16
	I0318 12:47:30.051148   11340 command_runner.go:130] > node/multinode-894400 labeled
	I0318 12:47:30.080688   11340 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I0318 12:47:30.090706   11340 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 12:47:30.195679   11340 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0318 12:47:30.601767   11340 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 12:47:30.727335   11340 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0318 12:47:31.104187   11340 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 12:47:31.218387   11340 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0318 12:47:31.593546   11340 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 12:47:31.712161   11340 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0318 12:47:32.100301   11340 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 12:47:32.223551   11340 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0318 12:47:32.596027   11340 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 12:47:32.707806   11340 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0318 12:47:33.105748   11340 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 12:47:33.214564   11340 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0318 12:47:33.604932   11340 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 12:47:33.717024   11340 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0318 12:47:34.105504   11340 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 12:47:34.209584   11340 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0318 12:47:34.594133   11340 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 12:47:34.718704   11340 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0318 12:47:35.096345   11340 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 12:47:35.199032   11340 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0318 12:47:35.596725   11340 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 12:47:35.702743   11340 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0318 12:47:36.104049   11340 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 12:47:36.206113   11340 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0318 12:47:36.599581   11340 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 12:47:36.718835   11340 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0318 12:47:37.106178   11340 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 12:47:37.210984   11340 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0318 12:47:37.593308   11340 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 12:47:37.714823   11340 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0318 12:47:38.098483   11340 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 12:47:38.225937   11340 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0318 12:47:38.597877   11340 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 12:47:38.734456   11340 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0318 12:47:39.103641   11340 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 12:47:39.221003   11340 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0318 12:47:39.606016   11340 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 12:47:39.716174   11340 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0318 12:47:40.097569   11340 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 12:47:40.217389   11340 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0318 12:47:40.605463   11340 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 12:47:40.726237   11340 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0318 12:47:41.099368   11340 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 12:47:41.221055   11340 command_runner.go:130] > NAME      SECRETS   AGE
	I0318 12:47:41.221148   11340 command_runner.go:130] > default   0         0s
	I0318 12:47:41.221148   11340 kubeadm.go:1107] duration metric: took 11.3496086s to wait for elevateKubeSystemPrivileges
	W0318 12:47:41.221291   11340 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0318 12:47:41.221330   11340 kubeadm.go:393] duration metric: took 25.2070559s to StartCluster
	I0318 12:47:41.221367   11340 settings.go:142] acquiring lock: {Name:mke99fb8c09012609ce6804e7dfd4d68f5541df7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 12:47:41.221673   11340 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	I0318 12:47:41.223316   11340 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\kubeconfig: {Name:mk966a7640504e03827322930a51a762b5508893 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 12:47:41.224434   11340 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0318 12:47:41.225080   11340 start.go:234] Will wait 6m0s for node &{Name: IP:172.30.129.141 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 12:47:41.228508   11340 out.go:177] * Verifying Kubernetes components...
	I0318 12:47:41.225151   11340 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0318 12:47:41.225882   11340 config.go:182] Loaded profile config "multinode-894400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 12:47:41.228829   11340 addons.go:69] Setting storage-provisioner=true in profile "multinode-894400"
	I0318 12:47:41.228829   11340 addons.go:69] Setting default-storageclass=true in profile "multinode-894400"
	I0318 12:47:41.232552   11340 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-894400"
	I0318 12:47:41.232552   11340 addons.go:234] Setting addon storage-provisioner=true in "multinode-894400"
	I0318 12:47:41.232552   11340 host.go:66] Checking if "multinode-894400" exists ...
	I0318 12:47:41.233554   11340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-894400 ).state
	I0318 12:47:41.233554   11340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-894400 ).state
	I0318 12:47:41.245568   11340 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 12:47:41.507279   11340 command_runner.go:130] > apiVersion: v1
	I0318 12:47:41.507738   11340 command_runner.go:130] > data:
	I0318 12:47:41.507738   11340 command_runner.go:130] >   Corefile: |
	I0318 12:47:41.507738   11340 command_runner.go:130] >     .:53 {
	I0318 12:47:41.507738   11340 command_runner.go:130] >         errors
	I0318 12:47:41.507738   11340 command_runner.go:130] >         health {
	I0318 12:47:41.507738   11340 command_runner.go:130] >            lameduck 5s
	I0318 12:47:41.507738   11340 command_runner.go:130] >         }
	I0318 12:47:41.507738   11340 command_runner.go:130] >         ready
	I0318 12:47:41.507738   11340 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0318 12:47:41.507738   11340 command_runner.go:130] >            pods insecure
	I0318 12:47:41.507738   11340 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0318 12:47:41.507738   11340 command_runner.go:130] >            ttl 30
	I0318 12:47:41.507738   11340 command_runner.go:130] >         }
	I0318 12:47:41.507901   11340 command_runner.go:130] >         prometheus :9153
	I0318 12:47:41.507901   11340 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0318 12:47:41.507947   11340 command_runner.go:130] >            max_concurrent 1000
	I0318 12:47:41.507947   11340 command_runner.go:130] >         }
	I0318 12:47:41.507947   11340 command_runner.go:130] >         cache 30
	I0318 12:47:41.507947   11340 command_runner.go:130] >         loop
	I0318 12:47:41.507947   11340 command_runner.go:130] >         reload
	I0318 12:47:41.507947   11340 command_runner.go:130] >         loadbalance
	I0318 12:47:41.507947   11340 command_runner.go:130] >     }
	I0318 12:47:41.507947   11340 command_runner.go:130] > kind: ConfigMap
	I0318 12:47:41.507947   11340 command_runner.go:130] > metadata:
	I0318 12:47:41.507947   11340 command_runner.go:130] >   creationTimestamp: "2024-03-18T12:47:28Z"
	I0318 12:47:41.508049   11340 command_runner.go:130] >   name: coredns
	I0318 12:47:41.508049   11340 command_runner.go:130] >   namespace: kube-system
	I0318 12:47:41.508049   11340 command_runner.go:130] >   resourceVersion: "256"
	I0318 12:47:41.508049   11340 command_runner.go:130] >   uid: 770f95bf-f6ed-4ebf-9804-378c14e67e66
	I0318 12:47:41.513896   11340 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.30.128.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0318 12:47:41.688166   11340 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 12:47:42.387702   11340 command_runner.go:130] > configmap/coredns replaced
	I0318 12:47:42.388740   11340 start.go:948] {"host.minikube.internal": 172.30.128.1} host record injected into CoreDNS's ConfigMap
	I0318 12:47:42.389519   11340 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	I0318 12:47:42.390221   11340 kapi.go:59] client config for multinode-894400: &rest.Config{Host:"https://172.30.129.141:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\multinode-894400\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\multinode-894400\\client.key", CAFile:"C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1e8b2e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0318 12:47:42.391713   11340 cert_rotation.go:137] Starting client certificate rotation controller
	I0318 12:47:42.391713   11340 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	I0318 12:47:42.392299   11340 round_trippers.go:463] GET https://172.30.129.141:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0318 12:47:42.392343   11340 round_trippers.go:469] Request Headers:
	I0318 12:47:42.392385   11340 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:47:42.392487   11340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:47:42.393244   11340 kapi.go:59] client config for multinode-894400: &rest.Config{Host:"https://172.30.129.141:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\multinode-894400\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\multinode-894400\\client.key", CAFile:"C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1e8b2e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0318 12:47:42.394143   11340 node_ready.go:35] waiting up to 6m0s for node "multinode-894400" to be "Ready" ...
	I0318 12:47:42.394143   11340 round_trippers.go:463] GET https://172.30.129.141:8443/api/v1/nodes/multinode-894400
	I0318 12:47:42.394143   11340 round_trippers.go:469] Request Headers:
	I0318 12:47:42.394143   11340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:47:42.394143   11340 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:47:42.411403   11340 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I0318 12:47:42.411882   11340 round_trippers.go:577] Response Headers:
	I0318 12:47:42.411882   11340 round_trippers.go:580]     Audit-Id: 0d6781d5-2fa5-495f-8c11-becb4220089c
	I0318 12:47:42.411882   11340 round_trippers.go:574] Response Status: 200 OK in 19 milliseconds
	I0318 12:47:42.411981   11340 round_trippers.go:577] Response Headers:
	I0318 12:47:42.411882   11340 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:47:42.411981   11340 round_trippers.go:580]     Content-Type: application/json
	I0318 12:47:42.411981   11340 round_trippers.go:580]     Content-Length: 291
	I0318 12:47:42.411981   11340 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:47:42 GMT
	I0318 12:47:42.411981   11340 round_trippers.go:580]     Audit-Id: 66afdedc-a748-4614-a656-1f00b02a950f
	I0318 12:47:42.411981   11340 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 12:47:42.411981   11340 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 12:47:42.411981   11340 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:47:42 GMT
	I0318 12:47:42.411981   11340 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:47:42.411981   11340 round_trippers.go:580]     Content-Type: application/json
	I0318 12:47:42.411981   11340 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 12:47:42.411981   11340 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 12:47:42.411981   11340 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"b95ee81f-b09d-45f9-b91e-607bfbb9e95d","resourceVersion":"380","creationTimestamp":"2024-03-18T12:47:28Z"},"spec":{"replicas":2},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0318 12:47:42.411981   11340 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"344","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0318 12:47:42.413355   11340 request.go:1212] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"b95ee81f-b09d-45f9-b91e-607bfbb9e95d","resourceVersion":"380","creationTimestamp":"2024-03-18T12:47:28Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0318 12:47:42.413355   11340 round_trippers.go:463] PUT https://172.30.129.141:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0318 12:47:42.413355   11340 round_trippers.go:469] Request Headers:
	I0318 12:47:42.413355   11340 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:47:42.413355   11340 round_trippers.go:473]     Content-Type: application/json
	I0318 12:47:42.413891   11340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:47:42.431799   11340 round_trippers.go:574] Response Status: 200 OK in 17 milliseconds
	I0318 12:47:42.431799   11340 round_trippers.go:577] Response Headers:
	I0318 12:47:42.431799   11340 round_trippers.go:580]     Content-Type: application/json
	I0318 12:47:42.431799   11340 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 12:47:42.431799   11340 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 12:47:42.431799   11340 round_trippers.go:580]     Content-Length: 291
	I0318 12:47:42.431799   11340 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:47:42 GMT
	I0318 12:47:42.431799   11340 round_trippers.go:580]     Audit-Id: 26cbd855-4492-4185-9bb5-e88fc3ce4eaf
	I0318 12:47:42.431799   11340 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:47:42.431799   11340 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"b95ee81f-b09d-45f9-b91e-607bfbb9e95d","resourceVersion":"382","creationTimestamp":"2024-03-18T12:47:28Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0318 12:47:42.895069   11340 round_trippers.go:463] GET https://172.30.129.141:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0318 12:47:42.895069   11340 round_trippers.go:469] Request Headers:
	I0318 12:47:42.895069   11340 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:47:42.895069   11340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:47:42.895069   11340 round_trippers.go:463] GET https://172.30.129.141:8443/api/v1/nodes/multinode-894400
	I0318 12:47:42.895069   11340 round_trippers.go:469] Request Headers:
	I0318 12:47:42.895069   11340 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:47:42.895069   11340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:47:42.909091   11340 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0318 12:47:42.909091   11340 round_trippers.go:577] Response Headers:
	I0318 12:47:42.909091   11340 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:47:42 GMT
	I0318 12:47:42.909091   11340 round_trippers.go:580]     Audit-Id: 7d8c6b9a-4a4c-46e4-9a94-5f7adf4ec0ca
	I0318 12:47:42.909091   11340 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:47:42.909091   11340 round_trippers.go:580]     Content-Type: application/json
	I0318 12:47:42.909091   11340 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 12:47:42.909091   11340 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 12:47:42.909091   11340 round_trippers.go:580]     Content-Length: 291
	I0318 12:47:42.909997   11340 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"b95ee81f-b09d-45f9-b91e-607bfbb9e95d","resourceVersion":"393","creationTimestamp":"2024-03-18T12:47:28Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0318 12:47:42.909997   11340 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0318 12:47:42.909997   11340 round_trippers.go:577] Response Headers:
	I0318 12:47:42.910086   11340 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 12:47:42.910086   11340 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:47:42 GMT
	I0318 12:47:42.910086   11340 round_trippers.go:580]     Audit-Id: e7eed070-b68f-42c9-970c-801913054103
	I0318 12:47:42.910086   11340 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:47:42.910086   11340 round_trippers.go:580]     Content-Type: application/json
	I0318 12:47:42.910086   11340 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 12:47:42.910199   11340 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-894400" context rescaled to 1 replicas
	I0318 12:47:42.911806   11340 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"344","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0318 12:47:43.402914   11340 round_trippers.go:463] GET https://172.30.129.141:8443/api/v1/nodes/multinode-894400
	I0318 12:47:43.402978   11340 round_trippers.go:469] Request Headers:
	I0318 12:47:43.402978   11340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:47:43.402978   11340 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:47:43.406940   11340 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:47:43.406940   11340 round_trippers.go:577] Response Headers:
	I0318 12:47:43.406940   11340 round_trippers.go:580]     Content-Type: application/json
	I0318 12:47:43.406940   11340 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 12:47:43.406940   11340 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 12:47:43.406940   11340 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:47:43 GMT
	I0318 12:47:43.406940   11340 round_trippers.go:580]     Audit-Id: 96a48de1-e932-456d-9d0c-2fd28ecbe196
	I0318 12:47:43.406940   11340 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:47:43.406940   11340 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"344","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0318 12:47:43.469618   11340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 12:47:43.469899   11340 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:47:43.470959   11340 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	I0318 12:47:43.471586   11340 kapi.go:59] client config for multinode-894400: &rest.Config{Host:"https://172.30.129.141:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\multinode-894400\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\multinode-894400\\client.key", CAFile:"C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CA
Data:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1e8b2e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0318 12:47:43.472861   11340 addons.go:234] Setting addon default-storageclass=true in "multinode-894400"
	I0318 12:47:43.472861   11340 host.go:66] Checking if "multinode-894400" exists ...
	I0318 12:47:43.472861   11340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 12:47:43.472861   11340 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:47:43.476788   11340 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 12:47:43.474092   11340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-894400 ).state
	I0318 12:47:43.479663   11340 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0318 12:47:43.479772   11340 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0318 12:47:43.479884   11340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-894400 ).state
	I0318 12:47:43.908171   11340 round_trippers.go:463] GET https://172.30.129.141:8443/api/v1/nodes/multinode-894400
	I0318 12:47:43.908251   11340 round_trippers.go:469] Request Headers:
	I0318 12:47:43.908251   11340 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:47:43.908251   11340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:47:43.912623   11340 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:47:43.913068   11340 round_trippers.go:577] Response Headers:
	I0318 12:47:43.913068   11340 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 12:47:43.913068   11340 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 12:47:43.913068   11340 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:47:43 GMT
	I0318 12:47:43.913165   11340 round_trippers.go:580]     Audit-Id: ddd4eeac-4044-4bd2-adf5-2bc4c5c02776
	I0318 12:47:43.913165   11340 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:47:43.913165   11340 round_trippers.go:580]     Content-Type: application/json
	I0318 12:47:43.914221   11340 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"344","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0318 12:47:44.398629   11340 round_trippers.go:463] GET https://172.30.129.141:8443/api/v1/nodes/multinode-894400
	I0318 12:47:44.398629   11340 round_trippers.go:469] Request Headers:
	I0318 12:47:44.398629   11340 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:47:44.398629   11340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:47:44.407218   11340 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0318 12:47:44.408190   11340 round_trippers.go:577] Response Headers:
	I0318 12:47:44.408190   11340 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:47:44 GMT
	I0318 12:47:44.408190   11340 round_trippers.go:580]     Audit-Id: def7aa78-662a-4621-aeab-b2d38b0f8020
	I0318 12:47:44.408190   11340 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:47:44.408190   11340 round_trippers.go:580]     Content-Type: application/json
	I0318 12:47:44.408190   11340 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 12:47:44.408251   11340 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 12:47:44.408593   11340 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"344","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0318 12:47:44.409587   11340 node_ready.go:53] node "multinode-894400" has status "Ready":"False"
	I0318 12:47:44.906455   11340 round_trippers.go:463] GET https://172.30.129.141:8443/api/v1/nodes/multinode-894400
	I0318 12:47:44.906455   11340 round_trippers.go:469] Request Headers:
	I0318 12:47:44.906455   11340 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:47:44.906455   11340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:47:44.910469   11340 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:47:44.910469   11340 round_trippers.go:577] Response Headers:
	I0318 12:47:44.910469   11340 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:47:44 GMT
	I0318 12:47:44.910469   11340 round_trippers.go:580]     Audit-Id: 47fe5601-8a61-4058-87c8-5d252aa9cf4d
	I0318 12:47:44.910469   11340 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:47:44.910469   11340 round_trippers.go:580]     Content-Type: application/json
	I0318 12:47:44.910469   11340 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 12:47:44.910469   11340 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 12:47:44.910469   11340 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"344","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0318 12:47:45.400878   11340 round_trippers.go:463] GET https://172.30.129.141:8443/api/v1/nodes/multinode-894400
	I0318 12:47:45.400978   11340 round_trippers.go:469] Request Headers:
	I0318 12:47:45.400978   11340 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:47:45.400978   11340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:47:45.405660   11340 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:47:45.406443   11340 round_trippers.go:577] Response Headers:
	I0318 12:47:45.406443   11340 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 12:47:45.406443   11340 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:47:45 GMT
	I0318 12:47:45.406443   11340 round_trippers.go:580]     Audit-Id: 58b0c2f7-a71d-43b4-a7dc-9c1cd53e03e5
	I0318 12:47:45.406443   11340 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:47:45.406443   11340 round_trippers.go:580]     Content-Type: application/json
	I0318 12:47:45.406443   11340 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 12:47:45.406973   11340 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"344","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0318 12:47:45.749998   11340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 12:47:45.749998   11340 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:47:45.749998   11340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-894400 ).networkadapters[0]).ipaddresses[0]
	I0318 12:47:45.777421   11340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 12:47:45.778153   11340 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:47:45.778332   11340 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0318 12:47:45.778332   11340 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0318 12:47:45.778332   11340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-894400 ).state
	I0318 12:47:45.905345   11340 round_trippers.go:463] GET https://172.30.129.141:8443/api/v1/nodes/multinode-894400
	I0318 12:47:45.905455   11340 round_trippers.go:469] Request Headers:
	I0318 12:47:45.905455   11340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:47:45.905455   11340 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:47:45.910283   11340 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:47:45.910558   11340 round_trippers.go:577] Response Headers:
	I0318 12:47:45.910558   11340 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:47:45 GMT
	I0318 12:47:45.910640   11340 round_trippers.go:580]     Audit-Id: 2de4fa45-c983-4f1d-93e0-698e44820145
	I0318 12:47:45.910640   11340 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:47:45.910640   11340 round_trippers.go:580]     Content-Type: application/json
	I0318 12:47:45.910640   11340 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 12:47:45.910640   11340 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 12:47:45.910950   11340 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"344","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0318 12:47:46.395753   11340 round_trippers.go:463] GET https://172.30.129.141:8443/api/v1/nodes/multinode-894400
	I0318 12:47:46.395753   11340 round_trippers.go:469] Request Headers:
	I0318 12:47:46.395753   11340 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:47:46.395753   11340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:47:46.399640   11340 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:47:46.399640   11340 round_trippers.go:577] Response Headers:
	I0318 12:47:46.399640   11340 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 12:47:46.399640   11340 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 12:47:46.399640   11340 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:47:46 GMT
	I0318 12:47:46.399640   11340 round_trippers.go:580]     Audit-Id: 5808e946-94c7-4a0f-ac3b-6f7f9349a2b7
	I0318 12:47:46.399640   11340 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:47:46.399640   11340 round_trippers.go:580]     Content-Type: application/json
	I0318 12:47:46.399640   11340 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"344","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0318 12:47:46.901124   11340 round_trippers.go:463] GET https://172.30.129.141:8443/api/v1/nodes/multinode-894400
	I0318 12:47:46.901203   11340 round_trippers.go:469] Request Headers:
	I0318 12:47:46.901203   11340 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:47:46.901203   11340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:47:46.904642   11340 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:47:46.905032   11340 round_trippers.go:577] Response Headers:
	I0318 12:47:46.905032   11340 round_trippers.go:580]     Audit-Id: 0831f622-6ece-4622-b94c-81b69b74b7f6
	I0318 12:47:46.905032   11340 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:47:46.905032   11340 round_trippers.go:580]     Content-Type: application/json
	I0318 12:47:46.905032   11340 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 12:47:46.905032   11340 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 12:47:46.905032   11340 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:47:46 GMT
	I0318 12:47:46.905423   11340 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"344","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0318 12:47:46.905980   11340 node_ready.go:53] node "multinode-894400" has status "Ready":"False"
	I0318 12:47:47.395833   11340 round_trippers.go:463] GET https://172.30.129.141:8443/api/v1/nodes/multinode-894400
	I0318 12:47:47.395900   11340 round_trippers.go:469] Request Headers:
	I0318 12:47:47.395900   11340 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:47:47.395900   11340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:47:47.401866   11340 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 12:47:47.401866   11340 round_trippers.go:577] Response Headers:
	I0318 12:47:47.401866   11340 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 12:47:47.401866   11340 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 12:47:47.401866   11340 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:47:47 GMT
	I0318 12:47:47.401866   11340 round_trippers.go:580]     Audit-Id: f50d5f9a-a481-423a-bfd2-827cc4149a98
	I0318 12:47:47.401866   11340 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:47:47.401866   11340 round_trippers.go:580]     Content-Type: application/json
	I0318 12:47:47.401866   11340 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"344","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0318 12:47:47.902815   11340 round_trippers.go:463] GET https://172.30.129.141:8443/api/v1/nodes/multinode-894400
	I0318 12:47:47.903116   11340 round_trippers.go:469] Request Headers:
	I0318 12:47:47.903116   11340 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:47:47.903116   11340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:47:47.906532   11340 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:47:47.906532   11340 round_trippers.go:577] Response Headers:
	I0318 12:47:47.906532   11340 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 12:47:47.906955   11340 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:47:47 GMT
	I0318 12:47:47.906955   11340 round_trippers.go:580]     Audit-Id: f66ff870-931f-46b3-ace0-290c9a2e7747
	I0318 12:47:47.906955   11340 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:47:47.906955   11340 round_trippers.go:580]     Content-Type: application/json
	I0318 12:47:47.906955   11340 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 12:47:47.907152   11340 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"344","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0318 12:47:48.000005   11340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 12:47:48.000578   11340 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:47:48.000578   11340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-894400 ).networkadapters[0]).ipaddresses[0]
	I0318 12:47:48.408568   11340 round_trippers.go:463] GET https://172.30.129.141:8443/api/v1/nodes/multinode-894400
	I0318 12:47:48.408568   11340 round_trippers.go:469] Request Headers:
	I0318 12:47:48.408568   11340 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:47:48.408568   11340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:47:48.413634   11340 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 12:47:48.413634   11340 round_trippers.go:577] Response Headers:
	I0318 12:47:48.413634   11340 round_trippers.go:580]     Audit-Id: 3d5ffb3c-1da1-4455-91cc-14fdedb6b2c9
	I0318 12:47:48.413634   11340 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:47:48.413634   11340 round_trippers.go:580]     Content-Type: application/json
	I0318 12:47:48.413634   11340 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 12:47:48.413634   11340 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 12:47:48.413634   11340 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:47:48 GMT
	I0318 12:47:48.414434   11340 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"344","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0318 12:47:48.453003   11340 main.go:141] libmachine: [stdout =====>] : 172.30.129.141
	
	I0318 12:47:48.453104   11340 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:47:48.453390   11340 sshutil.go:53] new ssh client: &{IP:172.30.129.141 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\multinode-894400\id_rsa Username:docker}
	I0318 12:47:48.600885   11340 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0318 12:47:48.897680   11340 round_trippers.go:463] GET https://172.30.129.141:8443/api/v1/nodes/multinode-894400
	I0318 12:47:48.897680   11340 round_trippers.go:469] Request Headers:
	I0318 12:47:48.897680   11340 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:47:48.897680   11340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:47:48.901275   11340 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:47:48.901275   11340 round_trippers.go:577] Response Headers:
	I0318 12:47:48.901275   11340 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 12:47:48.901275   11340 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 12:47:48.901275   11340 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:47:48 GMT
	I0318 12:47:48.901275   11340 round_trippers.go:580]     Audit-Id: fadcb527-70dc-4507-811c-12f15c333266
	I0318 12:47:48.901275   11340 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:47:48.901275   11340 round_trippers.go:580]     Content-Type: application/json
	I0318 12:47:48.901275   11340 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"344","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0318 12:47:49.358039   11340 command_runner.go:130] > serviceaccount/storage-provisioner created
	I0318 12:47:49.358039   11340 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I0318 12:47:49.358225   11340 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0318 12:47:49.358225   11340 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0318 12:47:49.358225   11340 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I0318 12:47:49.358225   11340 command_runner.go:130] > pod/storage-provisioner created
	I0318 12:47:49.407592   11340 round_trippers.go:463] GET https://172.30.129.141:8443/api/v1/nodes/multinode-894400
	I0318 12:47:49.407741   11340 round_trippers.go:469] Request Headers:
	I0318 12:47:49.407741   11340 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:47:49.407741   11340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:47:49.410328   11340 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 12:47:49.411120   11340 round_trippers.go:577] Response Headers:
	I0318 12:47:49.411120   11340 round_trippers.go:580]     Audit-Id: 742f9f45-d6c6-4d9a-96ea-85fe3920abff
	I0318 12:47:49.411120   11340 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:47:49.411120   11340 round_trippers.go:580]     Content-Type: application/json
	I0318 12:47:49.411120   11340 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 12:47:49.411120   11340 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 12:47:49.411120   11340 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:47:49 GMT
	I0318 12:47:49.411382   11340 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"344","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0318 12:47:49.411684   11340 node_ready.go:53] node "multinode-894400" has status "Ready":"False"
	I0318 12:47:49.901693   11340 round_trippers.go:463] GET https://172.30.129.141:8443/api/v1/nodes/multinode-894400
	I0318 12:47:49.901693   11340 round_trippers.go:469] Request Headers:
	I0318 12:47:49.901693   11340 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:47:49.901693   11340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:47:49.905679   11340 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:47:49.905679   11340 round_trippers.go:577] Response Headers:
	I0318 12:47:49.905765   11340 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 12:47:49.905765   11340 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 12:47:49.905765   11340 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:47:49 GMT
	I0318 12:47:49.905765   11340 round_trippers.go:580]     Audit-Id: b8241874-0c94-4eee-b97a-b44ff5b3fe1a
	I0318 12:47:49.905765   11340 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:47:49.905765   11340 round_trippers.go:580]     Content-Type: application/json
	I0318 12:47:49.905844   11340 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"344","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0318 12:47:50.408058   11340 round_trippers.go:463] GET https://172.30.129.141:8443/api/v1/nodes/multinode-894400
	I0318 12:47:50.408130   11340 round_trippers.go:469] Request Headers:
	I0318 12:47:50.408130   11340 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:47:50.408130   11340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:47:50.411798   11340 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:47:50.411910   11340 round_trippers.go:577] Response Headers:
	I0318 12:47:50.411910   11340 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:47:50 GMT
	I0318 12:47:50.411910   11340 round_trippers.go:580]     Audit-Id: 3bd76936-9bd7-47bc-b9c2-21c8f3644baf
	I0318 12:47:50.412016   11340 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:47:50.412016   11340 round_trippers.go:580]     Content-Type: application/json
	I0318 12:47:50.412016   11340 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 12:47:50.412016   11340 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 12:47:50.412371   11340 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"344","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0318 12:47:50.542499   11340 main.go:141] libmachine: [stdout =====>] : 172.30.129.141
	
	I0318 12:47:50.542807   11340 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:47:50.542956   11340 sshutil.go:53] new ssh client: &{IP:172.30.129.141 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\multinode-894400\id_rsa Username:docker}
	I0318 12:47:50.669806   11340 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0318 12:47:50.904572   11340 round_trippers.go:463] GET https://172.30.129.141:8443/api/v1/nodes/multinode-894400
	I0318 12:47:50.904572   11340 round_trippers.go:469] Request Headers:
	I0318 12:47:50.904572   11340 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:47:50.904572   11340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:47:50.907611   11340 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I0318 12:47:50.907822   11340 round_trippers.go:463] GET https://172.30.129.141:8443/apis/storage.k8s.io/v1/storageclasses
	I0318 12:47:50.907861   11340 round_trippers.go:469] Request Headers:
	I0318 12:47:50.907897   11340 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:47:50.907897   11340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:47:50.911984   11340 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0318 12:47:50.911984   11340 round_trippers.go:577] Response Headers:
	I0318 12:47:50.911984   11340 round_trippers.go:580]     Audit-Id: abb71f9d-baf1-40fa-9e29-c06107e1a415
	I0318 12:47:50.911984   11340 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:47:50.911984   11340 round_trippers.go:580]     Content-Type: application/json
	I0318 12:47:50.911984   11340 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 12:47:50.911984   11340 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 12:47:50.912699   11340 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:47:50 GMT
	I0318 12:47:50.912873   11340 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"344","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0318 12:47:50.913113   11340 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 12:47:50.913113   11340 round_trippers.go:577] Response Headers:
	I0318 12:47:50.913113   11340 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:47:50 GMT
	I0318 12:47:50.913113   11340 round_trippers.go:580]     Audit-Id: 2aed5679-3816-45ef-813a-31e735577153
	I0318 12:47:50.913113   11340 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:47:50.913113   11340 round_trippers.go:580]     Content-Type: application/json
	I0318 12:47:50.913113   11340 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 12:47:50.913113   11340 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 12:47:50.913113   11340 round_trippers.go:580]     Content-Length: 1273
	I0318 12:47:50.913113   11340 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"416"},"items":[{"metadata":{"name":"standard","uid":"c9f381c7-d04a-421d-9f37-07f926545836","resourceVersion":"416","creationTimestamp":"2024-03-18T12:47:50Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-03-18T12:47:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kuberne
tes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I0318 12:47:50.914044   11340 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"c9f381c7-d04a-421d-9f37-07f926545836","resourceVersion":"416","creationTimestamp":"2024-03-18T12:47:50Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-03-18T12:47:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclas
s.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0318 12:47:50.914142   11340 round_trippers.go:463] PUT https://172.30.129.141:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0318 12:47:50.914142   11340 round_trippers.go:469] Request Headers:
	I0318 12:47:50.914142   11340 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:47:50.914142   11340 round_trippers.go:473]     Content-Type: application/json
	I0318 12:47:50.914142   11340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:47:50.916640   11340 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 12:47:50.916640   11340 round_trippers.go:577] Response Headers:
	I0318 12:47:50.916640   11340 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 12:47:50.917701   11340 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 12:47:50.917701   11340 round_trippers.go:580]     Content-Length: 1220
	I0318 12:47:50.917701   11340 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:47:50 GMT
	I0318 12:47:50.917701   11340 round_trippers.go:580]     Audit-Id: fc38b140-cb1f-4fc3-ad40-4f51e11e62a3
	I0318 12:47:50.917701   11340 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:47:50.917745   11340 round_trippers.go:580]     Content-Type: application/json
	I0318 12:47:50.917915   11340 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"c9f381c7-d04a-421d-9f37-07f926545836","resourceVersion":"416","creationTimestamp":"2024-03-18T12:47:50Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-03-18T12:47:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storagecla
ss.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0318 12:47:50.920198   11340 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0318 12:47:50.925935   11340 addons.go:505] duration metric: took 9.700783s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0318 12:47:51.394407   11340 round_trippers.go:463] GET https://172.30.129.141:8443/api/v1/nodes/multinode-894400
	I0318 12:47:51.394407   11340 round_trippers.go:469] Request Headers:
	I0318 12:47:51.394407   11340 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:47:51.394407   11340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:47:51.397898   11340 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:47:51.397898   11340 round_trippers.go:577] Response Headers:
	I0318 12:47:51.397898   11340 round_trippers.go:580]     Content-Type: application/json
	I0318 12:47:51.397898   11340 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 12:47:51.397898   11340 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 12:47:51.397898   11340 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:47:51 GMT
	I0318 12:47:51.397898   11340 round_trippers.go:580]     Audit-Id: c197a0ef-78d3-4daf-96ee-a391395c4162
	I0318 12:47:51.397898   11340 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:47:51.398924   11340 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"344","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0318 12:47:51.908731   11340 round_trippers.go:463] GET https://172.30.129.141:8443/api/v1/nodes/multinode-894400
	I0318 12:47:51.908731   11340 round_trippers.go:469] Request Headers:
	I0318 12:47:51.908731   11340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:47:51.908731   11340 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:47:51.911559   11340 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 12:47:51.911559   11340 round_trippers.go:577] Response Headers:
	I0318 12:47:51.911559   11340 round_trippers.go:580]     Audit-Id: a6e5bc94-8060-49f7-aa49-e0371f1088bc
	I0318 12:47:51.911559   11340 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:47:51.911559   11340 round_trippers.go:580]     Content-Type: application/json
	I0318 12:47:51.911559   11340 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 12:47:51.911559   11340 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 12:47:51.912599   11340 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:47:51 GMT
	I0318 12:47:51.912794   11340 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"344","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0318 12:47:51.913021   11340 node_ready.go:53] node "multinode-894400" has status "Ready":"False"
	I0318 12:47:52.406542   11340 round_trippers.go:463] GET https://172.30.129.141:8443/api/v1/nodes/multinode-894400
	I0318 12:47:52.406542   11340 round_trippers.go:469] Request Headers:
	I0318 12:47:52.406542   11340 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:47:52.406542   11340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:47:52.410251   11340 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:47:52.410251   11340 round_trippers.go:577] Response Headers:
	I0318 12:47:52.410251   11340 round_trippers.go:580]     Content-Type: application/json
	I0318 12:47:52.410251   11340 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 12:47:52.410251   11340 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 12:47:52.410251   11340 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:47:52 GMT
	I0318 12:47:52.410251   11340 round_trippers.go:580]     Audit-Id: 9ac00b0b-4d20-436a-bd90-82dc51014053
	I0318 12:47:52.410251   11340 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:47:52.413208   11340 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"344","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0318 12:47:52.906060   11340 round_trippers.go:463] GET https://172.30.129.141:8443/api/v1/nodes/multinode-894400
	I0318 12:47:52.906060   11340 round_trippers.go:469] Request Headers:
	I0318 12:47:52.906060   11340 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:47:52.906060   11340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:47:52.914125   11340 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0318 12:47:52.915107   11340 round_trippers.go:577] Response Headers:
	I0318 12:47:52.915107   11340 round_trippers.go:580]     Audit-Id: afc81e10-58cc-4329-b845-6bee3efaafc0
	I0318 12:47:52.915107   11340 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:47:52.915107   11340 round_trippers.go:580]     Content-Type: application/json
	I0318 12:47:52.915107   11340 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 12:47:52.915107   11340 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 12:47:52.915107   11340 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:47:52 GMT
	I0318 12:47:52.915341   11340 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"418","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","fi [truncated 4855 chars]
	I0318 12:47:52.915619   11340 node_ready.go:49] node "multinode-894400" has status "Ready":"True"
	I0318 12:47:52.915619   11340 node_ready.go:38] duration metric: took 10.5213974s for node "multinode-894400" to be "Ready" ...
	I0318 12:47:52.915619   11340 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 12:47:52.915619   11340 round_trippers.go:463] GET https://172.30.129.141:8443/api/v1/namespaces/kube-system/pods
	I0318 12:47:52.915619   11340 round_trippers.go:469] Request Headers:
	I0318 12:47:52.915619   11340 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:47:52.915619   11340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:47:52.923998   11340 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0318 12:47:52.924066   11340 round_trippers.go:577] Response Headers:
	I0318 12:47:52.924066   11340 round_trippers.go:580]     Audit-Id: daea5a6a-55ea-4ab3-9c8a-6a092a3a553d
	I0318 12:47:52.924066   11340 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:47:52.924066   11340 round_trippers.go:580]     Content-Type: application/json
	I0318 12:47:52.924066   11340 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 12:47:52.924066   11340 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 12:47:52.924066   11340 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:47:52 GMT
	I0318 12:47:52.926636   11340 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"418"},"items":[{"metadata":{"name":"coredns-5dd5756b68-456tm","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"1a018c55-846b-4dc2-992c-dc8fd82a6c67","resourceVersion":"377","creationTimestamp":"2024-03-18T12:47:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"24e9a67e-3fda-47e7-8dcb-794b1920d0f1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:47:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"24e9a67e-3fda-47e7-8dcb-794b1920d0f1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 51411 chars]
	I0318 12:47:52.930857   11340 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-456tm" in "kube-system" namespace to be "Ready" ...
	I0318 12:47:52.930857   11340 round_trippers.go:463] GET https://172.30.129.141:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-456tm
	I0318 12:47:52.930857   11340 round_trippers.go:469] Request Headers:
	I0318 12:47:52.930857   11340 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:47:52.930857   11340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:47:52.944259   11340 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0318 12:47:52.944636   11340 round_trippers.go:577] Response Headers:
	I0318 12:47:52.944636   11340 round_trippers.go:580]     Audit-Id: da235f64-0550-4f76-b8f0-0859671d7557
	I0318 12:47:52.944636   11340 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:47:52.944636   11340 round_trippers.go:580]     Content-Type: application/json
	I0318 12:47:52.944694   11340 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 12:47:52.944694   11340 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 12:47:52.944694   11340 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:47:52 GMT
	I0318 12:47:52.944694   11340 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-456tm","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"1a018c55-846b-4dc2-992c-dc8fd82a6c67","resourceVersion":"377","creationTimestamp":"2024-03-18T12:47:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"24e9a67e-3fda-47e7-8dcb-794b1920d0f1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:47:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"24e9a67e-3fda-47e7-8dcb-794b1920d0f1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 4943 chars]
	I0318 12:47:53.440335   11340 round_trippers.go:463] GET https://172.30.129.141:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-456tm
	I0318 12:47:53.440571   11340 round_trippers.go:469] Request Headers:
	I0318 12:47:53.440692   11340 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:47:53.440692   11340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:47:53.445215   11340 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:47:53.445215   11340 round_trippers.go:577] Response Headers:
	I0318 12:47:53.445215   11340 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:47:53 GMT
	I0318 12:47:53.445215   11340 round_trippers.go:580]     Audit-Id: 8d951354-f168-4cd7-ab2c-f633ddf2b505
	I0318 12:47:53.445215   11340 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:47:53.445215   11340 round_trippers.go:580]     Content-Type: application/json
	I0318 12:47:53.445215   11340 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 12:47:53.445215   11340 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 12:47:53.446228   11340 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-456tm","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"1a018c55-846b-4dc2-992c-dc8fd82a6c67","resourceVersion":"424","creationTimestamp":"2024-03-18T12:47:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"24e9a67e-3fda-47e7-8dcb-794b1920d0f1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:47:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"24e9a67e-3fda-47e7-8dcb-794b1920d0f1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6152 chars]
	I0318 12:47:53.447026   11340 round_trippers.go:463] GET https://172.30.129.141:8443/api/v1/nodes/multinode-894400
	I0318 12:47:53.447147   11340 round_trippers.go:469] Request Headers:
	I0318 12:47:53.447147   11340 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:47:53.447147   11340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:47:53.449332   11340 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 12:47:53.449332   11340 round_trippers.go:577] Response Headers:
	I0318 12:47:53.449332   11340 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:47:53.449332   11340 round_trippers.go:580]     Content-Type: application/json
	I0318 12:47:53.449332   11340 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 12:47:53.450359   11340 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 12:47:53.450359   11340 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:47:53 GMT
	I0318 12:47:53.450359   11340 round_trippers.go:580]     Audit-Id: 5a6a48a9-c210-448e-9bc3-19fa8e68d245
	I0318 12:47:53.450716   11340 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"419","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I0318 12:47:53.932395   11340 round_trippers.go:463] GET https://172.30.129.141:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-456tm
	I0318 12:47:53.932571   11340 round_trippers.go:469] Request Headers:
	I0318 12:47:53.932625   11340 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:47:53.932625   11340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:47:53.939062   11340 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0318 12:47:53.939062   11340 round_trippers.go:577] Response Headers:
	I0318 12:47:53.939062   11340 round_trippers.go:580]     Audit-Id: 7e8a6b87-a454-4073-a75d-444eac09018e
	I0318 12:47:53.939062   11340 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:47:53.939062   11340 round_trippers.go:580]     Content-Type: application/json
	I0318 12:47:53.939062   11340 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 12:47:53.939858   11340 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 12:47:53.939858   11340 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:47:53 GMT
	I0318 12:47:53.940554   11340 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-456tm","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"1a018c55-846b-4dc2-992c-dc8fd82a6c67","resourceVersion":"424","creationTimestamp":"2024-03-18T12:47:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"24e9a67e-3fda-47e7-8dcb-794b1920d0f1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:47:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"24e9a67e-3fda-47e7-8dcb-794b1920d0f1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6152 chars]
	I0318 12:47:53.941422   11340 round_trippers.go:463] GET https://172.30.129.141:8443/api/v1/nodes/multinode-894400
	I0318 12:47:53.941481   11340 round_trippers.go:469] Request Headers:
	I0318 12:47:53.941530   11340 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:47:53.941530   11340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:47:53.943675   11340 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 12:47:53.943675   11340 round_trippers.go:577] Response Headers:
	I0318 12:47:53.944449   11340 round_trippers.go:580]     Audit-Id: aa1f256f-ba2d-4db0-b8c0-81e7c6eb32ce
	I0318 12:47:53.944449   11340 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:47:53.944449   11340 round_trippers.go:580]     Content-Type: application/json
	I0318 12:47:53.944449   11340 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 12:47:53.944449   11340 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 12:47:53.944449   11340 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:47:53 GMT
	I0318 12:47:53.944544   11340 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"419","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I0318 12:47:54.438574   11340 round_trippers.go:463] GET https://172.30.129.141:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-456tm
	I0318 12:47:54.438574   11340 round_trippers.go:469] Request Headers:
	I0318 12:47:54.438663   11340 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:47:54.438663   11340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:47:54.441730   11340 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:47:54.441730   11340 round_trippers.go:577] Response Headers:
	I0318 12:47:54.441730   11340 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:47:54 GMT
	I0318 12:47:54.441730   11340 round_trippers.go:580]     Audit-Id: 4a22633c-5696-4963-af47-bca329da9361
	I0318 12:47:54.441730   11340 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:47:54.441730   11340 round_trippers.go:580]     Content-Type: application/json
	I0318 12:47:54.441730   11340 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 12:47:54.441730   11340 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 12:47:54.441730   11340 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-456tm","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"1a018c55-846b-4dc2-992c-dc8fd82a6c67","resourceVersion":"434","creationTimestamp":"2024-03-18T12:47:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"24e9a67e-3fda-47e7-8dcb-794b1920d0f1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:47:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"24e9a67e-3fda-47e7-8dcb-794b1920d0f1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6513 chars]
	I0318 12:47:54.443259   11340 round_trippers.go:463] GET https://172.30.129.141:8443/api/v1/nodes/multinode-894400
	I0318 12:47:54.443259   11340 round_trippers.go:469] Request Headers:
	I0318 12:47:54.443378   11340 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:47:54.443378   11340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:47:54.446521   11340 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:47:54.446521   11340 round_trippers.go:577] Response Headers:
	I0318 12:47:54.446521   11340 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:47:54.446521   11340 round_trippers.go:580]     Content-Type: application/json
	I0318 12:47:54.446521   11340 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 12:47:54.446521   11340 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 12:47:54.446521   11340 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:47:54 GMT
	I0318 12:47:54.446521   11340 round_trippers.go:580]     Audit-Id: 73f059b6-79c6-45c2-bced-0cb13447a9a9
	I0318 12:47:54.446521   11340 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"419","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I0318 12:47:54.939396   11340 round_trippers.go:463] GET https://172.30.129.141:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-456tm
	I0318 12:47:54.939396   11340 round_trippers.go:469] Request Headers:
	I0318 12:47:54.939396   11340 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:47:54.939396   11340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:47:54.943007   11340 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:47:54.943329   11340 round_trippers.go:577] Response Headers:
	I0318 12:47:54.943329   11340 round_trippers.go:580]     Audit-Id: 52355184-35ce-49d5-b8a6-1c4af5d9f1f6
	I0318 12:47:54.943329   11340 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:47:54.943329   11340 round_trippers.go:580]     Content-Type: application/json
	I0318 12:47:54.943329   11340 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 12:47:54.943329   11340 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 12:47:54.943329   11340 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:47:54 GMT
	I0318 12:47:54.943680   11340 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-456tm","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"1a018c55-846b-4dc2-992c-dc8fd82a6c67","resourceVersion":"434","creationTimestamp":"2024-03-18T12:47:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"24e9a67e-3fda-47e7-8dcb-794b1920d0f1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:47:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"24e9a67e-3fda-47e7-8dcb-794b1920d0f1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6513 chars]
	I0318 12:47:54.944481   11340 round_trippers.go:463] GET https://172.30.129.141:8443/api/v1/nodes/multinode-894400
	I0318 12:47:54.944557   11340 round_trippers.go:469] Request Headers:
	I0318 12:47:54.944557   11340 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:47:54.944557   11340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:47:54.947926   11340 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:47:54.947926   11340 round_trippers.go:577] Response Headers:
	I0318 12:47:54.947926   11340 round_trippers.go:580]     Audit-Id: 95be5410-f0c9-4bf5-bf48-4bc9329d6e8d
	I0318 12:47:54.947926   11340 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:47:54.947926   11340 round_trippers.go:580]     Content-Type: application/json
	I0318 12:47:54.947926   11340 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 12:47:54.947926   11340 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 12:47:54.947926   11340 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:47:54 GMT
	I0318 12:47:54.947926   11340 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"419","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I0318 12:47:54.948720   11340 pod_ready.go:102] pod "coredns-5dd5756b68-456tm" in "kube-system" namespace has status "Ready":"False"
	I0318 12:47:55.444697   11340 round_trippers.go:463] GET https://172.30.129.141:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-456tm
	I0318 12:47:55.444697   11340 round_trippers.go:469] Request Headers:
	I0318 12:47:55.444697   11340 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:47:55.444851   11340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:47:55.447101   11340 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 12:47:55.448140   11340 round_trippers.go:577] Response Headers:
	I0318 12:47:55.448140   11340 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:47:55.448140   11340 round_trippers.go:580]     Content-Type: application/json
	I0318 12:47:55.448140   11340 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 12:47:55.448140   11340 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 12:47:55.448215   11340 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:47:55 GMT
	I0318 12:47:55.448215   11340 round_trippers.go:580]     Audit-Id: 42f0f700-6fee-4970-a430-402e12090bc2
	I0318 12:47:55.448269   11340 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-456tm","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"1a018c55-846b-4dc2-992c-dc8fd82a6c67","resourceVersion":"439","creationTimestamp":"2024-03-18T12:47:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"24e9a67e-3fda-47e7-8dcb-794b1920d0f1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:47:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"24e9a67e-3fda-47e7-8dcb-794b1920d0f1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6284 chars]
	I0318 12:47:55.449193   11340 round_trippers.go:463] GET https://172.30.129.141:8443/api/v1/nodes/multinode-894400
	I0318 12:47:55.449263   11340 round_trippers.go:469] Request Headers:
	I0318 12:47:55.449263   11340 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:47:55.449263   11340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:47:55.451547   11340 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 12:47:55.452453   11340 round_trippers.go:577] Response Headers:
	I0318 12:47:55.452453   11340 round_trippers.go:580]     Audit-Id: 07424851-2263-4ff5-9fc8-2d2d8431b4ed
	I0318 12:47:55.452453   11340 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:47:55.452453   11340 round_trippers.go:580]     Content-Type: application/json
	I0318 12:47:55.452453   11340 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 12:47:55.452578   11340 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 12:47:55.452603   11340 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:47:55 GMT
	I0318 12:47:55.452743   11340 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"419","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I0318 12:47:55.452916   11340 pod_ready.go:92] pod "coredns-5dd5756b68-456tm" in "kube-system" namespace has status "Ready":"True"
	I0318 12:47:55.452916   11340 pod_ready.go:81] duration metric: took 2.5220407s for pod "coredns-5dd5756b68-456tm" in "kube-system" namespace to be "Ready" ...
	I0318 12:47:55.452916   11340 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-894400" in "kube-system" namespace to be "Ready" ...
	I0318 12:47:55.453771   11340 round_trippers.go:463] GET https://172.30.129.141:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-894400
	I0318 12:47:55.453829   11340 round_trippers.go:469] Request Headers:
	I0318 12:47:55.453829   11340 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:47:55.453859   11340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:47:55.457531   11340 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:47:55.457531   11340 round_trippers.go:577] Response Headers:
	I0318 12:47:55.457531   11340 round_trippers.go:580]     Audit-Id: f0c36f68-0b54-4772-aaa7-2533ff76afb2
	I0318 12:47:55.457531   11340 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:47:55.457531   11340 round_trippers.go:580]     Content-Type: application/json
	I0318 12:47:55.457531   11340 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 12:47:55.457531   11340 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 12:47:55.457531   11340 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:47:55 GMT
	I0318 12:47:55.457531   11340 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-894400","namespace":"kube-system","uid":"672a85d9-7526-4870-a33a-eac509ef3c3f","resourceVersion":"293","creationTimestamp":"2024-03-18T12:47:26Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.30.129.141:2379","kubernetes.io/config.hash":"c396fd459c503d2e9464c73cc841d3d8","kubernetes.io/config.mirror":"c396fd459c503d2e9464c73cc841d3d8","kubernetes.io/config.seen":"2024-03-18T12:47:20.228465690Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:47:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 5872 chars]
	I0318 12:47:55.458263   11340 round_trippers.go:463] GET https://172.30.129.141:8443/api/v1/nodes/multinode-894400
	I0318 12:47:55.458263   11340 round_trippers.go:469] Request Headers:
	I0318 12:47:55.458263   11340 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:47:55.458263   11340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:47:55.463410   11340 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 12:47:55.463410   11340 round_trippers.go:577] Response Headers:
	I0318 12:47:55.463410   11340 round_trippers.go:580]     Content-Type: application/json
	I0318 12:47:55.463410   11340 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 12:47:55.463410   11340 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 12:47:55.463410   11340 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:47:55 GMT
	I0318 12:47:55.463410   11340 round_trippers.go:580]     Audit-Id: 17a8f66b-0607-4e36-a191-b5bab4700175
	I0318 12:47:55.463410   11340 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:47:55.463410   11340 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"419","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I0318 12:47:55.464058   11340 pod_ready.go:92] pod "etcd-multinode-894400" in "kube-system" namespace has status "Ready":"True"
	I0318 12:47:55.464058   11340 pod_ready.go:81] duration metric: took 11.1418ms for pod "etcd-multinode-894400" in "kube-system" namespace to be "Ready" ...
	I0318 12:47:55.464058   11340 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-894400" in "kube-system" namespace to be "Ready" ...
	I0318 12:47:55.464058   11340 round_trippers.go:463] GET https://172.30.129.141:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-894400
	I0318 12:47:55.464058   11340 round_trippers.go:469] Request Headers:
	I0318 12:47:55.464058   11340 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:47:55.464058   11340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:47:55.466970   11340 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 12:47:55.466970   11340 round_trippers.go:577] Response Headers:
	I0318 12:47:55.466970   11340 round_trippers.go:580]     Audit-Id: f844910a-3d4e-4659-8dcf-63706e301011
	I0318 12:47:55.466970   11340 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:47:55.466970   11340 round_trippers.go:580]     Content-Type: application/json
	I0318 12:47:55.466970   11340 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 12:47:55.466970   11340 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 12:47:55.466970   11340 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:47:55 GMT
	I0318 12:47:55.466970   11340 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-894400","namespace":"kube-system","uid":"62aca0ea-36b0-4841-9616-61448f45e04a","resourceVersion":"314","creationTimestamp":"2024-03-18T12:47:25Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.30.129.141:8443","kubernetes.io/config.hash":"decc1d942b4d81359bb79c0349ffe9bb","kubernetes.io/config.mirror":"decc1d942b4d81359bb79c0349ffe9bb","kubernetes.io/config.seen":"2024-03-18T12:47:20.228466989Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:47:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7408 chars]
	I0318 12:47:55.468483   11340 round_trippers.go:463] GET https://172.30.129.141:8443/api/v1/nodes/multinode-894400
	I0318 12:47:55.468510   11340 round_trippers.go:469] Request Headers:
	I0318 12:47:55.468510   11340 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:47:55.468555   11340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:47:55.469798   11340 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0318 12:47:55.470833   11340 round_trippers.go:577] Response Headers:
	I0318 12:47:55.470833   11340 round_trippers.go:580]     Audit-Id: abcb5deb-fec5-4c95-afd8-975e41bf26b9
	I0318 12:47:55.470833   11340 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:47:55.470833   11340 round_trippers.go:580]     Content-Type: application/json
	I0318 12:47:55.470833   11340 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 12:47:55.470833   11340 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 12:47:55.470833   11340 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:47:55 GMT
	I0318 12:47:55.470833   11340 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"419","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I0318 12:47:55.470833   11340 pod_ready.go:92] pod "kube-apiserver-multinode-894400" in "kube-system" namespace has status "Ready":"True"
	I0318 12:47:55.470833   11340 pod_ready.go:81] duration metric: took 6.7749ms for pod "kube-apiserver-multinode-894400" in "kube-system" namespace to be "Ready" ...
	I0318 12:47:55.470833   11340 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-894400" in "kube-system" namespace to be "Ready" ...
	I0318 12:47:55.471471   11340 round_trippers.go:463] GET https://172.30.129.141:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-894400
	I0318 12:47:55.471596   11340 round_trippers.go:469] Request Headers:
	I0318 12:47:55.471596   11340 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:47:55.471643   11340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:47:55.474752   11340 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:47:55.474752   11340 round_trippers.go:577] Response Headers:
	I0318 12:47:55.474752   11340 round_trippers.go:580]     Audit-Id: ed22cc9c-7171-41d5-9253-83ebf5e24fff
	I0318 12:47:55.475057   11340 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:47:55.475057   11340 round_trippers.go:580]     Content-Type: application/json
	I0318 12:47:55.475057   11340 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 12:47:55.475057   11340 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 12:47:55.475057   11340 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:47:55 GMT
	I0318 12:47:55.475263   11340 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-894400","namespace":"kube-system","uid":"4ad5fc15-53ba-4ebb-9a63-b8572cd9c834","resourceVersion":"295","creationTimestamp":"2024-03-18T12:47:26Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"d340aced56ba169ecac1e3ac58ad57fe","kubernetes.io/config.mirror":"d340aced56ba169ecac1e3ac58ad57fe","kubernetes.io/config.seen":"2024-03-18T12:47:20.228444892Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:47:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6973 chars]
	I0318 12:47:55.475930   11340 round_trippers.go:463] GET https://172.30.129.141:8443/api/v1/nodes/multinode-894400
	I0318 12:47:55.475930   11340 round_trippers.go:469] Request Headers:
	I0318 12:47:55.475930   11340 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:47:55.475930   11340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:47:55.478969   11340 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:47:55.478969   11340 round_trippers.go:577] Response Headers:
	I0318 12:47:55.478969   11340 round_trippers.go:580]     Audit-Id: fc7fc90e-e619-4d04-adaa-4930a3c3a81a
	I0318 12:47:55.478969   11340 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:47:55.478969   11340 round_trippers.go:580]     Content-Type: application/json
	I0318 12:47:55.478969   11340 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 12:47:55.479163   11340 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 12:47:55.479163   11340 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:47:55 GMT
	I0318 12:47:55.479371   11340 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"419","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I0318 12:47:55.479784   11340 pod_ready.go:92] pod "kube-controller-manager-multinode-894400" in "kube-system" namespace has status "Ready":"True"
	I0318 12:47:55.479816   11340 pod_ready.go:81] duration metric: took 8.9514ms for pod "kube-controller-manager-multinode-894400" in "kube-system" namespace to be "Ready" ...
	I0318 12:47:55.479816   11340 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-mc5tv" in "kube-system" namespace to be "Ready" ...
	I0318 12:47:55.479856   11340 round_trippers.go:463] GET https://172.30.129.141:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mc5tv
	I0318 12:47:55.480002   11340 round_trippers.go:469] Request Headers:
	I0318 12:47:55.480002   11340 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:47:55.480002   11340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:47:55.482412   11340 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 12:47:55.482412   11340 round_trippers.go:577] Response Headers:
	I0318 12:47:55.482412   11340 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 12:47:55.482412   11340 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 12:47:55.482412   11340 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:47:55 GMT
	I0318 12:47:55.482412   11340 round_trippers.go:580]     Audit-Id: 7122a400-2101-4523-95a0-676ea679e909
	I0318 12:47:55.482412   11340 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:47:55.482412   11340 round_trippers.go:580]     Content-Type: application/json
	I0318 12:47:55.483142   11340 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-mc5tv","generateName":"kube-proxy-","namespace":"kube-system","uid":"0afe25f8-cbd6-412b-8698-7b547d1d49ca","resourceVersion":"398","creationTimestamp":"2024-03-18T12:47:41Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"da714af0-88aa-4fc4-b73d-4de837f06114","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:47:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"da714af0-88aa-4fc4-b73d-4de837f06114\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5537 chars]
	I0318 12:47:55.483671   11340 round_trippers.go:463] GET https://172.30.129.141:8443/api/v1/nodes/multinode-894400
	I0318 12:47:55.483710   11340 round_trippers.go:469] Request Headers:
	I0318 12:47:55.483710   11340 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:47:55.483710   11340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:47:55.486327   11340 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 12:47:55.486327   11340 round_trippers.go:577] Response Headers:
	I0318 12:47:55.486327   11340 round_trippers.go:580]     Audit-Id: 5ad11955-371e-4c1e-bfb7-fdc7e0089f8f
	I0318 12:47:55.486327   11340 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:47:55.486327   11340 round_trippers.go:580]     Content-Type: application/json
	I0318 12:47:55.486327   11340 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 12:47:55.486327   11340 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 12:47:55.486603   11340 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:47:55 GMT
	I0318 12:47:55.487104   11340 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"419","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I0318 12:47:55.487495   11340 pod_ready.go:92] pod "kube-proxy-mc5tv" in "kube-system" namespace has status "Ready":"True"
	I0318 12:47:55.487495   11340 pod_ready.go:81] duration metric: took 7.6788ms for pod "kube-proxy-mc5tv" in "kube-system" namespace to be "Ready" ...
	I0318 12:47:55.487495   11340 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-894400" in "kube-system" namespace to be "Ready" ...
	I0318 12:47:55.648236   11340 request.go:629] Waited for 160.5645ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.129.141:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-894400
	I0318 12:47:55.648514   11340 round_trippers.go:463] GET https://172.30.129.141:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-894400
	I0318 12:47:55.648514   11340 round_trippers.go:469] Request Headers:
	I0318 12:47:55.648514   11340 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:47:55.648514   11340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:47:55.655226   11340 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0318 12:47:55.655226   11340 round_trippers.go:577] Response Headers:
	I0318 12:47:55.655226   11340 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 12:47:55.655226   11340 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:47:55 GMT
	I0318 12:47:55.655226   11340 round_trippers.go:580]     Audit-Id: 8b556b74-4fa5-481a-a5da-d8638241d80d
	I0318 12:47:55.655226   11340 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:47:55.655226   11340 round_trippers.go:580]     Content-Type: application/json
	I0318 12:47:55.655226   11340 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 12:47:55.655770   11340 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-894400","namespace":"kube-system","uid":"f47703ce-5a82-466e-ac8e-ef6b8cc07e6c","resourceVersion":"320","creationTimestamp":"2024-03-18T12:47:28Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"1c745e9b917877b1ff3c90ed02e9a79a","kubernetes.io/config.mirror":"1c745e9b917877b1ff3c90ed02e9a79a","kubernetes.io/config.seen":"2024-03-18T12:47:28.428225123Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:47:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4703 chars]
	I0318 12:47:55.850978   11340 request.go:629] Waited for 194.72ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.129.141:8443/api/v1/nodes/multinode-894400
	I0318 12:47:55.850978   11340 round_trippers.go:463] GET https://172.30.129.141:8443/api/v1/nodes/multinode-894400
	I0318 12:47:55.851140   11340 round_trippers.go:469] Request Headers:
	I0318 12:47:55.851140   11340 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:47:55.851140   11340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:47:55.854421   11340 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:47:55.855432   11340 round_trippers.go:577] Response Headers:
	I0318 12:47:55.855468   11340 round_trippers.go:580]     Audit-Id: 2308f6a6-c71b-43e0-a665-697248e1f35c
	I0318 12:47:55.855468   11340 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:47:55.855468   11340 round_trippers.go:580]     Content-Type: application/json
	I0318 12:47:55.855468   11340 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 12:47:55.855468   11340 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 12:47:55.855468   11340 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:47:55 GMT
	I0318 12:47:55.855652   11340 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"419","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I0318 12:47:55.856001   11340 pod_ready.go:92] pod "kube-scheduler-multinode-894400" in "kube-system" namespace has status "Ready":"True"
	I0318 12:47:55.856332   11340 pod_ready.go:81] duration metric: took 368.8339ms for pod "kube-scheduler-multinode-894400" in "kube-system" namespace to be "Ready" ...
	I0318 12:47:55.856332   11340 pod_ready.go:38] duration metric: took 2.9406908s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 12:47:55.856332   11340 api_server.go:52] waiting for apiserver process to appear ...
	I0318 12:47:55.867306   11340 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 12:47:55.894248   11340 command_runner.go:130] > 2193
	I0318 12:47:55.894303   11340 api_server.go:72] duration metric: took 14.6690421s to wait for apiserver process to appear ...
	I0318 12:47:55.894303   11340 api_server.go:88] waiting for apiserver healthz status ...
	I0318 12:47:55.894349   11340 api_server.go:253] Checking apiserver healthz at https://172.30.129.141:8443/healthz ...
	I0318 12:47:55.902360   11340 api_server.go:279] https://172.30.129.141:8443/healthz returned 200:
	ok
	I0318 12:47:55.902614   11340 round_trippers.go:463] GET https://172.30.129.141:8443/version
	I0318 12:47:55.902614   11340 round_trippers.go:469] Request Headers:
	I0318 12:47:55.902614   11340 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:47:55.902614   11340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:47:55.904221   11340 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0318 12:47:55.904221   11340 round_trippers.go:577] Response Headers:
	I0318 12:47:55.904221   11340 round_trippers.go:580]     Content-Type: application/json
	I0318 12:47:55.904221   11340 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 12:47:55.904221   11340 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 12:47:55.904221   11340 round_trippers.go:580]     Content-Length: 264
	I0318 12:47:55.904221   11340 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:47:55 GMT
	I0318 12:47:55.904434   11340 round_trippers.go:580]     Audit-Id: 01313015-69c5-4c93-b959-cedea83219ca
	I0318 12:47:55.904434   11340 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:47:55.904532   11340 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.4",
	  "gitCommit": "bae2c62678db2b5053817bc97181fcc2e8388103",
	  "gitTreeState": "clean",
	  "buildDate": "2023-11-15T16:48:54Z",
	  "goVersion": "go1.20.11",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0318 12:47:55.904755   11340 api_server.go:141] control plane version: v1.28.4
	I0318 12:47:55.904822   11340 api_server.go:131] duration metric: took 10.4517ms to wait for apiserver health ...
	I0318 12:47:55.904822   11340 system_pods.go:43] waiting for kube-system pods to appear ...
	I0318 12:47:56.052116   11340 request.go:629] Waited for 147.1902ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.129.141:8443/api/v1/namespaces/kube-system/pods
	I0318 12:47:56.052241   11340 round_trippers.go:463] GET https://172.30.129.141:8443/api/v1/namespaces/kube-system/pods
	I0318 12:47:56.052241   11340 round_trippers.go:469] Request Headers:
	I0318 12:47:56.052241   11340 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:47:56.052241   11340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:47:56.056877   11340 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:47:56.056877   11340 round_trippers.go:577] Response Headers:
	I0318 12:47:56.056877   11340 round_trippers.go:580]     Audit-Id: c011e08a-9ee8-4c1c-9051-ddf970fe701a
	I0318 12:47:56.056877   11340 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:47:56.056877   11340 round_trippers.go:580]     Content-Type: application/json
	I0318 12:47:56.056877   11340 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 12:47:56.056877   11340 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 12:47:56.057338   11340 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:47:56 GMT
	I0318 12:47:56.058358   11340 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"444"},"items":[{"metadata":{"name":"coredns-5dd5756b68-456tm","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"1a018c55-846b-4dc2-992c-dc8fd82a6c67","resourceVersion":"439","creationTimestamp":"2024-03-18T12:47:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"24e9a67e-3fda-47e7-8dcb-794b1920d0f1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:47:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"24e9a67e-3fda-47e7-8dcb-794b1920d0f1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 54128 chars]
	I0318 12:47:56.060668   11340 system_pods.go:59] 8 kube-system pods found
	I0318 12:47:56.060725   11340 system_pods.go:61] "coredns-5dd5756b68-456tm" [1a018c55-846b-4dc2-992c-dc8fd82a6c67] Running
	I0318 12:47:56.060725   11340 system_pods.go:61] "etcd-multinode-894400" [672a85d9-7526-4870-a33a-eac509ef3c3f] Running
	I0318 12:47:56.060794   11340 system_pods.go:61] "kindnet-hhsxh" [0161d239-2d85-4246-b2fa-6c7374f2ecd6] Running
	I0318 12:47:56.060794   11340 system_pods.go:61] "kube-apiserver-multinode-894400" [62aca0ea-36b0-4841-9616-61448f45e04a] Running
	I0318 12:47:56.060794   11340 system_pods.go:61] "kube-controller-manager-multinode-894400" [4ad5fc15-53ba-4ebb-9a63-b8572cd9c834] Running
	I0318 12:47:56.060794   11340 system_pods.go:61] "kube-proxy-mc5tv" [0afe25f8-cbd6-412b-8698-7b547d1d49ca] Running
	I0318 12:47:56.060794   11340 system_pods.go:61] "kube-scheduler-multinode-894400" [f47703ce-5a82-466e-ac8e-ef6b8cc07e6c] Running
	I0318 12:47:56.060794   11340 system_pods.go:61] "storage-provisioner" [219bafbc-d807-44cf-9927-e4957f36ad70] Running
	I0318 12:47:56.060794   11340 system_pods.go:74] duration metric: took 155.9704ms to wait for pod list to return data ...
	I0318 12:47:56.060794   11340 default_sa.go:34] waiting for default service account to be created ...
	I0318 12:47:56.253918   11340 request.go:629] Waited for 193.1225ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.129.141:8443/api/v1/namespaces/default/serviceaccounts
	I0318 12:47:56.254259   11340 round_trippers.go:463] GET https://172.30.129.141:8443/api/v1/namespaces/default/serviceaccounts
	I0318 12:47:56.257370   11340 round_trippers.go:469] Request Headers:
	I0318 12:47:56.257472   11340 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:47:56.257472   11340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:47:56.263349   11340 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 12:47:56.263349   11340 round_trippers.go:577] Response Headers:
	I0318 12:47:56.263349   11340 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 12:47:56.263349   11340 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 12:47:56.263349   11340 round_trippers.go:580]     Content-Length: 261
	I0318 12:47:56.263349   11340 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:47:56 GMT
	I0318 12:47:56.263349   11340 round_trippers.go:580]     Audit-Id: 1b4dd316-c6e6-4f50-bcc5-b9253983abf3
	I0318 12:47:56.263349   11340 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:47:56.263349   11340 round_trippers.go:580]     Content-Type: application/json
	I0318 12:47:56.263349   11340 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"445"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"17315183-b28f-4dc0-9fbf-c6e55ed5b7f0","resourceVersion":"330","creationTimestamp":"2024-03-18T12:47:41Z"}}]}
	I0318 12:47:56.264063   11340 default_sa.go:45] found service account: "default"
	I0318 12:47:56.264063   11340 default_sa.go:55] duration metric: took 203.2675ms for default service account to be created ...
	I0318 12:47:56.264063   11340 system_pods.go:116] waiting for k8s-apps to be running ...
	I0318 12:47:56.455298   11340 request.go:629] Waited for 190.8379ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.129.141:8443/api/v1/namespaces/kube-system/pods
	I0318 12:47:56.455349   11340 round_trippers.go:463] GET https://172.30.129.141:8443/api/v1/namespaces/kube-system/pods
	I0318 12:47:56.455349   11340 round_trippers.go:469] Request Headers:
	I0318 12:47:56.455349   11340 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:47:56.455444   11340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:47:56.462002   11340 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0318 12:47:56.462002   11340 round_trippers.go:577] Response Headers:
	I0318 12:47:56.462002   11340 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:47:56.462002   11340 round_trippers.go:580]     Content-Type: application/json
	I0318 12:47:56.462002   11340 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 12:47:56.462002   11340 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 12:47:56.462549   11340 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:47:56 GMT
	I0318 12:47:56.462549   11340 round_trippers.go:580]     Audit-Id: bc1f5713-4cbb-44bd-8f76-139dad95c2d7
	I0318 12:47:56.463615   11340 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"445"},"items":[{"metadata":{"name":"coredns-5dd5756b68-456tm","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"1a018c55-846b-4dc2-992c-dc8fd82a6c67","resourceVersion":"439","creationTimestamp":"2024-03-18T12:47:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"24e9a67e-3fda-47e7-8dcb-794b1920d0f1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:47:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"24e9a67e-3fda-47e7-8dcb-794b1920d0f1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 54128 chars]
	I0318 12:47:56.466152   11340 system_pods.go:86] 8 kube-system pods found
	I0318 12:47:56.466265   11340 system_pods.go:89] "coredns-5dd5756b68-456tm" [1a018c55-846b-4dc2-992c-dc8fd82a6c67] Running
	I0318 12:47:56.466265   11340 system_pods.go:89] "etcd-multinode-894400" [672a85d9-7526-4870-a33a-eac509ef3c3f] Running
	I0318 12:47:56.466265   11340 system_pods.go:89] "kindnet-hhsxh" [0161d239-2d85-4246-b2fa-6c7374f2ecd6] Running
	I0318 12:47:56.466265   11340 system_pods.go:89] "kube-apiserver-multinode-894400" [62aca0ea-36b0-4841-9616-61448f45e04a] Running
	I0318 12:47:56.466382   11340 system_pods.go:89] "kube-controller-manager-multinode-894400" [4ad5fc15-53ba-4ebb-9a63-b8572cd9c834] Running
	I0318 12:47:56.466382   11340 system_pods.go:89] "kube-proxy-mc5tv" [0afe25f8-cbd6-412b-8698-7b547d1d49ca] Running
	I0318 12:47:56.466419   11340 system_pods.go:89] "kube-scheduler-multinode-894400" [f47703ce-5a82-466e-ac8e-ef6b8cc07e6c] Running
	I0318 12:47:56.466419   11340 system_pods.go:89] "storage-provisioner" [219bafbc-d807-44cf-9927-e4957f36ad70] Running
	I0318 12:47:56.466419   11340 system_pods.go:126] duration metric: took 202.3541ms to wait for k8s-apps to be running ...
	I0318 12:47:56.466492   11340 system_svc.go:44] waiting for kubelet service to be running ....
	I0318 12:47:56.477794   11340 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 12:47:56.502297   11340 system_svc.go:56] duration metric: took 35.8043ms WaitForService to wait for kubelet
	I0318 12:47:56.502297   11340 kubeadm.go:576] duration metric: took 15.2770314s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 12:47:56.502450   11340 node_conditions.go:102] verifying NodePressure condition ...
	I0318 12:47:56.656991   11340 request.go:629] Waited for 154.4839ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.129.141:8443/api/v1/nodes
	I0318 12:47:56.657083   11340 round_trippers.go:463] GET https://172.30.129.141:8443/api/v1/nodes
	I0318 12:47:56.657083   11340 round_trippers.go:469] Request Headers:
	I0318 12:47:56.657083   11340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:47:56.657083   11340 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:47:56.659810   11340 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 12:47:56.659810   11340 round_trippers.go:577] Response Headers:
	I0318 12:47:56.659810   11340 round_trippers.go:580]     Content-Type: application/json
	I0318 12:47:56.659810   11340 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 12:47:56.660062   11340 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 12:47:56.660062   11340 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:47:56 GMT
	I0318 12:47:56.660062   11340 round_trippers.go:580]     Audit-Id: 8fdd4b22-4424-4677-b8fe-b47dcbbde889
	I0318 12:47:56.660062   11340 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:47:56.660206   11340 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"446"},"items":[{"metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"419","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 4836 chars]
	I0318 12:47:56.660820   11340 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0318 12:47:56.660935   11340 node_conditions.go:123] node cpu capacity is 2
	I0318 12:47:56.660935   11340 node_conditions.go:105] duration metric: took 158.4842ms to run NodePressure ...
	I0318 12:47:56.660935   11340 start.go:240] waiting for startup goroutines ...
	I0318 12:47:56.660990   11340 start.go:245] waiting for cluster config update ...
	I0318 12:47:56.660990   11340 start.go:254] writing updated cluster config ...
	I0318 12:47:56.665037   11340 out.go:177] 
	I0318 12:47:56.667861   11340 config.go:182] Loaded profile config "ha-747000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 12:47:56.676510   11340 config.go:182] Loaded profile config "multinode-894400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 12:47:56.676510   11340 profile.go:142] Saving config to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-894400\config.json ...
	I0318 12:47:56.684448   11340 out.go:177] * Starting "multinode-894400-m02" worker node in "multinode-894400" cluster
	I0318 12:47:56.687416   11340 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0318 12:47:56.687416   11340 cache.go:56] Caching tarball of preloaded images
	I0318 12:47:56.687665   11340 preload.go:173] Found C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0318 12:47:56.687665   11340 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0318 12:47:56.688225   11340 profile.go:142] Saving config to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-894400\config.json ...
	I0318 12:47:56.689423   11340 start.go:360] acquireMachinesLock for multinode-894400-m02: {Name:mk88ace50ad3bf72786f3a589a5328076247f3a1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 12:47:56.690414   11340 start.go:364] duration metric: took 0s to acquireMachinesLock for "multinode-894400-m02"
	I0318 12:47:56.690414   11340 start.go:93] Provisioning new machine with config: &{Name:multinode-894400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{
KubernetesVersion:v1.28.4 ClusterName:multinode-894400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.30.129.141 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDi
sks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0318 12:47:56.690414   11340 start.go:125] createHost starting for "m02" (driver="hyperv")
	I0318 12:47:56.693412   11340 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0318 12:47:56.694452   11340 start.go:159] libmachine.API.Create for "multinode-894400" (driver="hyperv")
	I0318 12:47:56.694452   11340 client.go:168] LocalClient.Create starting
	I0318 12:47:56.694601   11340 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem
	I0318 12:47:56.694601   11340 main.go:141] libmachine: Decoding PEM data...
	I0318 12:47:56.695166   11340 main.go:141] libmachine: Parsing certificate...
	I0318 12:47:56.695308   11340 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem
	I0318 12:47:56.695455   11340 main.go:141] libmachine: Decoding PEM data...
	I0318 12:47:56.695455   11340 main.go:141] libmachine: Parsing certificate...
	I0318 12:47:56.695617   11340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0318 12:47:58.543194   11340 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0318 12:47:58.543194   11340 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:47:58.543194   11340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0318 12:48:00.242912   11340 main.go:141] libmachine: [stdout =====>] : False
	
	I0318 12:48:00.243546   11340 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:48:00.243546   11340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0318 12:48:01.680298   11340 main.go:141] libmachine: [stdout =====>] : True
	
	I0318 12:48:01.681319   11340 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:48:01.681367   11340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0318 12:48:05.205533   11340 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0318 12:48:05.205533   11340 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:48:05.208036   11340 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube3/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.32.1-1710520390-17991-amd64.iso...
	I0318 12:48:05.618836   11340 main.go:141] libmachine: Creating SSH key...
	I0318 12:48:05.711990   11340 main.go:141] libmachine: Creating VM...
	I0318 12:48:05.711990   11340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0318 12:48:08.559154   11340 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0318 12:48:08.559744   11340 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:48:08.559744   11340 main.go:141] libmachine: Using switch "Default Switch"
	I0318 12:48:08.559744   11340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0318 12:48:10.295059   11340 main.go:141] libmachine: [stdout =====>] : True
	
	I0318 12:48:10.295347   11340 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:48:10.295457   11340 main.go:141] libmachine: Creating VHD
	I0318 12:48:10.295457   11340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\multinode-894400-m02\fixed.vhd' -SizeBytes 10MB -Fixed
	I0318 12:48:13.912118   11340 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube3
	Path                    : C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\multinode-894400-m02\fixed
	                          .vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : D7122023-2A12-4052-B014-5B99CE43B5F1
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0318 12:48:13.912118   11340 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:48:13.912400   11340 main.go:141] libmachine: Writing magic tar header
	I0318 12:48:13.912464   11340 main.go:141] libmachine: Writing SSH key tar header
	I0318 12:48:13.921725   11340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\multinode-894400-m02\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\multinode-894400-m02\disk.vhd' -VHDType Dynamic -DeleteSource
	I0318 12:48:17.022714   11340 main.go:141] libmachine: [stdout =====>] : 
	I0318 12:48:17.023552   11340 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:48:17.023620   11340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\multinode-894400-m02\disk.vhd' -SizeBytes 20000MB
	I0318 12:48:19.518255   11340 main.go:141] libmachine: [stdout =====>] : 
	I0318 12:48:19.518808   11340 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:48:19.518906   11340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM multinode-894400-m02 -Path 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\multinode-894400-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0318 12:48:23.065278   11340 main.go:141] libmachine: [stdout =====>] : 
	Name                 State CPUUsage(%!)(MISSING) MemoryAssigned(M) Uptime   Status             Version
	----                 ----- ----------- ----------------- ------   ------             -------
	multinode-894400-m02 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0318 12:48:23.065278   11340 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:48:23.065358   11340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName multinode-894400-m02 -DynamicMemoryEnabled $false
	I0318 12:48:25.246325   11340 main.go:141] libmachine: [stdout =====>] : 
	I0318 12:48:25.246805   11340 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:48:25.246805   11340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor multinode-894400-m02 -Count 2
	I0318 12:48:27.378549   11340 main.go:141] libmachine: [stdout =====>] : 
	I0318 12:48:27.378549   11340 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:48:27.378549   11340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName multinode-894400-m02 -Path 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\multinode-894400-m02\boot2docker.iso'
	I0318 12:48:29.830628   11340 main.go:141] libmachine: [stdout =====>] : 
	I0318 12:48:29.830707   11340 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:48:29.830779   11340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName multinode-894400-m02 -Path 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\multinode-894400-m02\disk.vhd'
	I0318 12:48:32.366126   11340 main.go:141] libmachine: [stdout =====>] : 
	I0318 12:48:32.367037   11340 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:48:32.367037   11340 main.go:141] libmachine: Starting VM...
	I0318 12:48:32.367175   11340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-894400-m02
	I0318 12:48:35.359668   11340 main.go:141] libmachine: [stdout =====>] : 
	I0318 12:48:35.359668   11340 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:48:35.359668   11340 main.go:141] libmachine: Waiting for host to start...
	I0318 12:48:35.359668   11340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-894400-m02 ).state
	I0318 12:48:37.533134   11340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 12:48:37.533134   11340 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:48:37.533459   11340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-894400-m02 ).networkadapters[0]).ipaddresses[0]
	I0318 12:48:39.944241   11340 main.go:141] libmachine: [stdout =====>] : 
	I0318 12:48:39.944241   11340 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:48:40.956070   11340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-894400-m02 ).state
	I0318 12:48:43.097328   11340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 12:48:43.097328   11340 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:48:43.097503   11340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-894400-m02 ).networkadapters[0]).ipaddresses[0]
	I0318 12:48:45.613701   11340 main.go:141] libmachine: [stdout =====>] : 
	I0318 12:48:45.613701   11340 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:48:46.617067   11340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-894400-m02 ).state
	I0318 12:48:48.750852   11340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 12:48:48.750852   11340 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:48:48.750852   11340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-894400-m02 ).networkadapters[0]).ipaddresses[0]
	I0318 12:48:51.236254   11340 main.go:141] libmachine: [stdout =====>] : 
	I0318 12:48:51.237036   11340 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:48:52.249075   11340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-894400-m02 ).state
	I0318 12:48:54.409370   11340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 12:48:54.409370   11340 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:48:54.409370   11340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-894400-m02 ).networkadapters[0]).ipaddresses[0]
	I0318 12:48:56.860041   11340 main.go:141] libmachine: [stdout =====>] : 
	I0318 12:48:56.860041   11340 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:48:57.866434   11340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-894400-m02 ).state
	I0318 12:49:00.022689   11340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 12:49:00.023100   11340 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:49:00.023100   11340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-894400-m02 ).networkadapters[0]).ipaddresses[0]
	I0318 12:49:02.651484   11340 main.go:141] libmachine: [stdout =====>] : 172.30.140.66
	
	I0318 12:49:02.651582   11340 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:49:02.651582   11340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-894400-m02 ).state
	I0318 12:49:04.708982   11340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 12:49:04.708982   11340 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:49:04.708982   11340 machine.go:94] provisionDockerMachine start ...
	I0318 12:49:04.708982   11340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-894400-m02 ).state
	I0318 12:49:06.761684   11340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 12:49:06.761684   11340 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:49:06.761684   11340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-894400-m02 ).networkadapters[0]).ipaddresses[0]
	I0318 12:49:09.207377   11340 main.go:141] libmachine: [stdout =====>] : 172.30.140.66
	
	I0318 12:49:09.207466   11340 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:49:09.213366   11340 main.go:141] libmachine: Using SSH client type: native
	I0318 12:49:09.223872   11340 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa89f80] 0xa8cb60 <nil>  [] 0s} 172.30.140.66 22 <nil> <nil>}
	I0318 12:49:09.223872   11340 main.go:141] libmachine: About to run SSH command:
	hostname
	I0318 12:49:09.363638   11340 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0318 12:49:09.363638   11340 buildroot.go:166] provisioning hostname "multinode-894400-m02"
	I0318 12:49:09.364171   11340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-894400-m02 ).state
	I0318 12:49:11.440670   11340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 12:49:11.440670   11340 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:49:11.440670   11340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-894400-m02 ).networkadapters[0]).ipaddresses[0]
	I0318 12:49:13.957228   11340 main.go:141] libmachine: [stdout =====>] : 172.30.140.66
	
	I0318 12:49:13.957647   11340 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:49:13.963265   11340 main.go:141] libmachine: Using SSH client type: native
	I0318 12:49:13.963932   11340 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa89f80] 0xa8cb60 <nil>  [] 0s} 172.30.140.66 22 <nil> <nil>}
	I0318 12:49:13.963932   11340 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-894400-m02 && echo "multinode-894400-m02" | sudo tee /etc/hostname
	I0318 12:49:14.126049   11340 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-894400-m02
	
	I0318 12:49:14.126160   11340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-894400-m02 ).state
	I0318 12:49:16.249309   11340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 12:49:16.254860   11340 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:49:16.255797   11340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-894400-m02 ).networkadapters[0]).ipaddresses[0]
	I0318 12:49:18.734778   11340 main.go:141] libmachine: [stdout =====>] : 172.30.140.66
	
	I0318 12:49:18.735201   11340 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:49:18.740242   11340 main.go:141] libmachine: Using SSH client type: native
	I0318 12:49:18.740985   11340 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa89f80] 0xa8cb60 <nil>  [] 0s} 172.30.140.66 22 <nil> <nil>}
	I0318 12:49:18.740985   11340 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-894400-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-894400-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-894400-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0318 12:49:18.894893   11340 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0318 12:49:18.894893   11340 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube3\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube3\minikube-integration\.minikube}
	I0318 12:49:18.894893   11340 buildroot.go:174] setting up certificates
	I0318 12:49:18.894893   11340 provision.go:84] configureAuth start
	I0318 12:49:18.894893   11340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-894400-m02 ).state
	I0318 12:49:20.983479   11340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 12:49:20.983959   11340 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:49:20.984045   11340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-894400-m02 ).networkadapters[0]).ipaddresses[0]
	I0318 12:49:23.427846   11340 main.go:141] libmachine: [stdout =====>] : 172.30.140.66
	
	I0318 12:49:23.428033   11340 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:49:23.428119   11340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-894400-m02 ).state
	I0318 12:49:25.520508   11340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 12:49:25.520872   11340 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:49:25.520872   11340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-894400-m02 ).networkadapters[0]).ipaddresses[0]
	I0318 12:49:27.988990   11340 main.go:141] libmachine: [stdout =====>] : 172.30.140.66
	
	I0318 12:49:27.989761   11340 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:49:27.989761   11340 provision.go:143] copyHostCerts
	I0318 12:49:27.989761   11340 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube3\minikube-integration\.minikube/ca.pem
	I0318 12:49:27.989761   11340 exec_runner.go:144] found C:\Users\jenkins.minikube3\minikube-integration\.minikube/ca.pem, removing ...
	I0318 12:49:27.989761   11340 exec_runner.go:203] rm: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.pem
	I0318 12:49:27.990481   11340 exec_runner.go:151] cp: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube3\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0318 12:49:27.991734   11340 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube3\minikube-integration\.minikube/cert.pem
	I0318 12:49:27.991986   11340 exec_runner.go:144] found C:\Users\jenkins.minikube3\minikube-integration\.minikube/cert.pem, removing ...
	I0318 12:49:27.991986   11340 exec_runner.go:203] rm: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cert.pem
	I0318 12:49:27.991986   11340 exec_runner.go:151] cp: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube3\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0318 12:49:27.992754   11340 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube3\minikube-integration\.minikube/key.pem
	I0318 12:49:27.993364   11340 exec_runner.go:144] found C:\Users\jenkins.minikube3\minikube-integration\.minikube/key.pem, removing ...
	I0318 12:49:27.993364   11340 exec_runner.go:203] rm: C:\Users\jenkins.minikube3\minikube-integration\.minikube\key.pem
	I0318 12:49:27.993364   11340 exec_runner.go:151] cp: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube3\minikube-integration\.minikube/key.pem (1679 bytes)
	I0318 12:49:27.994706   11340 provision.go:117] generating server cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-894400-m02 san=[127.0.0.1 172.30.140.66 localhost minikube multinode-894400-m02]
	I0318 12:49:28.381994   11340 provision.go:177] copyRemoteCerts
	I0318 12:49:28.393520   11340 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0318 12:49:28.393520   11340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-894400-m02 ).state
	I0318 12:49:30.454654   11340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 12:49:30.454716   11340 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:49:30.454862   11340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-894400-m02 ).networkadapters[0]).ipaddresses[0]
	I0318 12:49:32.889825   11340 main.go:141] libmachine: [stdout =====>] : 172.30.140.66
	
	I0318 12:49:32.889825   11340 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:49:32.889825   11340 sshutil.go:53] new ssh client: &{IP:172.30.140.66 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\multinode-894400-m02\id_rsa Username:docker}
	I0318 12:49:32.990582   11340 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.5964186s)
	I0318 12:49:32.990636   11340 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0318 12:49:32.991076   11340 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0318 12:49:33.033156   11340 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0318 12:49:33.033671   11340 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1229 bytes)
	I0318 12:49:33.080605   11340 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0318 12:49:33.080758   11340 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
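copyRemoteCerts above stages `ca.pem`, `server.pem`, and `server-key.pem` into `/etc/docker`; these are the files the dockerd `--tlsverify --tlscacert/--tlscert/--tlskey` flags later in this log point at. A local sketch of the trust relationship being provisioned, minting a throwaway CA and checking that a server cert chains to it (file names mirror the log, but everything here is scratch material, not the test's actual certs):

```shell
# Recreate the CA -> server-cert relationship the provisioner sets up:
# mint a throwaway CA, sign a server cert with it, and verify the chain.
dir="$(mktemp -d)"; cd "$dir"
openssl req -x509 -newkey rsa:2048 -nodes -keyout ca-key.pem -out ca.pem \
        -days 1 -subj "/CN=sketch-ca" 2>/dev/null
openssl req -newkey rsa:2048 -nodes -keyout server-key.pem -out server.csr \
        -subj "/CN=sketch-server" 2>/dev/null
openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem \
        -CAcreateserial -out server.pem -days 1 2>/dev/null
# Prints "server.pem: OK" when the cert was issued by this CA.
openssl verify -CAfile ca.pem server.pem
```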
	I0318 12:49:33.129276   11340 provision.go:87] duration metric: took 14.2342762s to configureAuth
	I0318 12:49:33.129276   11340 buildroot.go:189] setting minikube options for container-runtime
	I0318 12:49:33.129875   11340 config.go:182] Loaded profile config "multinode-894400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 12:49:33.130187   11340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-894400-m02 ).state
	I0318 12:49:35.162899   11340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 12:49:35.163780   11340 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:49:35.163780   11340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-894400-m02 ).networkadapters[0]).ipaddresses[0]
	I0318 12:49:37.567566   11340 main.go:141] libmachine: [stdout =====>] : 172.30.140.66
	
	I0318 12:49:37.567566   11340 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:49:37.573132   11340 main.go:141] libmachine: Using SSH client type: native
	I0318 12:49:37.573917   11340 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa89f80] 0xa8cb60 <nil>  [] 0s} 172.30.140.66 22 <nil> <nil>}
	I0318 12:49:37.573917   11340 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0318 12:49:37.715441   11340 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0318 12:49:37.715441   11340 buildroot.go:70] root file system type: tmpfs
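The `df --output=fstype / | tail -n 1` probe above is how the provisioner classifies the guest's root filesystem (the buildroot guest reports `tmpfs`). The same probe, runnable standalone on any Linux host:

```shell
# Report the filesystem type of / the way the provisioner does; on the
# buildroot guest this prints "tmpfs", on an ordinary host e.g. "ext4".
fstype="$(df --output=fstype / | tail -n 1)"
echo "root file system type: ${fstype}"
```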
	I0318 12:49:37.715441   11340 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0318 12:49:37.715441   11340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-894400-m02 ).state
	I0318 12:49:39.709788   11340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 12:49:39.710533   11340 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:49:39.710533   11340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-894400-m02 ).networkadapters[0]).ipaddresses[0]
	I0318 12:49:42.133583   11340 main.go:141] libmachine: [stdout =====>] : 172.30.140.66
	
	I0318 12:49:42.133583   11340 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:49:42.139336   11340 main.go:141] libmachine: Using SSH client type: native
	I0318 12:49:42.139336   11340 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa89f80] 0xa8cb60 <nil>  [] 0s} 172.30.140.66 22 <nil> <nil>}
	I0318 12:49:42.139926   11340 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.30.129.141"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0318 12:49:42.295869   11340 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.30.129.141
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0318 12:49:42.296034   11340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-894400-m02 ).state
	I0318 12:49:44.299873   11340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 12:49:44.299873   11340 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:49:44.299983   11340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-894400-m02 ).networkadapters[0]).ipaddresses[0]
	I0318 12:49:46.724448   11340 main.go:141] libmachine: [stdout =====>] : 172.30.140.66
	
	I0318 12:49:46.724448   11340 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:49:46.731258   11340 main.go:141] libmachine: Using SSH client type: native
	I0318 12:49:46.731998   11340 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa89f80] 0xa8cb60 <nil>  [] 0s} 172.30.140.66 22 <nil> <nil>}
	I0318 12:49:46.731998   11340 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0318 12:49:48.815915   11340 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
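The `sudo diff -u … || { sudo mv …; systemctl …; }` one-liner above is an update-only-if-changed pattern: when the rendered unit differs from the installed one (or, as in this run, the installed one does not exist yet, hence the `can't stat` message), the candidate is swapped in and the service reloaded. A minimal local sketch of the same pattern with placeholder file names, where an `echo` stands in for the `daemon-reload`/`enable`/`restart` step:

```shell
# Install the candidate file only when it differs from (or is missing as)
# the current one; identical candidates are discarded untouched.
update_if_changed() {
  current="$1"; candidate="$2"
  if diff -u "$current" "$candidate" >/dev/null 2>&1; then
    rm -f "$candidate"            # identical: keep the installed unit
  else
    mv "$candidate" "$current"    # changed or new: install it
    echo "unit updated"           # real flow: systemctl daemon-reload && restart
  fi
}
```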
	
	I0318 12:49:48.815967   11340 machine.go:97] duration metric: took 44.1066542s to provisionDockerMachine
	I0318 12:49:48.816002   11340 client.go:171] duration metric: took 1m52.120674s to LocalClient.Create
	I0318 12:49:48.816002   11340 start.go:167] duration metric: took 1m52.1207091s to libmachine.API.Create "multinode-894400"
	I0318 12:49:48.816002   11340 start.go:293] postStartSetup for "multinode-894400-m02" (driver="hyperv")
	I0318 12:49:48.816002   11340 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0318 12:49:48.830221   11340 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0318 12:49:48.830221   11340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-894400-m02 ).state
	I0318 12:49:50.927721   11340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 12:49:50.927721   11340 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:49:50.928341   11340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-894400-m02 ).networkadapters[0]).ipaddresses[0]
	I0318 12:49:53.430250   11340 main.go:141] libmachine: [stdout =====>] : 172.30.140.66
	
	I0318 12:49:53.431334   11340 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:49:53.431542   11340 sshutil.go:53] new ssh client: &{IP:172.30.140.66 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\multinode-894400-m02\id_rsa Username:docker}
	I0318 12:49:53.545314   11340 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.7150581s)
	I0318 12:49:53.557988   11340 ssh_runner.go:195] Run: cat /etc/os-release
	I0318 12:49:53.566702   11340 command_runner.go:130] > NAME=Buildroot
	I0318 12:49:53.566796   11340 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0318 12:49:53.566796   11340 command_runner.go:130] > ID=buildroot
	I0318 12:49:53.566796   11340 command_runner.go:130] > VERSION_ID=2023.02.9
	I0318 12:49:53.566841   11340 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0318 12:49:53.566841   11340 info.go:137] Remote host: Buildroot 2023.02.9
	I0318 12:49:53.566898   11340 filesync.go:126] Scanning C:\Users\jenkins.minikube3\minikube-integration\.minikube\addons for local assets ...
	I0318 12:49:53.567072   11340 filesync.go:126] Scanning C:\Users\jenkins.minikube3\minikube-integration\.minikube\files for local assets ...
	I0318 12:49:53.568149   11340 filesync.go:149] local asset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\134242.pem -> 134242.pem in /etc/ssl/certs
	I0318 12:49:53.568198   11340 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\134242.pem -> /etc/ssl/certs/134242.pem
	I0318 12:49:53.580174   11340 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0318 12:49:53.598061   11340 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\134242.pem --> /etc/ssl/certs/134242.pem (1708 bytes)
	I0318 12:49:53.644605   11340 start.go:296] duration metric: took 4.8285022s for postStartSetup
	I0318 12:49:53.647292   11340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-894400-m02 ).state
	I0318 12:49:55.734905   11340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 12:49:55.734905   11340 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:49:55.735023   11340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-894400-m02 ).networkadapters[0]).ipaddresses[0]
	I0318 12:49:58.242643   11340 main.go:141] libmachine: [stdout =====>] : 172.30.140.66
	
	I0318 12:49:58.243705   11340 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:49:58.243857   11340 profile.go:142] Saving config to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-894400\config.json ...
	I0318 12:49:58.246190   11340 start.go:128] duration metric: took 2m1.5548637s to createHost
	I0318 12:49:58.246190   11340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-894400-m02 ).state
	I0318 12:50:00.332942   11340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 12:50:00.333007   11340 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:50:00.333007   11340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-894400-m02 ).networkadapters[0]).ipaddresses[0]
	I0318 12:50:02.839271   11340 main.go:141] libmachine: [stdout =====>] : 172.30.140.66
	
	I0318 12:50:02.839271   11340 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:50:02.845118   11340 main.go:141] libmachine: Using SSH client type: native
	I0318 12:50:02.845656   11340 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa89f80] 0xa8cb60 <nil>  [] 0s} 172.30.140.66 22 <nil> <nil>}
	I0318 12:50:02.845689   11340 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0318 12:50:02.986784   11340 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710766202.989729238
	
	I0318 12:50:02.986784   11340 fix.go:216] guest clock: 1710766202.989729238
	I0318 12:50:02.986784   11340 fix.go:229] Guest: 2024-03-18 12:50:02.989729238 +0000 UTC Remote: 2024-03-18 12:49:58.2461902 +0000 UTC m=+327.170955101 (delta=4.743539038s)
	I0318 12:50:02.986784   11340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-894400-m02 ).state
	I0318 12:50:05.063130   11340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 12:50:05.063283   11340 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:50:05.063329   11340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-894400-m02 ).networkadapters[0]).ipaddresses[0]
	I0318 12:50:07.562826   11340 main.go:141] libmachine: [stdout =====>] : 172.30.140.66
	
	I0318 12:50:07.562888   11340 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:50:07.567422   11340 main.go:141] libmachine: Using SSH client type: native
	I0318 12:50:07.567548   11340 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa89f80] 0xa8cb60 <nil>  [] 0s} 172.30.140.66 22 <nil> <nil>}
	I0318 12:50:07.567548   11340 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1710766202
	I0318 12:50:07.719483   11340 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Mar 18 12:50:02 UTC 2024
	
	I0318 12:50:07.719555   11340 fix.go:236] clock set: Mon Mar 18 12:50:02 UTC 2024
	 (err=<nil>)
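The clock-sync step above reads the guest clock (the command logged as `date +%!s(MISSING).%!N(MISSING)` appears to be `date +%s.%N` with its verbs mangled by Go's fmt, which renders an unmatched verb as `%!v(MISSING)`), computes the host/guest delta, and corrects the guest with `sudo date -s @<epoch>`. A sketch of the delta computation using two local reads in place of the SSH round-trip:

```shell
# Fractional-epoch clock delta, as in fix.go's guest-clock check; in the
# real flow "guest" comes from `date +%s.%N` run over SSH inside the VM.
guest="$(date +%s.%N)"
host="$(date +%s.%N)"
delta="$(awk -v g="$guest" -v h="$host" \
  'BEGIN { d = h - g; if (d < 0) d = -d; printf "%.9f", d }')"
echo "delta=${delta}s"
```

With the timestamps from this run (guest `1710766202.989729238`, host `1710766198.2461902`) the same arithmetic gives the logged `delta=4.743539038s`.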
	I0318 12:50:07.719555   11340 start.go:83] releasing machines lock for "multinode-894400-m02", held for 2m11.0281577s
	I0318 12:50:07.719763   11340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-894400-m02 ).state
	I0318 12:50:09.773026   11340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 12:50:09.773459   11340 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:50:09.773554   11340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-894400-m02 ).networkadapters[0]).ipaddresses[0]
	I0318 12:50:12.306008   11340 main.go:141] libmachine: [stdout =====>] : 172.30.140.66
	
	I0318 12:50:12.306008   11340 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:50:12.308855   11340 out.go:177] * Found network options:
	I0318 12:50:12.313113   11340 out.go:177]   - NO_PROXY=172.30.129.141
	W0318 12:50:12.316465   11340 proxy.go:119] fail to check proxy env: Error ip not in block
	I0318 12:50:12.319068   11340 out.go:177]   - NO_PROXY=172.30.129.141
	W0318 12:50:12.321662   11340 proxy.go:119] fail to check proxy env: Error ip not in block
	W0318 12:50:12.323054   11340 proxy.go:119] fail to check proxy env: Error ip not in block
	I0318 12:50:12.324479   11340 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0318 12:50:12.325452   11340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-894400-m02 ).state
	I0318 12:50:12.337522   11340 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0318 12:50:12.337522   11340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-894400-m02 ).state
	I0318 12:50:14.481711   11340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 12:50:14.481815   11340 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:50:14.481989   11340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-894400-m02 ).networkadapters[0]).ipaddresses[0]
	I0318 12:50:14.481989   11340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 12:50:14.481989   11340 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:50:14.481989   11340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-894400-m02 ).networkadapters[0]).ipaddresses[0]
	I0318 12:50:17.114078   11340 main.go:141] libmachine: [stdout =====>] : 172.30.140.66
	
	I0318 12:50:17.115141   11340 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:50:17.115278   11340 sshutil.go:53] new ssh client: &{IP:172.30.140.66 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\multinode-894400-m02\id_rsa Username:docker}
	I0318 12:50:17.139196   11340 main.go:141] libmachine: [stdout =====>] : 172.30.140.66
	
	I0318 12:50:17.140218   11340 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:50:17.140485   11340 sshutil.go:53] new ssh client: &{IP:172.30.140.66 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\multinode-894400-m02\id_rsa Username:docker}
	I0318 12:50:17.306319   11340 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0318 12:50:17.306319   11340 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.9818031s)
	I0318 12:50:17.306319   11340 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	I0318 12:50:17.306522   11340 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (4.9689084s)
	W0318 12:50:17.306601   11340 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0318 12:50:17.320113   11340 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0318 12:50:17.348107   11340 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0318 12:50:17.348605   11340 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0318 12:50:17.348605   11340 start.go:494] detecting cgroup driver to use...
	I0318 12:50:17.348605   11340 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0318 12:50:17.382316   11340 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0318 12:50:17.394756   11340 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0318 12:50:17.425683   11340 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0318 12:50:17.443152   11340 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0318 12:50:17.455141   11340 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0318 12:50:17.486301   11340 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0318 12:50:17.518291   11340 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0318 12:50:17.547578   11340 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0318 12:50:17.577291   11340 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0318 12:50:17.606773   11340 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0318 12:50:17.636197   11340 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0318 12:50:17.653353   11340 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0318 12:50:17.665054   11340 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0318 12:50:17.696118   11340 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 12:50:17.886491   11340 ssh_runner.go:195] Run: sudo systemctl restart containerd
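The run of `sed` commands above rewrites `/etc/containerd/config.toml` so containerd uses the cgroupfs driver and the v2 runc shim. The two decisive substitutions, applied here to a scratch copy of a config rather than the real file:

```shell
# Apply the key edits from the log to a scratch containerd config:
# disable SystemdCgroup (=> cgroupfs driver) and move v1 runtimes to runc v2.
cfg="$(mktemp)"
printf '    SystemdCgroup = true\n    runtime_type = "io.containerd.runtime.v1.linux"\n' > "$cfg"
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$cfg"
sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' "$cfg"
cat "$cfg"
```

The `\1` back-reference preserves the original indentation, which matters because the key sits inside a nested TOML table.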
	I0318 12:50:17.918879   11340 start.go:494] detecting cgroup driver to use...
	I0318 12:50:17.931045   11340 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0318 12:50:17.953923   11340 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0318 12:50:17.953923   11340 command_runner.go:130] > [Unit]
	I0318 12:50:17.953923   11340 command_runner.go:130] > Description=Docker Application Container Engine
	I0318 12:50:17.953923   11340 command_runner.go:130] > Documentation=https://docs.docker.com
	I0318 12:50:17.953923   11340 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0318 12:50:17.953923   11340 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0318 12:50:17.953923   11340 command_runner.go:130] > StartLimitBurst=3
	I0318 12:50:17.953923   11340 command_runner.go:130] > StartLimitIntervalSec=60
	I0318 12:50:17.953923   11340 command_runner.go:130] > [Service]
	I0318 12:50:17.953923   11340 command_runner.go:130] > Type=notify
	I0318 12:50:17.953923   11340 command_runner.go:130] > Restart=on-failure
	I0318 12:50:17.953923   11340 command_runner.go:130] > Environment=NO_PROXY=172.30.129.141
	I0318 12:50:17.953923   11340 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0318 12:50:17.953923   11340 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0318 12:50:17.953923   11340 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0318 12:50:17.953923   11340 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0318 12:50:17.953923   11340 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0318 12:50:17.953923   11340 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0318 12:50:17.953923   11340 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0318 12:50:17.953923   11340 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0318 12:50:17.953923   11340 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0318 12:50:17.953923   11340 command_runner.go:130] > ExecStart=
	I0318 12:50:17.953923   11340 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0318 12:50:17.953923   11340 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0318 12:50:17.953923   11340 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0318 12:50:17.953923   11340 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0318 12:50:17.953923   11340 command_runner.go:130] > LimitNOFILE=infinity
	I0318 12:50:17.953923   11340 command_runner.go:130] > LimitNPROC=infinity
	I0318 12:50:17.953923   11340 command_runner.go:130] > LimitCORE=infinity
	I0318 12:50:17.953923   11340 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0318 12:50:17.953923   11340 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0318 12:50:17.953923   11340 command_runner.go:130] > TasksMax=infinity
	I0318 12:50:17.953923   11340 command_runner.go:130] > TimeoutStartSec=0
	I0318 12:50:17.953923   11340 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0318 12:50:17.954471   11340 command_runner.go:130] > Delegate=yes
	I0318 12:50:17.954471   11340 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0318 12:50:17.954471   11340 command_runner.go:130] > KillMode=process
	I0318 12:50:17.954517   11340 command_runner.go:130] > [Install]
	I0318 12:50:17.954517   11340 command_runner.go:130] > WantedBy=multi-user.target
	I0318 12:50:17.967875   11340 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0318 12:50:18.004086   11340 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0318 12:50:18.043970   11340 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0318 12:50:18.077634   11340 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0318 12:50:18.115707   11340 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0318 12:50:18.175942   11340 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0318 12:50:18.197678   11340 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0318 12:50:18.227495   11340 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0318 12:50:18.239832   11340 ssh_runner.go:195] Run: which cri-dockerd
	I0318 12:50:18.245705   11340 command_runner.go:130] > /usr/bin/cri-dockerd
	I0318 12:50:18.256882   11340 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0318 12:50:18.274492   11340 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0318 12:50:18.317793   11340 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0318 12:50:18.511544   11340 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0318 12:50:18.683073   11340 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0318 12:50:18.683186   11340 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0318 12:50:18.733996   11340 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 12:50:18.927125   11340 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0318 12:50:21.407858   11340 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.4806786s)
	I0318 12:50:21.420938   11340 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0318 12:50:21.457762   11340 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0318 12:50:21.492094   11340 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0318 12:50:21.683238   11340 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0318 12:50:21.871123   11340 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 12:50:22.068696   11340 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0318 12:50:22.110995   11340 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0318 12:50:22.144445   11340 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 12:50:22.341164   11340 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0318 12:50:22.438206   11340 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0318 12:50:22.451124   11340 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0318 12:50:22.460014   11340 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0318 12:50:22.460014   11340 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0318 12:50:22.460077   11340 command_runner.go:130] > Device: 0,22	Inode: 882         Links: 1
	I0318 12:50:22.460077   11340 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0318 12:50:22.460077   11340 command_runner.go:130] > Access: 2024-03-18 12:50:22.365694979 +0000
	I0318 12:50:22.460077   11340 command_runner.go:130] > Modify: 2024-03-18 12:50:22.365694979 +0000
	I0318 12:50:22.460153   11340 command_runner.go:130] > Change: 2024-03-18 12:50:22.371694979 +0000
	I0318 12:50:22.460153   11340 command_runner.go:130] >  Birth: -
	I0318 12:50:22.460283   11340 start.go:562] Will wait 60s for crictl version
	I0318 12:50:22.474172   11340 ssh_runner.go:195] Run: which crictl
	I0318 12:50:22.480285   11340 command_runner.go:130] > /usr/bin/crictl
	I0318 12:50:22.490271   11340 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0318 12:50:22.561082   11340 command_runner.go:130] > Version:  0.1.0
	I0318 12:50:22.561904   11340 command_runner.go:130] > RuntimeName:  docker
	I0318 12:50:22.561958   11340 command_runner.go:130] > RuntimeVersion:  25.0.4
	I0318 12:50:22.561958   11340 command_runner.go:130] > RuntimeApiVersion:  v1
	I0318 12:50:22.561958   11340 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  25.0.4
	RuntimeApiVersion:  v1
	I0318 12:50:22.573033   11340 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0318 12:50:22.604593   11340 command_runner.go:130] > 25.0.4
	I0318 12:50:22.613433   11340 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0318 12:50:22.650670   11340 command_runner.go:130] > 25.0.4
	I0318 12:50:22.654937   11340 out.go:204] * Preparing Kubernetes v1.28.4 on Docker 25.0.4 ...
	I0318 12:50:22.657597   11340 out.go:177]   - env NO_PROXY=172.30.129.141
	I0318 12:50:22.660231   11340 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0318 12:50:22.664432   11340 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0318 12:50:22.664432   11340 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0318 12:50:22.664432   11340 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0318 12:50:22.664432   11340 ip.go:207] Found interface: {Index:9 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:93:df:b5 Flags:up|broadcast|multicast|running}
	I0318 12:50:22.667424   11340 ip.go:210] interface addr: fe80::572b:6b1d:9130:e88b/64
	I0318 12:50:22.667424   11340 ip.go:210] interface addr: 172.30.128.1/20
	I0318 12:50:22.682409   11340 ssh_runner.go:195] Run: grep 172.30.128.1	host.minikube.internal$ /etc/hosts
	I0318 12:50:22.688765   11340 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.30.128.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 12:50:22.709552   11340 mustload.go:65] Loading cluster: multinode-894400
	I0318 12:50:22.710336   11340 config.go:182] Loaded profile config "multinode-894400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 12:50:22.710586   11340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-894400 ).state
	I0318 12:50:24.730755   11340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 12:50:24.730755   11340 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:50:24.730755   11340 host.go:66] Checking if "multinode-894400" exists ...
	I0318 12:50:24.731660   11340 certs.go:68] Setting up C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-894400 for IP: 172.30.140.66
	I0318 12:50:24.731660   11340 certs.go:194] generating shared ca certs ...
	I0318 12:50:24.731660   11340 certs.go:226] acquiring lock for ca certs: {Name:mk09ff4ada22228900e1815c250154c7d8d76854 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 12:50:24.732607   11340 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.key
	I0318 12:50:24.732877   11340 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.key
	I0318 12:50:24.732877   11340 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0318 12:50:24.732877   11340 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0318 12:50:24.733541   11340 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0318 12:50:24.733568   11340 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0318 12:50:24.734143   11340 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\13424.pem (1338 bytes)
	W0318 12:50:24.734630   11340 certs.go:480] ignoring C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\13424_empty.pem, impossibly tiny 0 bytes
	I0318 12:50:24.734894   11340 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0318 12:50:24.735095   11340 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0318 12:50:24.735095   11340 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0318 12:50:24.735095   11340 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0318 12:50:24.736122   11340 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\134242.pem (1708 bytes)
	I0318 12:50:24.736348   11340 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\13424.pem -> /usr/share/ca-certificates/13424.pem
	I0318 12:50:24.736553   11340 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\134242.pem -> /usr/share/ca-certificates/134242.pem
	I0318 12:50:24.736714   11340 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0318 12:50:24.737058   11340 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0318 12:50:24.784251   11340 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0318 12:50:24.831860   11340 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0318 12:50:24.882145   11340 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0318 12:50:24.924839   11340 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\13424.pem --> /usr/share/ca-certificates/13424.pem (1338 bytes)
	I0318 12:50:24.968181   11340 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\134242.pem --> /usr/share/ca-certificates/134242.pem (1708 bytes)
	I0318 12:50:25.010564   11340 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0318 12:50:25.072089   11340 ssh_runner.go:195] Run: openssl version
	I0318 12:50:25.081059   11340 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0318 12:50:25.093252   11340 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0318 12:50:25.122844   11340 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0318 12:50:25.130553   11340 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Mar 18 11:07 /usr/share/ca-certificates/minikubeCA.pem
	I0318 12:50:25.130696   11340 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 18 11:07 /usr/share/ca-certificates/minikubeCA.pem
	I0318 12:50:25.143125   11340 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0318 12:50:25.151840   11340 command_runner.go:130] > b5213941
	I0318 12:50:25.162815   11340 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0318 12:50:25.192362   11340 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13424.pem && ln -fs /usr/share/ca-certificates/13424.pem /etc/ssl/certs/13424.pem"
	I0318 12:50:25.222066   11340 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13424.pem
	I0318 12:50:25.228070   11340 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Mar 18 11:25 /usr/share/ca-certificates/13424.pem
	I0318 12:50:25.228585   11340 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 18 11:25 /usr/share/ca-certificates/13424.pem
	I0318 12:50:25.242682   11340 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13424.pem
	I0318 12:50:25.252155   11340 command_runner.go:130] > 51391683
	I0318 12:50:25.262701   11340 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13424.pem /etc/ssl/certs/51391683.0"
	I0318 12:50:25.291594   11340 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/134242.pem && ln -fs /usr/share/ca-certificates/134242.pem /etc/ssl/certs/134242.pem"
	I0318 12:50:25.320746   11340 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/134242.pem
	I0318 12:50:25.327693   11340 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Mar 18 11:25 /usr/share/ca-certificates/134242.pem
	I0318 12:50:25.328255   11340 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 18 11:25 /usr/share/ca-certificates/134242.pem
	I0318 12:50:25.339733   11340 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/134242.pem
	I0318 12:50:25.346894   11340 command_runner.go:130] > 3ec20f2e
	I0318 12:50:25.360478   11340 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/134242.pem /etc/ssl/certs/3ec20f2e.0"
	I0318 12:50:25.390063   11340 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0318 12:50:25.395715   11340 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0318 12:50:25.396392   11340 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0318 12:50:25.396392   11340 kubeadm.go:928] updating node {m02 172.30.140.66 8443 v1.28.4 docker false true} ...
	I0318 12:50:25.396926   11340 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-894400-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.30.140.66
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-894400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0318 12:50:25.407298   11340 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0318 12:50:25.422996   11340 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/binaries/v1.28.4': No such file or directory
	I0318 12:50:25.424222   11340 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.28.4: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.28.4': No such file or directory
	
	Initiating transfer...
	I0318 12:50:25.435357   11340 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.28.4
	I0318 12:50:25.454249   11340 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl.sha256
	I0318 12:50:25.454249   11340 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubeadm.sha256
	I0318 12:50:25.454249   11340 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubelet.sha256
	I0318 12:50:25.454249   11340 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubectl -> /var/lib/minikube/binaries/v1.28.4/kubectl
	I0318 12:50:25.454249   11340 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubeadm -> /var/lib/minikube/binaries/v1.28.4/kubeadm
	I0318 12:50:25.469949   11340 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubectl
	I0318 12:50:25.470569   11340 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 12:50:25.472520   11340 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubeadm
	I0318 12:50:25.475523   11340 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubectl': No such file or directory
	I0318 12:50:25.475523   11340 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubectl': No such file or directory
	I0318 12:50:25.476742   11340 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubectl --> /var/lib/minikube/binaries/v1.28.4/kubectl (49885184 bytes)
	I0318 12:50:25.499839   11340 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubeadm': No such file or directory
	I0318 12:50:25.499932   11340 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubelet -> /var/lib/minikube/binaries/v1.28.4/kubelet
	I0318 12:50:25.510924   11340 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubeadm': No such file or directory
	I0318 12:50:25.510924   11340 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubeadm --> /var/lib/minikube/binaries/v1.28.4/kubeadm (49102848 bytes)
	I0318 12:50:25.513711   11340 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubelet
	I0318 12:50:25.552280   11340 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubelet': No such file or directory
	I0318 12:50:25.552280   11340 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubelet': No such file or directory
	I0318 12:50:25.552812   11340 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubelet --> /var/lib/minikube/binaries/v1.28.4/kubelet (110850048 bytes)
	I0318 12:50:26.802008   11340 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0318 12:50:26.821377   11340 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (320 bytes)
	I0318 12:50:26.851908   11340 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0318 12:50:26.898896   11340 ssh_runner.go:195] Run: grep 172.30.129.141	control-plane.minikube.internal$ /etc/hosts
	I0318 12:50:26.904827   11340 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.30.129.141	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 12:50:26.937084   11340 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 12:50:27.129712   11340 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 12:50:27.160137   11340 host.go:66] Checking if "multinode-894400" exists ...
	I0318 12:50:27.160401   11340 start.go:316] joinCluster: &{Name:multinode-894400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.
4 ClusterName:multinode-894400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.30.129.141 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.30.140.66 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertE
xpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 12:50:27.161015   11340 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0318 12:50:27.161015   11340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-894400 ).state
	I0318 12:50:29.220703   11340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 12:50:29.221254   11340 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:50:29.221254   11340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-894400 ).networkadapters[0]).ipaddresses[0]
	I0318 12:50:31.646803   11340 main.go:141] libmachine: [stdout =====>] : 172.30.129.141
	
	I0318 12:50:31.646803   11340 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:50:31.647346   11340 sshutil.go:53] new ssh client: &{IP:172.30.129.141 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\multinode-894400\id_rsa Username:docker}
	I0318 12:50:31.820162   11340 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token t1ldpp.z5zs9380zwbj0fr6 --discovery-token-ca-cert-hash sha256:0e85bffcabbe1ecc87b47edfa4897ded335ea6d7a4ea5ae94c3523fb207c8760 
	I0318 12:50:31.820227   11340 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0": (4.6591772s)
	I0318 12:50:31.820227   11340 start.go:342] trying to join worker node "m02" to cluster: &{Name:m02 IP:172.30.140.66 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0318 12:50:31.820227   11340 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token t1ldpp.z5zs9380zwbj0fr6 --discovery-token-ca-cert-hash sha256:0e85bffcabbe1ecc87b47edfa4897ded335ea6d7a4ea5ae94c3523fb207c8760 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=multinode-894400-m02"
	I0318 12:50:32.057562   11340 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0318 12:50:34.867918   11340 command_runner.go:130] > [preflight] Running pre-flight checks
	I0318 12:50:34.868098   11340 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0318 12:50:34.868176   11340 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0318 12:50:34.868176   11340 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0318 12:50:34.868176   11340 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0318 12:50:34.868176   11340 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0318 12:50:34.868176   11340 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I0318 12:50:34.868253   11340 command_runner.go:130] > This node has joined the cluster:
	I0318 12:50:34.868253   11340 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0318 12:50:34.868253   11340 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0318 12:50:34.868253   11340 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0318 12:50:34.868253   11340 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token t1ldpp.z5zs9380zwbj0fr6 --discovery-token-ca-cert-hash sha256:0e85bffcabbe1ecc87b47edfa4897ded335ea6d7a4ea5ae94c3523fb207c8760 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=multinode-894400-m02": (3.0480032s)
	I0318 12:50:34.868324   11340 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0318 12:50:35.080972   11340 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
	I0318 12:50:35.292336   11340 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-894400-m02 minikube.k8s.io/updated_at=2024_03_18T12_50_35_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=16bdcbec856cf730004e5bed78d1b7625f13388a minikube.k8s.io/name=multinode-894400 minikube.k8s.io/primary=false
	I0318 12:50:35.418822   11340 command_runner.go:130] > node/multinode-894400-m02 labeled
	I0318 12:50:35.419180   11340 start.go:318] duration metric: took 8.2587172s to joinCluster
	I0318 12:50:35.419303   11340 start.go:234] Will wait 6m0s for node &{Name:m02 IP:172.30.140.66 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0318 12:50:35.423187   11340 out.go:177] * Verifying Kubernetes components...
	I0318 12:50:35.419469   11340 config.go:182] Loaded profile config "multinode-894400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 12:50:35.439240   11340 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 12:50:35.638729   11340 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 12:50:35.664962   11340 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	I0318 12:50:35.665611   11340 kapi.go:59] client config for multinode-894400: &rest.Config{Host:"https://172.30.129.141:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\multinode-894400\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\multinode-894400\\client.key", CAFile:"C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CA
Data:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1e8b2e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0318 12:50:35.666519   11340 node_ready.go:35] waiting up to 6m0s for node "multinode-894400-m02" to be "Ready" ...
	I0318 12:50:35.666619   11340 round_trippers.go:463] GET https://172.30.129.141:8443/api/v1/nodes/multinode-894400-m02
	I0318 12:50:35.666770   11340 round_trippers.go:469] Request Headers:
	I0318 12:50:35.666770   11340 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:50:35.666770   11340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:50:35.680525   11340 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0318 12:50:35.680525   11340 round_trippers.go:577] Response Headers:
	I0318 12:50:35.680525   11340 round_trippers.go:580]     Content-Type: application/json
	I0318 12:50:35.680525   11340 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 12:50:35.680525   11340 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 12:50:35.680525   11340 round_trippers.go:580]     Content-Length: 4043
	I0318 12:50:35.680525   11340 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:50:35 GMT
	I0318 12:50:35.680525   11340 round_trippers.go:580]     Audit-Id: 48748daf-4f37-421e-ae81-a68a5d1331b9
	I0318 12:50:35.680525   11340 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:50:35.680689   11340 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"714c6de4-85c9-4721-aea6-ad8f2be4e14c","resourceVersion":"597","creationTimestamp":"2024-03-18T12:50:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T12_50_35_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:50:34Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3019 chars]
	I0318 12:50:36.172506   11340 round_trippers.go:463] GET https://172.30.129.141:8443/api/v1/nodes/multinode-894400-m02
	I0318 12:50:36.172583   11340 round_trippers.go:469] Request Headers:
	I0318 12:50:36.172583   11340 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:50:36.172650   11340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:50:36.176298   11340 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:50:36.176361   11340 round_trippers.go:577] Response Headers:
	I0318 12:50:36.176361   11340 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:50:36.176361   11340 round_trippers.go:580]     Content-Type: application/json
	I0318 12:50:36.176361   11340 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 12:50:36.176361   11340 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 12:50:36.176361   11340 round_trippers.go:580]     Content-Length: 4043
	I0318 12:50:36.176361   11340 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:50:36 GMT
	I0318 12:50:36.176361   11340 round_trippers.go:580]     Audit-Id: a0f62ec2-bf00-4687-b8c6-e737fc58db71
	I0318 12:50:36.176556   11340 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"714c6de4-85c9-4721-aea6-ad8f2be4e14c","resourceVersion":"597","creationTimestamp":"2024-03-18T12:50:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T12_50_35_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:50:34Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3019 chars]
	I0318 12:50:36.674096   11340 round_trippers.go:463] GET https://172.30.129.141:8443/api/v1/nodes/multinode-894400-m02
	I0318 12:50:36.674096   11340 round_trippers.go:469] Request Headers:
	I0318 12:50:36.674096   11340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:50:36.674096   11340 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:50:36.677616   11340 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 12:50:36.677664   11340 round_trippers.go:577] Response Headers:
	I0318 12:50:36.677664   11340 round_trippers.go:580]     Content-Type: application/json
	I0318 12:50:36.677695   11340 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 12:50:36.677695   11340 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 12:50:36.677695   11340 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:50:36 GMT
	I0318 12:50:36.677695   11340 round_trippers.go:580]     Audit-Id: d5b93ed3-f7e2-4a32-8bc3-bb518a87b9c8
	I0318 12:50:36.677695   11340 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:50:36.677734   11340 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"714c6de4-85c9-4721-aea6-ad8f2be4e14c","resourceVersion":"600","creationTimestamp":"2024-03-18T12:50:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T12_50_35_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:50:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3128 chars]
	I0318 12:50:37.173125   11340 round_trippers.go:463] GET https://172.30.129.141:8443/api/v1/nodes/multinode-894400-m02
	I0318 12:50:37.173348   11340 round_trippers.go:469] Request Headers:
	I0318 12:50:37.173348   11340 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:50:37.173348   11340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:50:37.177723   11340 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:50:37.177723   11340 round_trippers.go:577] Response Headers:
	I0318 12:50:37.177723   11340 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 12:50:37.177723   11340 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 12:50:37.177723   11340 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:50:37 GMT
	I0318 12:50:37.177723   11340 round_trippers.go:580]     Audit-Id: 9a84a19f-6479-43a6-8127-3b367db9bbab
	I0318 12:50:37.177723   11340 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:50:37.177723   11340 round_trippers.go:580]     Content-Type: application/json
	I0318 12:50:37.177723   11340 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"714c6de4-85c9-4721-aea6-ad8f2be4e14c","resourceVersion":"600","creationTimestamp":"2024-03-18T12:50:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T12_50_35_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:50:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3128 chars]
	I0318 12:50:37.677423   11340 round_trippers.go:463] GET https://172.30.129.141:8443/api/v1/nodes/multinode-894400-m02
	I0318 12:50:37.677494   11340 round_trippers.go:469] Request Headers:
	I0318 12:50:37.677494   11340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:50:37.677494   11340 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:50:37.681069   11340 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:50:37.681069   11340 round_trippers.go:577] Response Headers:
	I0318 12:50:37.681831   11340 round_trippers.go:580]     Audit-Id: fa61b2d7-c5bb-4d56-8482-819bcee9c153
	I0318 12:50:37.681831   11340 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:50:37.681831   11340 round_trippers.go:580]     Content-Type: application/json
	I0318 12:50:37.681831   11340 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 12:50:37.681831   11340 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 12:50:37.681831   11340 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:50:37 GMT
	I0318 12:50:37.682080   11340 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"714c6de4-85c9-4721-aea6-ad8f2be4e14c","resourceVersion":"600","creationTimestamp":"2024-03-18T12:50:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T12_50_35_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:50:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3128 chars]
	I0318 12:50:37.682558   11340 node_ready.go:53] node "multinode-894400-m02" has status "Ready":"False"
	I0318 12:50:38.177073   11340 round_trippers.go:463] GET https://172.30.129.141:8443/api/v1/nodes/multinode-894400-m02
	I0318 12:50:38.177073   11340 round_trippers.go:469] Request Headers:
	I0318 12:50:38.177073   11340 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:50:38.177073   11340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:50:38.285376   11340 round_trippers.go:574] Response Status: 200 OK in 108 milliseconds
	I0318 12:50:38.285454   11340 round_trippers.go:577] Response Headers:
	I0318 12:50:38.285454   11340 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 12:50:38.285454   11340 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:50:38 GMT
	I0318 12:50:38.285454   11340 round_trippers.go:580]     Audit-Id: d78391e8-25fc-41df-b4ff-3f437b70ad3e
	I0318 12:50:38.285454   11340 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:50:38.285454   11340 round_trippers.go:580]     Content-Type: application/json
	I0318 12:50:38.285454   11340 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 12:50:38.285454   11340 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"714c6de4-85c9-4721-aea6-ad8f2be4e14c","resourceVersion":"600","creationTimestamp":"2024-03-18T12:50:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T12_50_35_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:50:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3128 chars]
	I0318 12:50:38.681686   11340 round_trippers.go:463] GET https://172.30.129.141:8443/api/v1/nodes/multinode-894400-m02
	I0318 12:50:38.681889   11340 round_trippers.go:469] Request Headers:
	I0318 12:50:38.681889   11340 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:50:38.682004   11340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:50:38.694008   11340 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0318 12:50:38.694008   11340 round_trippers.go:577] Response Headers:
	I0318 12:50:38.694008   11340 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:50:38 GMT
	I0318 12:50:38.694008   11340 round_trippers.go:580]     Audit-Id: bd229e57-50f9-4551-aad4-2f1872d96b27
	I0318 12:50:38.694008   11340 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:50:38.694008   11340 round_trippers.go:580]     Content-Type: application/json
	I0318 12:50:38.694008   11340 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 12:50:38.694963   11340 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 12:50:38.695338   11340 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"714c6de4-85c9-4721-aea6-ad8f2be4e14c","resourceVersion":"600","creationTimestamp":"2024-03-18T12:50:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T12_50_35_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:50:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3128 chars]
	I0318 12:50:39.170747   11340 round_trippers.go:463] GET https://172.30.129.141:8443/api/v1/nodes/multinode-894400-m02
	I0318 12:50:39.170747   11340 round_trippers.go:469] Request Headers:
	I0318 12:50:39.170747   11340 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:50:39.170747   11340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:50:39.173331   11340 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 12:50:39.173331   11340 round_trippers.go:577] Response Headers:
	I0318 12:50:39.173331   11340 round_trippers.go:580]     Audit-Id: ab39449b-7567-4bc8-8d1c-075a558ea353
	I0318 12:50:39.173331   11340 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:50:39.173331   11340 round_trippers.go:580]     Content-Type: application/json
	I0318 12:50:39.173331   11340 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 12:50:39.173331   11340 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 12:50:39.173331   11340 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:50:39 GMT
	I0318 12:50:39.174467   11340 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"714c6de4-85c9-4721-aea6-ad8f2be4e14c","resourceVersion":"600","creationTimestamp":"2024-03-18T12:50:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T12_50_35_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:50:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3128 chars]
	I0318 12:50:39.675746   11340 round_trippers.go:463] GET https://172.30.129.141:8443/api/v1/nodes/multinode-894400-m02
	I0318 12:50:39.675746   11340 round_trippers.go:469] Request Headers:
	I0318 12:50:39.675746   11340 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:50:39.675746   11340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:50:39.680530   11340 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:50:39.680530   11340 round_trippers.go:577] Response Headers:
	I0318 12:50:39.680530   11340 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 12:50:39.680530   11340 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 12:50:39.680530   11340 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:50:39 GMT
	I0318 12:50:39.680530   11340 round_trippers.go:580]     Audit-Id: 8a27c275-a707-4a2c-81ac-e271974a7e69
	I0318 12:50:39.680530   11340 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:50:39.680530   11340 round_trippers.go:580]     Content-Type: application/json
	I0318 12:50:39.680530   11340 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"714c6de4-85c9-4721-aea6-ad8f2be4e14c","resourceVersion":"600","creationTimestamp":"2024-03-18T12:50:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T12_50_35_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:50:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3128 chars]
	I0318 12:50:40.167136   11340 round_trippers.go:463] GET https://172.30.129.141:8443/api/v1/nodes/multinode-894400-m02
	I0318 12:50:40.167193   11340 round_trippers.go:469] Request Headers:
	I0318 12:50:40.167193   11340 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:50:40.167193   11340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:50:40.175294   11340 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0318 12:50:40.175294   11340 round_trippers.go:577] Response Headers:
	I0318 12:50:40.175294   11340 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 12:50:40.175294   11340 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 12:50:40.175294   11340 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:50:40 GMT
	I0318 12:50:40.175294   11340 round_trippers.go:580]     Audit-Id: 92f10506-5f71-4bdb-8b83-fd5ae204e461
	I0318 12:50:40.175294   11340 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:50:40.175294   11340 round_trippers.go:580]     Content-Type: application/json
	I0318 12:50:40.176290   11340 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"714c6de4-85c9-4721-aea6-ad8f2be4e14c","resourceVersion":"600","creationTimestamp":"2024-03-18T12:50:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T12_50_35_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:50:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3128 chars]
	I0318 12:50:40.176290   11340 node_ready.go:53] node "multinode-894400-m02" has status "Ready":"False"
	I0318 12:50:40.676065   11340 round_trippers.go:463] GET https://172.30.129.141:8443/api/v1/nodes/multinode-894400-m02
	I0318 12:50:40.676065   11340 round_trippers.go:469] Request Headers:
	I0318 12:50:40.676065   11340 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:50:40.676065   11340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:50:40.679489   11340 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 12:50:40.679489   11340 round_trippers.go:577] Response Headers:
	I0318 12:50:40.679489   11340 round_trippers.go:580]     Audit-Id: 188e2564-40fe-4db5-b3d2-7cd23d5de87c
	I0318 12:50:40.679584   11340 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:50:40.679584   11340 round_trippers.go:580]     Content-Type: application/json
	I0318 12:50:40.679584   11340 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 12:50:40.679584   11340 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 12:50:40.679584   11340 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:50:40 GMT
	I0318 12:50:40.679774   11340 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"714c6de4-85c9-4721-aea6-ad8f2be4e14c","resourceVersion":"600","creationTimestamp":"2024-03-18T12:50:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T12_50_35_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:50:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3128 chars]
	I0318 12:50:41.168048   11340 round_trippers.go:463] GET https://172.30.129.141:8443/api/v1/nodes/multinode-894400-m02
	I0318 12:50:41.168048   11340 round_trippers.go:469] Request Headers:
	I0318 12:50:41.168048   11340 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:50:41.168048   11340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:50:41.171534   11340 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:50:41.171534   11340 round_trippers.go:577] Response Headers:
	I0318 12:50:41.172131   11340 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:50:41.172131   11340 round_trippers.go:580]     Content-Type: application/json
	I0318 12:50:41.172131   11340 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 12:50:41.172131   11340 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 12:50:41.172131   11340 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:50:41 GMT
	I0318 12:50:41.172131   11340 round_trippers.go:580]     Audit-Id: c0b9f5f9-3899-48e3-b57b-c4d5c08f22f8
	I0318 12:50:41.172246   11340 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"714c6de4-85c9-4721-aea6-ad8f2be4e14c","resourceVersion":"600","creationTimestamp":"2024-03-18T12:50:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T12_50_35_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:50:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3128 chars]
	I0318 12:50:41.676042   11340 round_trippers.go:463] GET https://172.30.129.141:8443/api/v1/nodes/multinode-894400-m02
	I0318 12:50:41.676092   11340 round_trippers.go:469] Request Headers:
	I0318 12:50:41.676151   11340 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:50:41.676151   11340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:50:41.683310   11340 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0318 12:50:41.683310   11340 round_trippers.go:577] Response Headers:
	I0318 12:50:41.683310   11340 round_trippers.go:580]     Audit-Id: e46f4481-6e33-4620-84db-de5145745a85
	I0318 12:50:41.683310   11340 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:50:41.683310   11340 round_trippers.go:580]     Content-Type: application/json
	I0318 12:50:41.683310   11340 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 12:50:41.683310   11340 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 12:50:41.683310   11340 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:50:41 GMT
	I0318 12:50:41.684041   11340 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"714c6de4-85c9-4721-aea6-ad8f2be4e14c","resourceVersion":"600","creationTimestamp":"2024-03-18T12:50:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T12_50_35_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:50:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3128 chars]
	I0318 12:50:42.167419   11340 round_trippers.go:463] GET https://172.30.129.141:8443/api/v1/nodes/multinode-894400-m02
	I0318 12:50:42.167621   11340 round_trippers.go:469] Request Headers:
	I0318 12:50:42.167621   11340 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:50:42.167621   11340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:50:42.175036   11340 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0318 12:50:42.175036   11340 round_trippers.go:577] Response Headers:
	I0318 12:50:42.175036   11340 round_trippers.go:580]     Audit-Id: eb9eff91-7125-4415-b7c3-d71662a70738
	I0318 12:50:42.175036   11340 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:50:42.175036   11340 round_trippers.go:580]     Content-Type: application/json
	I0318 12:50:42.175036   11340 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 12:50:42.175036   11340 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 12:50:42.175036   11340 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:50:42 GMT
	I0318 12:50:42.175036   11340 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"714c6de4-85c9-4721-aea6-ad8f2be4e14c","resourceVersion":"600","creationTimestamp":"2024-03-18T12:50:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T12_50_35_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:50:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3128 chars]
	I0318 12:50:42.670551   11340 round_trippers.go:463] GET https://172.30.129.141:8443/api/v1/nodes/multinode-894400-m02
	I0318 12:50:42.670580   11340 round_trippers.go:469] Request Headers:
	I0318 12:50:42.670627   11340 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:50:42.670627   11340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:50:42.674505   11340 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:50:42.674505   11340 round_trippers.go:577] Response Headers:
	I0318 12:50:42.674505   11340 round_trippers.go:580]     Audit-Id: e94d17eb-375a-417b-b442-a058cb942599
	I0318 12:50:42.674505   11340 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:50:42.674505   11340 round_trippers.go:580]     Content-Type: application/json
	I0318 12:50:42.674505   11340 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 12:50:42.675106   11340 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 12:50:42.675106   11340 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:50:42 GMT
	I0318 12:50:42.675301   11340 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"714c6de4-85c9-4721-aea6-ad8f2be4e14c","resourceVersion":"600","creationTimestamp":"2024-03-18T12:50:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T12_50_35_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:50:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3128 chars]
	I0318 12:50:42.675616   11340 node_ready.go:53] node "multinode-894400-m02" has status "Ready":"False"
	I0318 12:50:43.177754   11340 round_trippers.go:463] GET https://172.30.129.141:8443/api/v1/nodes/multinode-894400-m02
	I0318 12:50:43.177848   11340 round_trippers.go:469] Request Headers:
	I0318 12:50:43.177848   11340 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:50:43.177848   11340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:50:43.184117   11340 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0318 12:50:43.184299   11340 round_trippers.go:577] Response Headers:
	I0318 12:50:43.184299   11340 round_trippers.go:580]     Audit-Id: 62dcdd00-41bc-4ed6-aa2d-97e71a91fe89
	I0318 12:50:43.184299   11340 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:50:43.184299   11340 round_trippers.go:580]     Content-Type: application/json
	I0318 12:50:43.184299   11340 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 12:50:43.184345   11340 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 12:50:43.184345   11340 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:50:43 GMT
	I0318 12:50:43.184345   11340 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"714c6de4-85c9-4721-aea6-ad8f2be4e14c","resourceVersion":"600","creationTimestamp":"2024-03-18T12:50:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T12_50_35_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:50:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3128 chars]
	I0318 12:50:43.680340   11340 round_trippers.go:463] GET https://172.30.129.141:8443/api/v1/nodes/multinode-894400-m02
	I0318 12:50:43.681136   11340 round_trippers.go:469] Request Headers:
	I0318 12:50:43.681136   11340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:50:43.681136   11340 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:50:43.687466   11340 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0318 12:50:43.688465   11340 round_trippers.go:577] Response Headers:
	I0318 12:50:43.688465   11340 round_trippers.go:580]     Audit-Id: 68bb430e-2e26-41f8-a0c9-014d98e75c05
	I0318 12:50:43.688512   11340 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:50:43.688512   11340 round_trippers.go:580]     Content-Type: application/json
	I0318 12:50:43.688512   11340 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 12:50:43.688512   11340 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 12:50:43.688512   11340 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:50:43 GMT
	I0318 12:50:43.688868   11340 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"714c6de4-85c9-4721-aea6-ad8f2be4e14c","resourceVersion":"600","creationTimestamp":"2024-03-18T12:50:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T12_50_35_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:50:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3128 chars]
	I0318 12:50:44.167925   11340 round_trippers.go:463] GET https://172.30.129.141:8443/api/v1/nodes/multinode-894400-m02
	I0318 12:50:44.168013   11340 round_trippers.go:469] Request Headers:
	I0318 12:50:44.168013   11340 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:50:44.168013   11340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:50:44.171522   11340 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:50:44.171868   11340 round_trippers.go:577] Response Headers:
	I0318 12:50:44.171868   11340 round_trippers.go:580]     Audit-Id: bdfdfdce-9184-499f-b14c-d4cb773ac1cf
	I0318 12:50:44.171868   11340 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:50:44.171868   11340 round_trippers.go:580]     Content-Type: application/json
	I0318 12:50:44.171868   11340 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 12:50:44.171868   11340 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 12:50:44.171868   11340 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:50:44 GMT
	I0318 12:50:44.172119   11340 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"714c6de4-85c9-4721-aea6-ad8f2be4e14c","resourceVersion":"600","creationTimestamp":"2024-03-18T12:50:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T12_50_35_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:50:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3128 chars]
	I0318 12:50:44.676216   11340 round_trippers.go:463] GET https://172.30.129.141:8443/api/v1/nodes/multinode-894400-m02
	I0318 12:50:44.676439   11340 round_trippers.go:469] Request Headers:
	I0318 12:50:44.676439   11340 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:50:44.676439   11340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:50:44.680978   11340 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:50:44.680978   11340 round_trippers.go:577] Response Headers:
	I0318 12:50:44.680978   11340 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:50:44.680978   11340 round_trippers.go:580]     Content-Type: application/json
	I0318 12:50:44.680978   11340 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 12:50:44.680978   11340 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 12:50:44.680978   11340 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:50:44 GMT
	I0318 12:50:44.680978   11340 round_trippers.go:580]     Audit-Id: 904e65b5-d71c-4699-81b6-bbb193462ba9
	I0318 12:50:44.680978   11340 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"714c6de4-85c9-4721-aea6-ad8f2be4e14c","resourceVersion":"615","creationTimestamp":"2024-03-18T12:50:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T12_50_35_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:50:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3397 chars]
	I0318 12:50:44.681978   11340 node_ready.go:53] node "multinode-894400-m02" has status "Ready":"False"
	I0318 12:50:45.167260   11340 round_trippers.go:463] GET https://172.30.129.141:8443/api/v1/nodes/multinode-894400-m02
	I0318 12:50:45.167447   11340 round_trippers.go:469] Request Headers:
	I0318 12:50:45.167447   11340 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:50:45.167447   11340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:50:45.171045   11340 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:50:45.171045   11340 round_trippers.go:577] Response Headers:
	I0318 12:50:45.171045   11340 round_trippers.go:580]     Content-Type: application/json
	I0318 12:50:45.171045   11340 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 12:50:45.171045   11340 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 12:50:45.171045   11340 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:50:45 GMT
	I0318 12:50:45.171045   11340 round_trippers.go:580]     Audit-Id: 46753324-34aa-4ec3-8b57-b9beac744cfc
	I0318 12:50:45.171045   11340 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:50:45.171581   11340 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"714c6de4-85c9-4721-aea6-ad8f2be4e14c","resourceVersion":"615","creationTimestamp":"2024-03-18T12:50:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T12_50_35_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:50:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3397 chars]
	I0318 12:50:45.675639   11340 round_trippers.go:463] GET https://172.30.129.141:8443/api/v1/nodes/multinode-894400-m02
	I0318 12:50:45.675693   11340 round_trippers.go:469] Request Headers:
	I0318 12:50:45.675693   11340 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:50:45.675693   11340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:50:45.679278   11340 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:50:45.679599   11340 round_trippers.go:577] Response Headers:
	I0318 12:50:45.679599   11340 round_trippers.go:580]     Audit-Id: c62718f7-fac5-4b58-838d-385b7040c5e3
	I0318 12:50:45.679599   11340 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:50:45.679599   11340 round_trippers.go:580]     Content-Type: application/json
	I0318 12:50:45.679599   11340 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 12:50:45.679599   11340 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 12:50:45.679599   11340 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:50:45 GMT
	I0318 12:50:45.680045   11340 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"714c6de4-85c9-4721-aea6-ad8f2be4e14c","resourceVersion":"615","creationTimestamp":"2024-03-18T12:50:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T12_50_35_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:50:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3397 chars]
	I0318 12:50:46.168646   11340 round_trippers.go:463] GET https://172.30.129.141:8443/api/v1/nodes/multinode-894400-m02
	I0318 12:50:46.168646   11340 round_trippers.go:469] Request Headers:
	I0318 12:50:46.168646   11340 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:50:46.168646   11340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:50:46.172258   11340 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:50:46.172627   11340 round_trippers.go:577] Response Headers:
	I0318 12:50:46.172627   11340 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:50:46.172627   11340 round_trippers.go:580]     Content-Type: application/json
	I0318 12:50:46.172627   11340 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 12:50:46.172627   11340 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 12:50:46.172627   11340 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:50:46 GMT
	I0318 12:50:46.172627   11340 round_trippers.go:580]     Audit-Id: b5473ccc-fe2c-4feb-b75b-13981fa4cacb
	I0318 12:50:46.172627   11340 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"714c6de4-85c9-4721-aea6-ad8f2be4e14c","resourceVersion":"615","creationTimestamp":"2024-03-18T12:50:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T12_50_35_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:50:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3397 chars]
	I0318 12:50:46.677391   11340 round_trippers.go:463] GET https://172.30.129.141:8443/api/v1/nodes/multinode-894400-m02
	I0318 12:50:46.677614   11340 round_trippers.go:469] Request Headers:
	I0318 12:50:46.677614   11340 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:50:46.677614   11340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:50:46.687679   11340 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0318 12:50:46.687837   11340 round_trippers.go:577] Response Headers:
	I0318 12:50:46.687837   11340 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:50:46.687837   11340 round_trippers.go:580]     Content-Type: application/json
	I0318 12:50:46.687837   11340 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 12:50:46.687837   11340 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 12:50:46.687837   11340 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:50:46 GMT
	I0318 12:50:46.687837   11340 round_trippers.go:580]     Audit-Id: 8f563d19-9115-493d-8a90-b8b068bdfb66
	I0318 12:50:46.688127   11340 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"714c6de4-85c9-4721-aea6-ad8f2be4e14c","resourceVersion":"615","creationTimestamp":"2024-03-18T12:50:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T12_50_35_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:50:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3397 chars]
	I0318 12:50:46.688651   11340 node_ready.go:53] node "multinode-894400-m02" has status "Ready":"False"
	I0318 12:50:47.168424   11340 round_trippers.go:463] GET https://172.30.129.141:8443/api/v1/nodes/multinode-894400-m02
	I0318 12:50:47.168424   11340 round_trippers.go:469] Request Headers:
	I0318 12:50:47.168424   11340 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:50:47.168424   11340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:50:47.172009   11340 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:50:47.172898   11340 round_trippers.go:577] Response Headers:
	I0318 12:50:47.172898   11340 round_trippers.go:580]     Audit-Id: 9c917dca-5414-46fe-9387-51205c7e9df0
	I0318 12:50:47.172898   11340 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:50:47.172898   11340 round_trippers.go:580]     Content-Type: application/json
	I0318 12:50:47.172898   11340 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 12:50:47.172898   11340 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 12:50:47.172898   11340 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:50:47 GMT
	I0318 12:50:47.173226   11340 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"714c6de4-85c9-4721-aea6-ad8f2be4e14c","resourceVersion":"615","creationTimestamp":"2024-03-18T12:50:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T12_50_35_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:50:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3397 chars]
	I0318 12:50:47.675435   11340 round_trippers.go:463] GET https://172.30.129.141:8443/api/v1/nodes/multinode-894400-m02
	I0318 12:50:47.675435   11340 round_trippers.go:469] Request Headers:
	I0318 12:50:47.675435   11340 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:50:47.675435   11340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:50:47.680057   11340 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:50:47.680057   11340 round_trippers.go:577] Response Headers:
	I0318 12:50:47.680057   11340 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 12:50:47.680121   11340 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:50:47 GMT
	I0318 12:50:47.680121   11340 round_trippers.go:580]     Audit-Id: fc1c3453-eb30-4c24-a32a-d145f7a50db8
	I0318 12:50:47.680121   11340 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:50:47.680121   11340 round_trippers.go:580]     Content-Type: application/json
	I0318 12:50:47.680121   11340 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 12:50:47.680396   11340 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"714c6de4-85c9-4721-aea6-ad8f2be4e14c","resourceVersion":"615","creationTimestamp":"2024-03-18T12:50:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T12_50_35_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:50:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3397 chars]
	I0318 12:50:48.181366   11340 round_trippers.go:463] GET https://172.30.129.141:8443/api/v1/nodes/multinode-894400-m02
	I0318 12:50:48.181680   11340 round_trippers.go:469] Request Headers:
	I0318 12:50:48.181680   11340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:50:48.181680   11340 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:50:48.403610   11340 round_trippers.go:574] Response Status: 200 OK in 221 milliseconds
	I0318 12:50:48.404187   11340 round_trippers.go:577] Response Headers:
	I0318 12:50:48.404187   11340 round_trippers.go:580]     Content-Type: application/json
	I0318 12:50:48.404187   11340 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 12:50:48.404187   11340 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 12:50:48.404187   11340 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:50:48 GMT
	I0318 12:50:48.404187   11340 round_trippers.go:580]     Audit-Id: f725b36a-6b26-4b95-8389-a5f54a057e95
	I0318 12:50:48.404187   11340 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:50:48.404451   11340 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"714c6de4-85c9-4721-aea6-ad8f2be4e14c","resourceVersion":"615","creationTimestamp":"2024-03-18T12:50:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T12_50_35_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:50:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3397 chars]
	I0318 12:50:48.668894   11340 round_trippers.go:463] GET https://172.30.129.141:8443/api/v1/nodes/multinode-894400-m02
	I0318 12:50:48.668894   11340 round_trippers.go:469] Request Headers:
	I0318 12:50:48.668894   11340 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:50:48.668894   11340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:50:48.671770   11340 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 12:50:48.671770   11340 round_trippers.go:577] Response Headers:
	I0318 12:50:48.671770   11340 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:50:48.671770   11340 round_trippers.go:580]     Content-Type: application/json
	I0318 12:50:48.671770   11340 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 12:50:48.671770   11340 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 12:50:48.671770   11340 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:50:48 GMT
	I0318 12:50:48.671770   11340 round_trippers.go:580]     Audit-Id: 3be690c1-e641-4a6a-96b2-127eb2c32283
	I0318 12:50:48.673307   11340 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"714c6de4-85c9-4721-aea6-ad8f2be4e14c","resourceVersion":"615","creationTimestamp":"2024-03-18T12:50:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T12_50_35_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:50:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3397 chars]
	I0318 12:50:49.170422   11340 round_trippers.go:463] GET https://172.30.129.141:8443/api/v1/nodes/multinode-894400-m02
	I0318 12:50:49.170422   11340 round_trippers.go:469] Request Headers:
	I0318 12:50:49.170422   11340 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:50:49.170422   11340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:50:49.175026   11340 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:50:49.175026   11340 round_trippers.go:577] Response Headers:
	I0318 12:50:49.175026   11340 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 12:50:49.175026   11340 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:50:49 GMT
	I0318 12:50:49.175026   11340 round_trippers.go:580]     Audit-Id: da15054b-aefb-4ee5-ab7a-f8d60c1b962c
	I0318 12:50:49.175026   11340 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:50:49.175026   11340 round_trippers.go:580]     Content-Type: application/json
	I0318 12:50:49.175026   11340 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 12:50:49.175639   11340 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"714c6de4-85c9-4721-aea6-ad8f2be4e14c","resourceVersion":"615","creationTimestamp":"2024-03-18T12:50:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T12_50_35_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:50:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3397 chars]
	I0318 12:50:49.175903   11340 node_ready.go:53] node "multinode-894400-m02" has status "Ready":"False"
	I0318 12:50:49.673158   11340 round_trippers.go:463] GET https://172.30.129.141:8443/api/v1/nodes/multinode-894400-m02
	I0318 12:50:49.673403   11340 round_trippers.go:469] Request Headers:
	I0318 12:50:49.673513   11340 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:50:49.673513   11340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:50:49.675795   11340 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 12:50:49.676459   11340 round_trippers.go:577] Response Headers:
	I0318 12:50:49.676459   11340 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 12:50:49.676459   11340 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:50:49 GMT
	I0318 12:50:49.676459   11340 round_trippers.go:580]     Audit-Id: ee2a47f9-1424-49b7-88fd-968dcb3a3960
	I0318 12:50:49.676459   11340 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:50:49.676459   11340 round_trippers.go:580]     Content-Type: application/json
	I0318 12:50:49.676459   11340 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 12:50:49.676663   11340 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"714c6de4-85c9-4721-aea6-ad8f2be4e14c","resourceVersion":"615","creationTimestamp":"2024-03-18T12:50:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T12_50_35_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:50:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3397 chars]
	I0318 12:50:50.172802   11340 round_trippers.go:463] GET https://172.30.129.141:8443/api/v1/nodes/multinode-894400-m02
	I0318 12:50:50.173017   11340 round_trippers.go:469] Request Headers:
	I0318 12:50:50.173017   11340 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:50:50.173017   11340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:50:50.177626   11340 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:50:50.177626   11340 round_trippers.go:577] Response Headers:
	I0318 12:50:50.177949   11340 round_trippers.go:580]     Audit-Id: fbf14a9e-20bc-4881-907f-a529bdaeb260
	I0318 12:50:50.177949   11340 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:50:50.177949   11340 round_trippers.go:580]     Content-Type: application/json
	I0318 12:50:50.177949   11340 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 12:50:50.177949   11340 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 12:50:50.177949   11340 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:50:50 GMT
	I0318 12:50:50.178259   11340 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"714c6de4-85c9-4721-aea6-ad8f2be4e14c","resourceVersion":"615","creationTimestamp":"2024-03-18T12:50:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T12_50_35_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:50:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3397 chars]
	I0318 12:50:50.670444   11340 round_trippers.go:463] GET https://172.30.129.141:8443/api/v1/nodes/multinode-894400-m02
	I0318 12:50:50.670700   11340 round_trippers.go:469] Request Headers:
	I0318 12:50:50.670700   11340 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:50:50.670700   11340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:50:50.675994   11340 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 12:50:50.675994   11340 round_trippers.go:577] Response Headers:
	I0318 12:50:50.675994   11340 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:50:50 GMT
	I0318 12:50:50.675994   11340 round_trippers.go:580]     Audit-Id: 8f0f1177-4a6e-40be-9284-3587a014094a
	I0318 12:50:50.675994   11340 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:50:50.675994   11340 round_trippers.go:580]     Content-Type: application/json
	I0318 12:50:50.676496   11340 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 12:50:50.676496   11340 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 12:50:50.676789   11340 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"714c6de4-85c9-4721-aea6-ad8f2be4e14c","resourceVersion":"615","creationTimestamp":"2024-03-18T12:50:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T12_50_35_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:50:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3397 chars]
	I0318 12:50:51.169576   11340 round_trippers.go:463] GET https://172.30.129.141:8443/api/v1/nodes/multinode-894400-m02
	I0318 12:50:51.169576   11340 round_trippers.go:469] Request Headers:
	I0318 12:50:51.169576   11340 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:50:51.169576   11340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:50:51.173238   11340 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:50:51.173238   11340 round_trippers.go:577] Response Headers:
	I0318 12:50:51.173238   11340 round_trippers.go:580]     Audit-Id: 74a0ff5d-da81-483a-a2a9-ba7e9d15c816
	I0318 12:50:51.173238   11340 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:50:51.173238   11340 round_trippers.go:580]     Content-Type: application/json
	I0318 12:50:51.174046   11340 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 12:50:51.174046   11340 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 12:50:51.174046   11340 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:50:51 GMT
	I0318 12:50:51.174289   11340 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"714c6de4-85c9-4721-aea6-ad8f2be4e14c","resourceVersion":"615","creationTimestamp":"2024-03-18T12:50:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T12_50_35_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:50:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3397 chars]
	I0318 12:50:51.669708   11340 round_trippers.go:463] GET https://172.30.129.141:8443/api/v1/nodes/multinode-894400-m02
	I0318 12:50:51.669919   11340 round_trippers.go:469] Request Headers:
	I0318 12:50:51.669919   11340 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:50:51.669919   11340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:50:51.673240   11340 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:50:51.673240   11340 round_trippers.go:577] Response Headers:
	I0318 12:50:51.674244   11340 round_trippers.go:580]     Content-Type: application/json
	I0318 12:50:51.674244   11340 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 12:50:51.674244   11340 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 12:50:51.674304   11340 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:50:51 GMT
	I0318 12:50:51.674304   11340 round_trippers.go:580]     Audit-Id: fc927b36-5808-4297-939e-0b5eb3cef4d8
	I0318 12:50:51.674333   11340 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:50:51.674536   11340 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"714c6de4-85c9-4721-aea6-ad8f2be4e14c","resourceVersion":"615","creationTimestamp":"2024-03-18T12:50:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T12_50_35_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:50:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3397 chars]
	I0318 12:50:51.675095   11340 node_ready.go:53] node "multinode-894400-m02" has status "Ready":"False"
	I0318 12:50:52.171831   11340 round_trippers.go:463] GET https://172.30.129.141:8443/api/v1/nodes/multinode-894400-m02
	I0318 12:50:52.171831   11340 round_trippers.go:469] Request Headers:
	I0318 12:50:52.171831   11340 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:50:52.171831   11340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:50:52.176687   11340 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:50:52.176963   11340 round_trippers.go:577] Response Headers:
	I0318 12:50:52.176963   11340 round_trippers.go:580]     Audit-Id: b1980eb8-d7d0-49a4-8f59-81d17ef83f9a
	I0318 12:50:52.176963   11340 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:50:52.176963   11340 round_trippers.go:580]     Content-Type: application/json
	I0318 12:50:52.176963   11340 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 12:50:52.176963   11340 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 12:50:52.176963   11340 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:50:52 GMT
	I0318 12:50:52.176963   11340 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"714c6de4-85c9-4721-aea6-ad8f2be4e14c","resourceVersion":"615","creationTimestamp":"2024-03-18T12:50:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T12_50_35_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:50:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3397 chars]
	I0318 12:50:52.673771   11340 round_trippers.go:463] GET https://172.30.129.141:8443/api/v1/nodes/multinode-894400-m02
	I0318 12:50:52.673771   11340 round_trippers.go:469] Request Headers:
	I0318 12:50:52.673771   11340 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:50:52.674015   11340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:50:52.676805   11340 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 12:50:52.677120   11340 round_trippers.go:577] Response Headers:
	I0318 12:50:52.677120   11340 round_trippers.go:580]     Content-Type: application/json
	I0318 12:50:52.677120   11340 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 12:50:52.677120   11340 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 12:50:52.677120   11340 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:50:52 GMT
	I0318 12:50:52.677120   11340 round_trippers.go:580]     Audit-Id: d845ae92-252b-4769-b948-0b449821431a
	I0318 12:50:52.677120   11340 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:50:52.677274   11340 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"714c6de4-85c9-4721-aea6-ad8f2be4e14c","resourceVersion":"630","creationTimestamp":"2024-03-18T12:50:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T12_50_35_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:50:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3263 chars]
	I0318 12:50:52.678127   11340 node_ready.go:49] node "multinode-894400-m02" has status "Ready":"True"
	I0318 12:50:52.678177   11340 node_ready.go:38] duration metric: took 17.0115309s for node "multinode-894400-m02" to be "Ready" ...
	I0318 12:50:52.678177   11340 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 12:50:52.678177   11340 round_trippers.go:463] GET https://172.30.129.141:8443/api/v1/namespaces/kube-system/pods
	I0318 12:50:52.678177   11340 round_trippers.go:469] Request Headers:
	I0318 12:50:52.678177   11340 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:50:52.678177   11340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:50:52.682991   11340 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:50:52.682991   11340 round_trippers.go:577] Response Headers:
	I0318 12:50:52.682991   11340 round_trippers.go:580]     Audit-Id: 75a9de15-5e1c-46a1-8c5c-98ae28f6a9df
	I0318 12:50:52.682991   11340 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:50:52.682991   11340 round_trippers.go:580]     Content-Type: application/json
	I0318 12:50:52.682991   11340 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 12:50:52.682991   11340 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 12:50:52.682991   11340 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:50:52 GMT
	I0318 12:50:52.687630   11340 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"631"},"items":[{"metadata":{"name":"coredns-5dd5756b68-456tm","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"1a018c55-846b-4dc2-992c-dc8fd82a6c67","resourceVersion":"439","creationTimestamp":"2024-03-18T12:47:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"24e9a67e-3fda-47e7-8dcb-794b1920d0f1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:47:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"24e9a67e-3fda-47e7-8dcb-794b1920d0f1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 67514 chars]
	I0318 12:50:52.691012   11340 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-456tm" in "kube-system" namespace to be "Ready" ...
	I0318 12:50:52.691239   11340 round_trippers.go:463] GET https://172.30.129.141:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-456tm
	I0318 12:50:52.691239   11340 round_trippers.go:469] Request Headers:
	I0318 12:50:52.691299   11340 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:50:52.691356   11340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:50:52.695488   11340 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:50:52.695488   11340 round_trippers.go:577] Response Headers:
	I0318 12:50:52.695488   11340 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:50:52 GMT
	I0318 12:50:52.695488   11340 round_trippers.go:580]     Audit-Id: add94b6f-4742-4e9f-ac76-169c5c8bd5c7
	I0318 12:50:52.695488   11340 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:50:52.695488   11340 round_trippers.go:580]     Content-Type: application/json
	I0318 12:50:52.695488   11340 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 12:50:52.696013   11340 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 12:50:52.696284   11340 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-456tm","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"1a018c55-846b-4dc2-992c-dc8fd82a6c67","resourceVersion":"439","creationTimestamp":"2024-03-18T12:47:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"24e9a67e-3fda-47e7-8dcb-794b1920d0f1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:47:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"24e9a67e-3fda-47e7-8dcb-794b1920d0f1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6284 chars]
	I0318 12:50:52.696764   11340 round_trippers.go:463] GET https://172.30.129.141:8443/api/v1/nodes/multinode-894400
	I0318 12:50:52.696764   11340 round_trippers.go:469] Request Headers:
	I0318 12:50:52.696764   11340 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:50:52.696764   11340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:50:52.699372   11340 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 12:50:52.699372   11340 round_trippers.go:577] Response Headers:
	I0318 12:50:52.699372   11340 round_trippers.go:580]     Audit-Id: 83904898-fa4b-45d8-b39b-fb905d8b54db
	I0318 12:50:52.699372   11340 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:50:52.699372   11340 round_trippers.go:580]     Content-Type: application/json
	I0318 12:50:52.699372   11340 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 12:50:52.699372   11340 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 12:50:52.699760   11340 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:50:52 GMT
	I0318 12:50:52.700040   11340 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"449","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
	I0318 12:50:52.700519   11340 pod_ready.go:92] pod "coredns-5dd5756b68-456tm" in "kube-system" namespace has status "Ready":"True"
	I0318 12:50:52.700519   11340 pod_ready.go:81] duration metric: took 9.459ms for pod "coredns-5dd5756b68-456tm" in "kube-system" namespace to be "Ready" ...
	I0318 12:50:52.700630   11340 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-894400" in "kube-system" namespace to be "Ready" ...
	I0318 12:50:52.700724   11340 round_trippers.go:463] GET https://172.30.129.141:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-894400
	I0318 12:50:52.700780   11340 round_trippers.go:469] Request Headers:
	I0318 12:50:52.700796   11340 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:50:52.700796   11340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:50:52.703415   11340 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 12:50:52.703415   11340 round_trippers.go:577] Response Headers:
	I0318 12:50:52.703415   11340 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 12:50:52.703415   11340 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 12:50:52.703415   11340 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:50:52 GMT
	I0318 12:50:52.703415   11340 round_trippers.go:580]     Audit-Id: 2ace87fc-a0c2-4415-9e2f-0a4507ba7e19
	I0318 12:50:52.703415   11340 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:50:52.703415   11340 round_trippers.go:580]     Content-Type: application/json
	I0318 12:50:52.704720   11340 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-894400","namespace":"kube-system","uid":"672a85d9-7526-4870-a33a-eac509ef3c3f","resourceVersion":"293","creationTimestamp":"2024-03-18T12:47:26Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.30.129.141:2379","kubernetes.io/config.hash":"c396fd459c503d2e9464c73cc841d3d8","kubernetes.io/config.mirror":"c396fd459c503d2e9464c73cc841d3d8","kubernetes.io/config.seen":"2024-03-18T12:47:20.228465690Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:47:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 5872 chars]
	I0318 12:50:52.705121   11340 round_trippers.go:463] GET https://172.30.129.141:8443/api/v1/nodes/multinode-894400
	I0318 12:50:52.705121   11340 round_trippers.go:469] Request Headers:
	I0318 12:50:52.705121   11340 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:50:52.705121   11340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:50:52.707693   11340 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 12:50:52.707693   11340 round_trippers.go:577] Response Headers:
	I0318 12:50:52.707693   11340 round_trippers.go:580]     Audit-Id: e9265323-f94f-47e1-9965-9ac4ffc3aa2b
	I0318 12:50:52.707693   11340 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:50:52.707693   11340 round_trippers.go:580]     Content-Type: application/json
	I0318 12:50:52.707693   11340 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 12:50:52.707693   11340 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 12:50:52.707693   11340 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:50:52 GMT
	I0318 12:50:52.708262   11340 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"449","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
	I0318 12:50:52.708262   11340 pod_ready.go:92] pod "etcd-multinode-894400" in "kube-system" namespace has status "Ready":"True"
	I0318 12:50:52.708262   11340 pod_ready.go:81] duration metric: took 7.6318ms for pod "etcd-multinode-894400" in "kube-system" namespace to be "Ready" ...
	I0318 12:50:52.708262   11340 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-894400" in "kube-system" namespace to be "Ready" ...
	I0318 12:50:52.708788   11340 round_trippers.go:463] GET https://172.30.129.141:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-894400
	I0318 12:50:52.708788   11340 round_trippers.go:469] Request Headers:
	I0318 12:50:52.708788   11340 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:50:52.708788   11340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:50:52.711822   11340 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 12:50:52.711886   11340 round_trippers.go:577] Response Headers:
	I0318 12:50:52.711886   11340 round_trippers.go:580]     Audit-Id: 67461a20-59bd-4e0f-8c63-4d0e4635d695
	I0318 12:50:52.711886   11340 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:50:52.711886   11340 round_trippers.go:580]     Content-Type: application/json
	I0318 12:50:52.711886   11340 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 12:50:52.711886   11340 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 12:50:52.711886   11340 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:50:52 GMT
	I0318 12:50:52.712321   11340 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-894400","namespace":"kube-system","uid":"62aca0ea-36b0-4841-9616-61448f45e04a","resourceVersion":"314","creationTimestamp":"2024-03-18T12:47:25Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.30.129.141:8443","kubernetes.io/config.hash":"decc1d942b4d81359bb79c0349ffe9bb","kubernetes.io/config.mirror":"decc1d942b4d81359bb79c0349ffe9bb","kubernetes.io/config.seen":"2024-03-18T12:47:20.228466989Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:47:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7408 chars]
	I0318 12:50:52.712817   11340 round_trippers.go:463] GET https://172.30.129.141:8443/api/v1/nodes/multinode-894400
	I0318 12:50:52.712817   11340 round_trippers.go:469] Request Headers:
	I0318 12:50:52.712817   11340 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:50:52.712817   11340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:50:52.715633   11340 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 12:50:52.715633   11340 round_trippers.go:577] Response Headers:
	I0318 12:50:52.715633   11340 round_trippers.go:580]     Content-Type: application/json
	I0318 12:50:52.715633   11340 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 12:50:52.715633   11340 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 12:50:52.715633   11340 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:50:52 GMT
	I0318 12:50:52.715633   11340 round_trippers.go:580]     Audit-Id: 3ea91f1a-104f-42eb-81c0-ee07e20b55ba
	I0318 12:50:52.715633   11340 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:50:52.715633   11340 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"449","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
	I0318 12:50:52.715633   11340 pod_ready.go:92] pod "kube-apiserver-multinode-894400" in "kube-system" namespace has status "Ready":"True"
	I0318 12:50:52.716640   11340 pod_ready.go:81] duration metric: took 8.3784ms for pod "kube-apiserver-multinode-894400" in "kube-system" namespace to be "Ready" ...
	I0318 12:50:52.716681   11340 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-894400" in "kube-system" namespace to be "Ready" ...
	I0318 12:50:52.716779   11340 round_trippers.go:463] GET https://172.30.129.141:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-894400
	I0318 12:50:52.716814   11340 round_trippers.go:469] Request Headers:
	I0318 12:50:52.716814   11340 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:50:52.716814   11340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:50:52.719093   11340 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 12:50:52.719093   11340 round_trippers.go:577] Response Headers:
	I0318 12:50:52.719093   11340 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 12:50:52.719093   11340 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 12:50:52.719093   11340 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:50:52 GMT
	I0318 12:50:52.719093   11340 round_trippers.go:580]     Audit-Id: 4c11616e-b6f6-43c0-bc7f-eeee83241879
	I0318 12:50:52.719093   11340 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:50:52.719093   11340 round_trippers.go:580]     Content-Type: application/json
	I0318 12:50:52.719093   11340 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-894400","namespace":"kube-system","uid":"4ad5fc15-53ba-4ebb-9a63-b8572cd9c834","resourceVersion":"295","creationTimestamp":"2024-03-18T12:47:26Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"d340aced56ba169ecac1e3ac58ad57fe","kubernetes.io/config.mirror":"d340aced56ba169ecac1e3ac58ad57fe","kubernetes.io/config.seen":"2024-03-18T12:47:20.228444892Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:47:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6973 chars]
	I0318 12:50:52.720229   11340 round_trippers.go:463] GET https://172.30.129.141:8443/api/v1/nodes/multinode-894400
	I0318 12:50:52.720294   11340 round_trippers.go:469] Request Headers:
	I0318 12:50:52.720294   11340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:50:52.720294   11340 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:50:52.723048   11340 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 12:50:52.723048   11340 round_trippers.go:577] Response Headers:
	I0318 12:50:52.723048   11340 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:50:52.723048   11340 round_trippers.go:580]     Content-Type: application/json
	I0318 12:50:52.723048   11340 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 12:50:52.723263   11340 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 12:50:52.723263   11340 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:50:52 GMT
	I0318 12:50:52.723263   11340 round_trippers.go:580]     Audit-Id: 4f5c0b07-be11-4a2e-b5c6-a7f94d7778a7
	I0318 12:50:52.723403   11340 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"449","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
	I0318 12:50:52.724119   11340 pod_ready.go:92] pod "kube-controller-manager-multinode-894400" in "kube-system" namespace has status "Ready":"True"
	I0318 12:50:52.724189   11340 pod_ready.go:81] duration metric: took 7.5082ms for pod "kube-controller-manager-multinode-894400" in "kube-system" namespace to be "Ready" ...
	I0318 12:50:52.724189   11340 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-8bdmn" in "kube-system" namespace to be "Ready" ...
	I0318 12:50:52.875507   11340 request.go:629] Waited for 151.2517ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.129.141:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8bdmn
	I0318 12:50:52.875932   11340 round_trippers.go:463] GET https://172.30.129.141:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8bdmn
	I0318 12:50:52.875932   11340 round_trippers.go:469] Request Headers:
	I0318 12:50:52.875932   11340 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:50:52.875932   11340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:50:52.882404   11340 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0318 12:50:52.882567   11340 round_trippers.go:577] Response Headers:
	I0318 12:50:52.882567   11340 round_trippers.go:580]     Audit-Id: c0e6ed0e-5811-46a7-bdc5-ef91f7a3d6a9
	I0318 12:50:52.882567   11340 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:50:52.882567   11340 round_trippers.go:580]     Content-Type: application/json
	I0318 12:50:52.882567   11340 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 12:50:52.882671   11340 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 12:50:52.882671   11340 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:50:52 GMT
	I0318 12:50:52.883619   11340 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-8bdmn","generateName":"kube-proxy-","namespace":"kube-system","uid":"5c266b8a-9665-4365-93c6-2b5f1699d3ef","resourceVersion":"616","creationTimestamp":"2024-03-18T12:50:34Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"da714af0-88aa-4fc4-b73d-4de837f06114","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:50:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"da714af0-88aa-4fc4-b73d-4de837f06114\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5541 chars]
	I0318 12:50:53.077828   11340 request.go:629] Waited for 193.3514ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.129.141:8443/api/v1/nodes/multinode-894400-m02
	I0318 12:50:53.077828   11340 round_trippers.go:463] GET https://172.30.129.141:8443/api/v1/nodes/multinode-894400-m02
	I0318 12:50:53.077828   11340 round_trippers.go:469] Request Headers:
	I0318 12:50:53.078086   11340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:50:53.078086   11340 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:50:53.084208   11340 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 12:50:53.084208   11340 round_trippers.go:577] Response Headers:
	I0318 12:50:53.084208   11340 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 12:50:53.084208   11340 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:50:53 GMT
	I0318 12:50:53.084208   11340 round_trippers.go:580]     Audit-Id: 39555712-808e-4444-a09e-eb30fcddecc7
	I0318 12:50:53.084208   11340 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:50:53.084208   11340 round_trippers.go:580]     Content-Type: application/json
	I0318 12:50:53.084208   11340 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 12:50:53.084497   11340 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"714c6de4-85c9-4721-aea6-ad8f2be4e14c","resourceVersion":"630","creationTimestamp":"2024-03-18T12:50:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T12_50_35_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:50:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3263 chars]
	I0318 12:50:53.085077   11340 pod_ready.go:92] pod "kube-proxy-8bdmn" in "kube-system" namespace has status "Ready":"True"
	I0318 12:50:53.085077   11340 pod_ready.go:81] duration metric: took 360.8857ms for pod "kube-proxy-8bdmn" in "kube-system" namespace to be "Ready" ...
	I0318 12:50:53.085077   11340 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-mc5tv" in "kube-system" namespace to be "Ready" ...
	I0318 12:50:53.280651   11340 request.go:629] Waited for 195.1907ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.129.141:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mc5tv
	I0318 12:50:53.280908   11340 round_trippers.go:463] GET https://172.30.129.141:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mc5tv
	I0318 12:50:53.280908   11340 round_trippers.go:469] Request Headers:
	I0318 12:50:53.280908   11340 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:50:53.280908   11340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:50:53.284230   11340 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:50:53.284230   11340 round_trippers.go:577] Response Headers:
	I0318 12:50:53.284230   11340 round_trippers.go:580]     Audit-Id: 4fda5cc5-1efb-456a-8a7d-159b373844a7
	I0318 12:50:53.284230   11340 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:50:53.284589   11340 round_trippers.go:580]     Content-Type: application/json
	I0318 12:50:53.284589   11340 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 12:50:53.284589   11340 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 12:50:53.284589   11340 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:50:53 GMT
	I0318 12:50:53.285287   11340 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-mc5tv","generateName":"kube-proxy-","namespace":"kube-system","uid":"0afe25f8-cbd6-412b-8698-7b547d1d49ca","resourceVersion":"398","creationTimestamp":"2024-03-18T12:47:41Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"da714af0-88aa-4fc4-b73d-4de837f06114","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:47:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"da714af0-88aa-4fc4-b73d-4de837f06114\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5537 chars]
	I0318 12:50:53.483216   11340 request.go:629] Waited for 197.2132ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.129.141:8443/api/v1/nodes/multinode-894400
	I0318 12:50:53.483458   11340 round_trippers.go:463] GET https://172.30.129.141:8443/api/v1/nodes/multinode-894400
	I0318 12:50:53.483458   11340 round_trippers.go:469] Request Headers:
	I0318 12:50:53.483458   11340 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:50:53.483458   11340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:50:53.489544   11340 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0318 12:50:53.489544   11340 round_trippers.go:577] Response Headers:
	I0318 12:50:53.489544   11340 round_trippers.go:580]     Audit-Id: 6afd72f6-9f49-420f-a906-81b3ec7c11a5
	I0318 12:50:53.489544   11340 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:50:53.489544   11340 round_trippers.go:580]     Content-Type: application/json
	I0318 12:50:53.489544   11340 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 12:50:53.489544   11340 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 12:50:53.489544   11340 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:50:53 GMT
	I0318 12:50:53.490344   11340 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"449","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
	I0318 12:50:53.491029   11340 pod_ready.go:92] pod "kube-proxy-mc5tv" in "kube-system" namespace has status "Ready":"True"
	I0318 12:50:53.491029   11340 pod_ready.go:81] duration metric: took 405.9488ms for pod "kube-proxy-mc5tv" in "kube-system" namespace to be "Ready" ...
	I0318 12:50:53.491029   11340 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-894400" in "kube-system" namespace to be "Ready" ...
	I0318 12:50:53.685351   11340 request.go:629] Waited for 194.3208ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.129.141:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-894400
	I0318 12:50:53.685351   11340 round_trippers.go:463] GET https://172.30.129.141:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-894400
	I0318 12:50:53.685351   11340 round_trippers.go:469] Request Headers:
	I0318 12:50:53.685351   11340 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:50:53.685351   11340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:50:53.688198   11340 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 12:50:53.689183   11340 round_trippers.go:577] Response Headers:
	I0318 12:50:53.689183   11340 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:50:53.689183   11340 round_trippers.go:580]     Content-Type: application/json
	I0318 12:50:53.689237   11340 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 12:50:53.689237   11340 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 12:50:53.689237   11340 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:50:53 GMT
	I0318 12:50:53.689237   11340 round_trippers.go:580]     Audit-Id: 3b572522-4cf3-42a2-9baa-2aa31aa38351
	I0318 12:50:53.689237   11340 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-894400","namespace":"kube-system","uid":"f47703ce-5a82-466e-ac8e-ef6b8cc07e6c","resourceVersion":"320","creationTimestamp":"2024-03-18T12:47:28Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"1c745e9b917877b1ff3c90ed02e9a79a","kubernetes.io/config.mirror":"1c745e9b917877b1ff3c90ed02e9a79a","kubernetes.io/config.seen":"2024-03-18T12:47:28.428225123Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:47:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4703 chars]
	I0318 12:50:53.886399   11340 request.go:629] Waited for 195.6339ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.129.141:8443/api/v1/nodes/multinode-894400
	I0318 12:50:53.886581   11340 round_trippers.go:463] GET https://172.30.129.141:8443/api/v1/nodes/multinode-894400
	I0318 12:50:53.886581   11340 round_trippers.go:469] Request Headers:
	I0318 12:50:53.886641   11340 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:50:53.886661   11340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:50:53.891983   11340 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 12:50:53.891983   11340 round_trippers.go:577] Response Headers:
	I0318 12:50:53.891983   11340 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 12:50:53.891983   11340 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 12:50:53.891983   11340 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:50:53 GMT
	I0318 12:50:53.891983   11340 round_trippers.go:580]     Audit-Id: 422685df-8d42-4f31-81c7-7a044b76e0de
	I0318 12:50:53.891983   11340 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:50:53.891983   11340 round_trippers.go:580]     Content-Type: application/json
	I0318 12:50:53.892254   11340 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"449","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
	I0318 12:50:53.892254   11340 pod_ready.go:92] pod "kube-scheduler-multinode-894400" in "kube-system" namespace has status "Ready":"True"
	I0318 12:50:53.892831   11340 pod_ready.go:81] duration metric: took 401.7992ms for pod "kube-scheduler-multinode-894400" in "kube-system" namespace to be "Ready" ...
	I0318 12:50:53.892831   11340 pod_ready.go:38] duration metric: took 1.2146451s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 12:50:53.892893   11340 system_svc.go:44] waiting for kubelet service to be running ....
	I0318 12:50:53.905640   11340 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 12:50:53.930094   11340 system_svc.go:56] duration metric: took 37.201ms WaitForService to wait for kubelet
	I0318 12:50:53.930094   11340 kubeadm.go:576] duration metric: took 18.5105441s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 12:50:53.930094   11340 node_conditions.go:102] verifying NodePressure condition ...
	I0318 12:50:54.088747   11340 request.go:629] Waited for 158.6518ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.129.141:8443/api/v1/nodes
	I0318 12:50:54.088747   11340 round_trippers.go:463] GET https://172.30.129.141:8443/api/v1/nodes
	I0318 12:50:54.088968   11340 round_trippers.go:469] Request Headers:
	I0318 12:50:54.088968   11340 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:50:54.088968   11340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 12:50:54.092554   11340 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:50:54.093250   11340 round_trippers.go:577] Response Headers:
	I0318 12:50:54.093250   11340 round_trippers.go:580]     Audit-Id: 820415dd-6f0a-48a5-b737-ae8ce65c3acf
	I0318 12:50:54.093400   11340 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 12:50:54.093492   11340 round_trippers.go:580]     Content-Type: application/json
	I0318 12:50:54.093492   11340 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 12:50:54.093582   11340 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 12:50:54.093582   11340 round_trippers.go:580]     Date: Mon, 18 Mar 2024 12:50:54 GMT
	I0318 12:50:54.093969   11340 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"632"},"items":[{"metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"449","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 9267 chars]
	I0318 12:50:54.094151   11340 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0318 12:50:54.094681   11340 node_conditions.go:123] node cpu capacity is 2
	I0318 12:50:54.094681   11340 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0318 12:50:54.094681   11340 node_conditions.go:123] node cpu capacity is 2
	I0318 12:50:54.094681   11340 node_conditions.go:105] duration metric: took 164.5863ms to run NodePressure ...
	I0318 12:50:54.094681   11340 start.go:240] waiting for startup goroutines ...
	I0318 12:50:54.094681   11340 start.go:254] writing updated cluster config ...
	I0318 12:50:54.108367   11340 ssh_runner.go:195] Run: rm -f paused
	I0318 12:50:54.249260   11340 start.go:600] kubectl: 1.29.3, cluster: 1.28.4 (minor skew: 1)
	I0318 12:50:54.252863   11340 out.go:177] * Done! kubectl is now configured to use "multinode-894400" cluster and "default" namespace by default
	
	
	==> Docker <==
	Mar 18 12:47:53 multinode-894400 dockerd[1339]: time="2024-03-18T12:47:53.568794625Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 18 12:47:53 multinode-894400 dockerd[1339]: time="2024-03-18T12:47:53.571515366Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 18 12:47:53 multinode-894400 dockerd[1339]: time="2024-03-18T12:47:53.571662563Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 18 12:47:53 multinode-894400 dockerd[1339]: time="2024-03-18T12:47:53.571679862Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 18 12:47:53 multinode-894400 dockerd[1339]: time="2024-03-18T12:47:53.571858758Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 18 12:47:53 multinode-894400 cri-dockerd[1227]: time="2024-03-18T12:47:53Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/265b39e386cfa82eef9715aba314fbf8a9292776816cf86ed4099004698cb320/resolv.conf as [nameserver 172.30.128.1]"
	Mar 18 12:47:53 multinode-894400 cri-dockerd[1227]: time="2024-03-18T12:47:53Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/d001e299e996b74f090c73f4e98605ef0d323a96826d23424e724ca8e7fe466a/resolv.conf as [nameserver 172.30.128.1]"
	Mar 18 12:47:53 multinode-894400 dockerd[1339]: time="2024-03-18T12:47:53.877102898Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 18 12:47:53 multinode-894400 dockerd[1339]: time="2024-03-18T12:47:53.877258595Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 18 12:47:53 multinode-894400 dockerd[1339]: time="2024-03-18T12:47:53.877325393Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 18 12:47:53 multinode-894400 dockerd[1339]: time="2024-03-18T12:47:53.877573988Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 18 12:47:54 multinode-894400 dockerd[1339]: time="2024-03-18T12:47:54.039513225Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 18 12:47:54 multinode-894400 dockerd[1339]: time="2024-03-18T12:47:54.039595810Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 18 12:47:54 multinode-894400 dockerd[1339]: time="2024-03-18T12:47:54.039776377Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 18 12:47:54 multinode-894400 dockerd[1339]: time="2024-03-18T12:47:54.039887657Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 18 12:51:18 multinode-894400 dockerd[1339]: time="2024-03-18T12:51:18.573901075Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 18 12:51:18 multinode-894400 dockerd[1339]: time="2024-03-18T12:51:18.582892741Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 18 12:51:18 multinode-894400 dockerd[1339]: time="2024-03-18T12:51:18.583041047Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 18 12:51:18 multinode-894400 dockerd[1339]: time="2024-03-18T12:51:18.583998886Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 18 12:51:18 multinode-894400 cri-dockerd[1227]: time="2024-03-18T12:51:18Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a23c1189be7c39f22c8a59ba41b198519894c9783ecad8dcda79fb8a76f05254/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Mar 18 12:51:20 multinode-894400 cri-dockerd[1227]: time="2024-03-18T12:51:20Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	Mar 18 12:51:20 multinode-894400 dockerd[1339]: time="2024-03-18T12:51:20.297091036Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 18 12:51:20 multinode-894400 dockerd[1339]: time="2024-03-18T12:51:20.297309738Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 18 12:51:20 multinode-894400 dockerd[1339]: time="2024-03-18T12:51:20.297386739Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 18 12:51:20 multinode-894400 dockerd[1339]: time="2024-03-18T12:51:20.299791363Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	dd031b5cb1e85       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   47 seconds ago      Running             busybox                   0                   a23c1189be7c3       busybox-5b5d89c9d6-c2997
	693a64f7472fd       ead0a4a53df89                                                                                         4 minutes ago       Running             coredns                   0                   d001e299e996b       coredns-5dd5756b68-456tm
	a2c499223090c       6e38f40d628db                                                                                         4 minutes ago       Running             storage-provisioner       0                   265b39e386cfa       storage-provisioner
	c4d7018ad23a7       kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988              4 minutes ago       Running             kindnet-cni               0                   a47b1fb60692c       kindnet-hhsxh
	9335855aab63d       83f6cc407eed8                                                                                         4 minutes ago       Running             kube-proxy                0                   60e9cd749c8f6       kube-proxy-mc5tv
	e4d42739ce0e9       e3db313c6dbc0                                                                                         4 minutes ago       Running             kube-scheduler            0                   82710777e700c       kube-scheduler-multinode-894400
	7aa5cf4ec378e       d058aa5ab969c                                                                                         4 minutes ago       Running             kube-controller-manager   0                   5485f509825d9       kube-controller-manager-multinode-894400
	c51f768a2f642       73deb9a3f7025                                                                                         4 minutes ago       Running             etcd                      0                   220884cbf1f5b       etcd-multinode-894400
	56d1819beb10e       7fe0e6f37db33                                                                                         4 minutes ago       Running             kube-apiserver            0                   acffce2e73842       kube-apiserver-multinode-894400
	
	
	==> coredns [693a64f7472f] <==
	[INFO] 10.244.1.2:44081 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000089501s
	[INFO] 10.244.0.3:52580 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000182502s
	[INFO] 10.244.0.3:60982 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.0000727s
	[INFO] 10.244.0.3:53685 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000081s
	[INFO] 10.244.0.3:38117 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000127701s
	[INFO] 10.244.0.3:38455 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000117101s
	[INFO] 10.244.0.3:50629 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000121702s
	[INFO] 10.244.0.3:33301 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0000487s
	[INFO] 10.244.0.3:38091 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000138402s
	[INFO] 10.244.1.2:43364 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000192902s
	[INFO] 10.244.1.2:42609 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000060701s
	[INFO] 10.244.1.2:36443 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000051301s
	[INFO] 10.244.1.2:56414 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0000526s
	[INFO] 10.244.0.3:50774 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000137201s
	[INFO] 10.244.0.3:43237 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000196902s
	[INFO] 10.244.0.3:38831 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000059901s
	[INFO] 10.244.0.3:56163 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000122801s
	[INFO] 10.244.1.2:58305 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000209602s
	[INFO] 10.244.1.2:58291 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000151202s
	[INFO] 10.244.1.2:33227 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000184302s
	[INFO] 10.244.1.2:58179 - 5 "PTR IN 1.128.30.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000152102s
	[INFO] 10.244.0.3:46943 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000104101s
	[INFO] 10.244.0.3:58018 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000107001s
	[INFO] 10.244.0.3:35353 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000119601s
	[INFO] 10.244.0.3:58763 - 5 "PTR IN 1.128.30.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000075701s
	
	
	==> describe nodes <==
	Name:               multinode-894400
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-894400
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=16bdcbec856cf730004e5bed78d1b7625f13388a
	                    minikube.k8s.io/name=multinode-894400
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_18T12_47_29_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 18 Mar 2024 12:47:24 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-894400
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 18 Mar 2024 12:52:04 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 18 Mar 2024 12:51:35 +0000   Mon, 18 Mar 2024 12:47:23 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 18 Mar 2024 12:51:35 +0000   Mon, 18 Mar 2024 12:47:23 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 18 Mar 2024 12:51:35 +0000   Mon, 18 Mar 2024 12:47:23 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 18 Mar 2024 12:51:35 +0000   Mon, 18 Mar 2024 12:47:52 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.30.129.141
	  Hostname:    multinode-894400
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164268Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164268Ki
	  pods:               110
	System Info:
	  Machine ID:                 6aa5c8af908b4ffaacd7ee67b72845ba
	  System UUID:                5c78c013-e4e8-1041-99c8-95cd760ef34f
	  Boot ID:                    c0576fd6-ce1a-4216-ae92-6fa648fcb27c
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://25.0.4
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-c2997                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         51s
	  kube-system                 coredns-5dd5756b68-456tm                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m26s
	  kube-system                 etcd-multinode-894400                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m42s
	  kube-system                 kindnet-hhsxh                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m27s
	  kube-system                 kube-apiserver-multinode-894400             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m43s
	  kube-system                 kube-controller-manager-multinode-894400    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m42s
	  kube-system                 kube-proxy-mc5tv                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m27s
	  kube-system                 kube-scheduler-multinode-894400             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m40s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m19s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m25s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  4m48s (x8 over 4m48s)  kubelet          Node multinode-894400 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m48s (x8 over 4m48s)  kubelet          Node multinode-894400 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m48s (x7 over 4m48s)  kubelet          Node multinode-894400 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m48s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 4m40s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  4m40s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m40s                  kubelet          Node multinode-894400 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m40s                  kubelet          Node multinode-894400 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m40s                  kubelet          Node multinode-894400 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           4m27s                  node-controller  Node multinode-894400 event: Registered Node multinode-894400 in Controller
	  Normal  NodeReady                4m16s                  kubelet          Node multinode-894400 status is now: NodeReady
	
	
	Name:               multinode-894400-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-894400-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=16bdcbec856cf730004e5bed78d1b7625f13388a
	                    minikube.k8s.io/name=multinode-894400
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_03_18T12_50_35_0700
	                    minikube.k8s.io/version=v1.32.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 18 Mar 2024 12:50:34 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-894400-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 18 Mar 2024 12:52:05 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 18 Mar 2024 12:51:35 +0000   Mon, 18 Mar 2024 12:50:34 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 18 Mar 2024 12:51:35 +0000   Mon, 18 Mar 2024 12:50:34 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 18 Mar 2024 12:51:35 +0000   Mon, 18 Mar 2024 12:50:34 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 18 Mar 2024 12:51:35 +0000   Mon, 18 Mar 2024 12:50:52 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.30.140.66
	  Hostname:    multinode-894400-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164268Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164268Ki
	  pods:               110
	System Info:
	  Machine ID:                 209753fe156d43e08ee40e815598ed17
	  System UUID:                fa19d46a-a3a2-9249-8c21-1edbfcedff01
	  Boot ID:                    0e15b7cf-29d6-40f7-ad78-fb04b10bea99
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://25.0.4
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-8btgf    0 (0%)        0 (0%)      0 (0%)           0 (0%)         51s
	  kube-system                 kindnet-k5lpg               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      94s
	  kube-system                 kube-proxy-8bdmn            0 (0%)        0 (0%)      0 (0%)           0 (0%)         94s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 83s                kube-proxy       
	  Normal  NodeHasSufficientMemory  94s (x5 over 96s)  kubelet          Node multinode-894400-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    94s (x5 over 96s)  kubelet          Node multinode-894400-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     94s (x5 over 96s)  kubelet          Node multinode-894400-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           92s                node-controller  Node multinode-894400-m02 event: Registered Node multinode-894400-m02 in Controller
	  Normal  NodeReady                76s                kubelet          Node multinode-894400-m02 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Mar18 12:46] systemd-fstab-generator[644]: Ignoring "noauto" option for root device
	[  +0.156030] systemd-fstab-generator[656]: Ignoring "noauto" option for root device
	[ +29.357903] systemd-fstab-generator[944]: Ignoring "noauto" option for root device
	[  +0.127543] kauditd_printk_skb: 59 callbacks suppressed
	[  +0.500671] systemd-fstab-generator[984]: Ignoring "noauto" option for root device
	[  +0.172419] systemd-fstab-generator[996]: Ignoring "noauto" option for root device
	[  +0.218212] systemd-fstab-generator[1010]: Ignoring "noauto" option for root device
	[  +2.730639] systemd-fstab-generator[1180]: Ignoring "noauto" option for root device
	[  +0.182619] systemd-fstab-generator[1192]: Ignoring "noauto" option for root device
	[  +0.167136] systemd-fstab-generator[1204]: Ignoring "noauto" option for root device
	[  +0.229787] systemd-fstab-generator[1219]: Ignoring "noauto" option for root device
	[Mar18 12:47] systemd-fstab-generator[1325]: Ignoring "noauto" option for root device
	[  +0.107658] kauditd_printk_skb: 205 callbacks suppressed
	[  +3.242637] systemd-fstab-generator[1520]: Ignoring "noauto" option for root device
	[  +5.447842] systemd-fstab-generator[1778]: Ignoring "noauto" option for root device
	[  +0.106657] kauditd_printk_skb: 73 callbacks suppressed
	[  +8.322297] systemd-fstab-generator[2740]: Ignoring "noauto" option for root device
	[  +0.128597] kauditd_printk_skb: 62 callbacks suppressed
	[ +13.168708] systemd-fstab-generator[4385]: Ignoring "noauto" option for root device
	[  +0.303403] kauditd_printk_skb: 12 callbacks suppressed
	[  +7.091483] kauditd_printk_skb: 51 callbacks suppressed
	[  +5.190395] kauditd_printk_skb: 9 callbacks suppressed
	[Mar18 12:48] hrtimer: interrupt took 1214143 ns
	[Mar18 12:51] kauditd_printk_skb: 4 callbacks suppressed
	
	
	==> etcd [c51f768a2f64] <==
	{"level":"info","ts":"2024-03-18T12:47:22.625195Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-18T12:47:22.625316Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-18T12:47:22.625507Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-18T12:47:22.625797Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-03-18T12:47:22.625999Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-03-18T12:47:22.626174Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-18T12:47:22.63425Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.30.129.141:2379"}
	{"level":"info","ts":"2024-03-18T12:47:22.636017Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"warn","ts":"2024-03-18T12:50:27.100449Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"172.350587ms","expected-duration":"100ms","prefix":"","request":"header:<ID:15211627172527367282 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/172.30.129.141\" mod_revision:557 > success:<request_put:<key:\"/registry/masterleases/172.30.129.141\" value_size:67 lease:5988255135672591472 >> failure:<request_range:<key:\"/registry/masterleases/172.30.129.141\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-03-18T12:50:27.100867Z","caller":"traceutil/trace.go:171","msg":"trace[887807528] transaction","detail":"{read_only:false; response_revision:565; number_of_response:1; }","duration":"353.610856ms","start":"2024-03-18T12:50:26.747239Z","end":"2024-03-18T12:50:27.10085Z","steps":["trace[887807528] 'process raft request'  (duration: 180.105063ms)","trace[887807528] 'compare'  (duration: 172.163068ms)"],"step_count":2}
	{"level":"warn","ts":"2024-03-18T12:50:27.101165Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-18T12:50:26.747223Z","time spent":"353.794173ms","remote":"127.0.0.1:45120","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":120,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/masterleases/172.30.129.141\" mod_revision:557 > success:<request_put:<key:\"/registry/masterleases/172.30.129.141\" value_size:67 lease:5988255135672591472 >> failure:<request_range:<key:\"/registry/masterleases/172.30.129.141\" > >"}
	{"level":"info","ts":"2024-03-18T12:50:38.023604Z","caller":"traceutil/trace.go:171","msg":"trace[1659257320] transaction","detail":"{read_only:false; response_revision:602; number_of_response:1; }","duration":"326.235634ms","start":"2024-03-18T12:50:37.697348Z","end":"2024-03-18T12:50:38.023584Z","steps":["trace[1659257320] 'process raft request'  (duration: 326.013517ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-18T12:50:38.024124Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-18T12:50:37.697333Z","time spent":"326.518956ms","remote":"127.0.0.1:45238","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1101,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:598 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1028 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2024-03-18T12:50:38.275834Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"103.245523ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-894400-m02\" ","response":"range_response_count:1 size:2968"}
	{"level":"info","ts":"2024-03-18T12:50:38.275953Z","caller":"traceutil/trace.go:171","msg":"trace[1364019686] range","detail":"{range_begin:/registry/minions/multinode-894400-m02; range_end:; response_count:1; response_revision:602; }","duration":"103.398135ms","start":"2024-03-18T12:50:38.172541Z","end":"2024-03-18T12:50:38.275939Z","steps":["trace[1364019686] 'range keys from in-memory index tree'  (duration: 102.970001ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-18T12:50:48.394435Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"217.674903ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-894400-m02\" ","response":"range_response_count:1 size:3148"}
	{"level":"info","ts":"2024-03-18T12:50:48.394842Z","caller":"traceutil/trace.go:171","msg":"trace[1206437671] range","detail":"{range_begin:/registry/minions/multinode-894400-m02; range_end:; response_count:1; response_revision:620; }","duration":"218.09143ms","start":"2024-03-18T12:50:48.176733Z","end":"2024-03-18T12:50:48.394825Z","steps":["trace[1206437671] 'range keys from in-memory index tree'  (duration: 217.483591ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-18T12:50:48.394505Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"295.488353ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1116"}
	{"level":"info","ts":"2024-03-18T12:50:48.395378Z","caller":"traceutil/trace.go:171","msg":"trace[1739873596] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:620; }","duration":"296.35951ms","start":"2024-03-18T12:50:48.099004Z","end":"2024-03-18T12:50:48.395363Z","steps":["trace[1739873596] 'range keys from in-memory index tree'  (duration: 295.338642ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-18T12:50:48.594701Z","caller":"traceutil/trace.go:171","msg":"trace[538021382] linearizableReadLoop","detail":"{readStateIndex:677; appliedIndex:676; }","duration":"163.586125ms","start":"2024-03-18T12:50:48.431094Z","end":"2024-03-18T12:50:48.59468Z","steps":["trace[538021382] 'read index received'  (duration: 163.316907ms)","trace[538021382] 'applied index is now lower than readState.Index'  (duration: 268.218µs)"],"step_count":2}
	{"level":"warn","ts":"2024-03-18T12:50:48.595058Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"164.011352ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-03-18T12:50:48.595167Z","caller":"traceutil/trace.go:171","msg":"trace[1079957739] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:621; }","duration":"164.172863ms","start":"2024-03-18T12:50:48.430982Z","end":"2024-03-18T12:50:48.595155Z","steps":["trace[1079957739] 'agreement among raft nodes before linearized reading'  (duration: 163.97985ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-18T12:50:48.595063Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"133.926762ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/podtemplates/\" range_end:\"/registry/podtemplates0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-03-18T12:50:48.595588Z","caller":"traceutil/trace.go:171","msg":"trace[1891390391] range","detail":"{range_begin:/registry/podtemplates/; range_end:/registry/podtemplates0; response_count:0; response_revision:621; }","duration":"134.455897ms","start":"2024-03-18T12:50:48.461123Z","end":"2024-03-18T12:50:48.595579Z","steps":["trace[1891390391] 'agreement among raft nodes before linearized reading'  (duration: 133.910661ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-18T12:50:48.595115Z","caller":"traceutil/trace.go:171","msg":"trace[1584536017] transaction","detail":"{read_only:false; response_revision:621; number_of_response:1; }","duration":"184.808729ms","start":"2024-03-18T12:50:48.4103Z","end":"2024-03-18T12:50:48.595108Z","steps":["trace[1584536017] 'process raft request'  (duration: 184.174287ms)"],"step_count":1}
	
	
	==> kernel <==
	 12:52:08 up 6 min,  0 users,  load average: 0.60, 0.60, 0.31
	Linux multinode-894400 5.10.207 #1 SMP Fri Mar 15 21:13:47 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [c4d7018ad23a] <==
	I0318 12:50:59.630842       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 12:51:09.637456       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 12:51:09.637579       1 main.go:227] handling current node
	I0318 12:51:09.637594       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 12:51:09.637604       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 12:51:19.651223       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 12:51:19.651321       1 main.go:227] handling current node
	I0318 12:51:19.651406       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 12:51:19.651417       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 12:51:29.658038       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 12:51:29.658139       1 main.go:227] handling current node
	I0318 12:51:29.658155       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 12:51:29.658163       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 12:51:39.672599       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 12:51:39.673088       1 main.go:227] handling current node
	I0318 12:51:39.673469       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 12:51:39.673527       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 12:51:49.680026       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 12:51:49.680559       1 main.go:227] handling current node
	I0318 12:51:49.680883       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 12:51:49.681053       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 12:51:59.694196       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 12:51:59.694333       1 main.go:227] handling current node
	I0318 12:51:59.694347       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 12:51:59.694355       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [56d1819beb10] <==
	I0318 12:47:24.475501       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0318 12:47:24.475617       1 cache.go:39] Caches are synced for autoregister controller
	I0318 12:47:24.526478       1 controller.go:624] quota admission added evaluator for: namespaces
	I0318 12:47:24.527708       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0318 12:47:24.527754       1 shared_informer.go:318] Caches are synced for configmaps
	I0318 12:47:24.528411       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I0318 12:47:24.528445       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I0318 12:47:24.529540       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	E0318 12:47:24.589123       1 controller.go:146] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I0318 12:47:24.803528       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0318 12:47:25.330003       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0318 12:47:25.336758       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0318 12:47:25.336792       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0318 12:47:26.412435       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0318 12:47:26.512988       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0318 12:47:26.641360       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0318 12:47:26.654364       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [172.30.129.141]
	I0318 12:47:26.656300       1 controller.go:624] quota admission added evaluator for: endpoints
	I0318 12:47:26.682853       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0318 12:47:27.399010       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0318 12:47:28.286764       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0318 12:47:28.315284       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0318 12:47:28.336276       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0318 12:47:41.560414       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	I0318 12:47:41.867505       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [7aa5cf4ec378] <==
	I0318 12:47:52.958102       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="163.297µs"
	I0318 12:47:52.991751       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="32.399µs"
	I0318 12:47:54.194916       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="59.289µs"
	I0318 12:47:55.238088       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="27.595936ms"
	I0318 12:47:55.238222       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="45.592µs"
	I0318 12:47:56.090728       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I0318 12:50:34.419485       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-894400-m02\" does not exist"
	I0318 12:50:34.437576       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-894400-m02" podCIDRs=["10.244.1.0/24"]
	I0318 12:50:34.454919       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-8bdmn"
	I0318 12:50:34.479103       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-k5lpg"
	I0318 12:50:36.121925       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-894400-m02"
	I0318 12:50:36.122368       1 event.go:307] "Event occurred" object="multinode-894400-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-894400-m02 event: Registered Node multinode-894400-m02 in Controller"
	I0318 12:50:52.539955       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-894400-m02"
	I0318 12:51:17.964827       1 event.go:307] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-5b5d89c9d6 to 2"
	I0318 12:51:17.986964       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5b5d89c9d6-8btgf"
	I0318 12:51:18.004592       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5b5d89c9d6-c2997"
	I0318 12:51:18.026894       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="62.79508ms"
	I0318 12:51:18.045074       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="17.513513ms"
	I0318 12:51:18.046404       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="36.101µs"
	I0318 12:51:18.054157       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="337.914µs"
	I0318 12:51:18.060516       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="26.701µs"
	I0318 12:51:20.804047       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="10.125602ms"
	I0318 12:51:20.804333       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="159.502µs"
	I0318 12:51:21.064706       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="11.788417ms"
	I0318 12:51:21.065229       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="82.401µs"
	
	
	==> kube-proxy [9335855aab63] <==
	I0318 12:47:42.888603       1 server_others.go:69] "Using iptables proxy"
	I0318 12:47:42.909658       1 node.go:141] Successfully retrieved node IP: 172.30.129.141
	I0318 12:47:42.965774       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0318 12:47:42.965824       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0318 12:47:42.983172       1 server_others.go:152] "Using iptables Proxier"
	I0318 12:47:42.983221       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0318 12:47:42.983471       1 server.go:846] "Version info" version="v1.28.4"
	I0318 12:47:42.983484       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 12:47:42.987719       1 config.go:188] "Starting service config controller"
	I0318 12:47:42.987733       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0318 12:47:42.987775       1 config.go:97] "Starting endpoint slice config controller"
	I0318 12:47:42.987781       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0318 12:47:42.988298       1 config.go:315] "Starting node config controller"
	I0318 12:47:42.988306       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0318 12:47:43.088485       1 shared_informer.go:318] Caches are synced for service config
	I0318 12:47:43.088594       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0318 12:47:43.088517       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [e4d42739ce0e] <==
	W0318 12:47:25.374605       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0318 12:47:25.374678       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0318 12:47:25.400777       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0318 12:47:25.400820       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0318 12:47:25.434442       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0318 12:47:25.434526       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0318 12:47:25.456878       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0318 12:47:25.457121       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0318 12:47:25.744652       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0318 12:47:25.744733       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0318 12:47:25.777073       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0318 12:47:25.777145       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0318 12:47:25.850949       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0318 12:47:25.850985       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0318 12:47:25.876908       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0318 12:47:25.877170       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0318 12:47:25.892072       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0318 12:47:25.892099       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0318 12:47:25.988864       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0318 12:47:25.988912       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0318 12:47:26.044749       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0318 12:47:26.044834       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0318 12:47:26.067659       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0318 12:47:26.068250       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0318 12:47:28.178584       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Mar 18 12:47:54 multinode-894400 kubelet[2774]: I0318 12:47:54.165043    2774 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=5.164976075 podCreationTimestamp="2024-03-18 12:47:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-03-18 12:47:54.164163222 +0000 UTC m=+25.928126755" watchObservedRunningTime="2024-03-18 12:47:54.164976075 +0000 UTC m=+25.928939508"
	Mar 18 12:47:55 multinode-894400 kubelet[2774]: I0318 12:47:55.211417    2774 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-456tm" podStartSLOduration=13.211356801 podCreationTimestamp="2024-03-18 12:47:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-03-18 12:47:54.192677152 +0000 UTC m=+25.956640585" watchObservedRunningTime="2024-03-18 12:47:55.211356801 +0000 UTC m=+26.975320234"
	Mar 18 12:48:28 multinode-894400 kubelet[2774]: E0318 12:48:28.599028    2774 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 18 12:48:28 multinode-894400 kubelet[2774]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 18 12:48:28 multinode-894400 kubelet[2774]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 18 12:48:28 multinode-894400 kubelet[2774]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 18 12:48:28 multinode-894400 kubelet[2774]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 18 12:49:28 multinode-894400 kubelet[2774]: E0318 12:49:28.597178    2774 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 18 12:49:28 multinode-894400 kubelet[2774]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 18 12:49:28 multinode-894400 kubelet[2774]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 18 12:49:28 multinode-894400 kubelet[2774]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 18 12:49:28 multinode-894400 kubelet[2774]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 18 12:50:28 multinode-894400 kubelet[2774]: E0318 12:50:28.598065    2774 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 18 12:50:28 multinode-894400 kubelet[2774]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 18 12:50:28 multinode-894400 kubelet[2774]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 18 12:50:28 multinode-894400 kubelet[2774]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 18 12:50:28 multinode-894400 kubelet[2774]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 18 12:51:18 multinode-894400 kubelet[2774]: I0318 12:51:18.026418    2774 topology_manager.go:215] "Topology Admit Handler" podUID="171cbfa4-4415-4169-b25d-ff5905fd513f" podNamespace="default" podName="busybox-5b5d89c9d6-c2997"
	Mar 18 12:51:18 multinode-894400 kubelet[2774]: I0318 12:51:18.085575    2774 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jwqqg\" (UniqueName: \"kubernetes.io/projected/171cbfa4-4415-4169-b25d-ff5905fd513f-kube-api-access-jwqqg\") pod \"busybox-5b5d89c9d6-c2997\" (UID: \"171cbfa4-4415-4169-b25d-ff5905fd513f\") " pod="default/busybox-5b5d89c9d6-c2997"
	Mar 18 12:51:21 multinode-894400 kubelet[2774]: I0318 12:51:21.051972    2774 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/busybox-5b5d89c9d6-c2997" podStartSLOduration=2.829846045 podCreationTimestamp="2024-03-18 12:51:17 +0000 UTC" firstStartedPulling="2024-03-18 12:51:18.840426421 +0000 UTC m=+230.604389854" lastFinishedPulling="2024-03-18 12:51:20.062507583 +0000 UTC m=+231.826471116" observedRunningTime="2024-03-18 12:51:21.050839796 +0000 UTC m=+232.814803329" watchObservedRunningTime="2024-03-18 12:51:21.051927307 +0000 UTC m=+232.815890740"
	Mar 18 12:51:28 multinode-894400 kubelet[2774]: E0318 12:51:28.598570    2774 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 18 12:51:28 multinode-894400 kubelet[2774]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 18 12:51:28 multinode-894400 kubelet[2774]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 18 12:51:28 multinode-894400 kubelet[2774]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 18 12:51:28 multinode-894400 kubelet[2774]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0318 12:52:00.543772    9336 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-894400 -n multinode-894400
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-894400 -n multinode-894400: (11.7048229s)
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-894400 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (55.71s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (550.64s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-894400
multinode_test.go:321: (dbg) Run:  out/minikube-windows-amd64.exe stop -p multinode-894400
E0318 13:06:13.066550   13424 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-209500\client.crt: The system cannot find the path specified.
multinode_test.go:321: (dbg) Done: out/minikube-windows-amd64.exe stop -p multinode-894400: (1m34.2701139s)
multinode_test.go:326: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-894400 --wait=true -v=8 --alsologtostderr
E0318 13:08:45.329869   13424 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-611000\client.crt: The system cannot find the path specified.
E0318 13:10:42.106798   13424 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-611000\client.crt: The system cannot find the path specified.
E0318 13:11:13.081702   13424 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-209500\client.crt: The system cannot find the path specified.
E0318 13:12:36.298008   13424 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-209500\client.crt: The system cannot find the path specified.
multinode_test.go:326: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p multinode-894400 --wait=true -v=8 --alsologtostderr: exit status 1 (6m46.223967s)

                                                
                                                
-- stdout --
	* [multinode-894400] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4046 Build 19045.4046
	  - KUBECONFIG=C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube3\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=18429
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the hyperv driver based on existing profile
	* Starting "multinode-894400" primary control-plane node in "multinode-894400" cluster
	* Restarting existing hyperv VM for "multinode-894400" ...
	* Preparing Kubernetes v1.28.4 on Docker 25.0.4 ...
	* Configuring CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	* Enabled addons: 
	
	* Starting "multinode-894400-m02" worker node in "multinode-894400" cluster
	* Restarting existing hyperv VM for "multinode-894400-m02" ...
	* Found network options:
	  - NO_PROXY=172.30.130.156
	  - NO_PROXY=172.30.130.156
	* Preparing Kubernetes v1.28.4 on Docker 25.0.4 ...
	  - env NO_PROXY=172.30.130.156
	* Verifying Kubernetes components...
	
	* Starting "multinode-894400-m03" worker node in "multinode-894400" cluster
	* Restarting existing hyperv VM for "multinode-894400-m03" ...

                                                
                                                
-- /stdout --
** stderr ** 
	W0318 13:07:44.981061    2404 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0318 13:07:45.061560    2404 out.go:291] Setting OutFile to fd 884 ...
	I0318 13:07:45.062552    2404 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:07:45.062552    2404 out.go:304] Setting ErrFile to fd 980...
	I0318 13:07:45.062552    2404 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:07:45.086104    2404 out.go:298] Setting JSON to false
	I0318 13:07:45.089099    2404 start.go:129] hostinfo: {"hostname":"minikube3","uptime":315842,"bootTime":1710451423,"procs":194,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4046 Build 19045.4046","kernelVersion":"10.0.19045.4046 Build 19045.4046","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"a0f355d5-8b6e-4346-9071-73232725d096"}
	W0318 13:07:45.090082    2404 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0318 13:07:45.155572    2404 out.go:177] * [multinode-894400] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4046 Build 19045.4046
	I0318 13:07:45.358080    2404 notify.go:220] Checking for updates...
	I0318 13:07:45.406333    2404 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	I0318 13:07:45.602067    2404 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 13:07:45.751151    2404 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube3\minikube-integration\.minikube
	I0318 13:07:45.795260    2404 out.go:177]   - MINIKUBE_LOCATION=18429
	I0318 13:07:45.966387    2404 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 13:07:45.995191    2404 config.go:182] Loaded profile config "multinode-894400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 13:07:45.995464    2404 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 13:07:51.334489    2404 out.go:177] * Using the hyperv driver based on existing profile
	I0318 13:07:51.354391    2404 start.go:297] selected driver: hyperv
	I0318 13:07:51.354391    2404 start.go:901] validating driver "hyperv" against &{Name:multinode-894400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-894400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.30.129.141 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.30.140.66 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.30.137.140 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 13:07:51.355451    2404 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 13:07:51.407703    2404 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 13:07:51.407864    2404 cni.go:84] Creating CNI manager for ""
	I0318 13:07:51.408004    2404 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0318 13:07:51.408046    2404 start.go:340] cluster config:
	{Name:multinode-894400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-894400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.30.129.141 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.30.140.66 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.30.137.140 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuF
irmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 13:07:51.408046    2404 iso.go:125] acquiring lock: {Name:mkefa7a887b9de67c22e0418da52288a5b488924 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 13:07:51.564477    2404 out.go:177] * Starting "multinode-894400" primary control-plane node in "multinode-894400" cluster
	I0318 13:07:51.693483    2404 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0318 13:07:51.694577    2404 preload.go:147] Found local preload: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I0318 13:07:51.694727    2404 cache.go:56] Caching tarball of preloaded images
	I0318 13:07:51.695169    2404 preload.go:173] Found C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0318 13:07:51.695416    2404 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0318 13:07:51.695736    2404 profile.go:142] Saving config to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-894400\config.json ...
	I0318 13:07:51.698526    2404 start.go:360] acquireMachinesLock for multinode-894400: {Name:mk88ace50ad3bf72786f3a589a5328076247f3a1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 13:07:51.698526    2404 start.go:364] duration metric: took 0s to acquireMachinesLock for "multinode-894400"
	I0318 13:07:51.699058    2404 start.go:96] Skipping create...Using existing machine configuration
	I0318 13:07:51.699374    2404 fix.go:54] fixHost starting: 
	I0318 13:07:51.699539    2404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-894400 ).state
	I0318 13:07:54.199225    2404 main.go:141] libmachine: [stdout =====>] : Off
	
	I0318 13:07:54.200091    2404 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:07:54.200153    2404 fix.go:112] recreateIfNeeded on multinode-894400: state=Stopped err=<nil>
	W0318 13:07:54.200153    2404 fix.go:138] unexpected machine state, will restart: <nil>
	I0318 13:07:54.538991    2404 out.go:177] * Restarting existing hyperv VM for "multinode-894400" ...
	I0318 13:07:54.545864    2404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-894400
	I0318 13:07:57.546532    2404 main.go:141] libmachine: [stdout =====>] : 
	I0318 13:07:57.546957    2404 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:07:57.546957    2404 main.go:141] libmachine: Waiting for host to start...
	I0318 13:07:57.547040    2404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-894400 ).state
	I0318 13:07:59.610987    2404 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 13:07:59.611165    2404 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:07:59.611165    2404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-894400 ).networkadapters[0]).ipaddresses[0]
	I0318 13:08:01.954297    2404 main.go:141] libmachine: [stdout =====>] : 
	I0318 13:08:01.954781    2404 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:08:02.968268    2404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-894400 ).state
	I0318 13:08:05.123037    2404 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 13:08:05.123684    2404 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:08:05.123751    2404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-894400 ).networkadapters[0]).ipaddresses[0]
	I0318 13:08:07.471155    2404 main.go:141] libmachine: [stdout =====>] : 
	I0318 13:08:07.471340    2404 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:08:08.486928    2404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-894400 ).state
	I0318 13:08:10.578478    2404 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 13:08:10.578755    2404 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:08:10.578755    2404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-894400 ).networkadapters[0]).ipaddresses[0]
	I0318 13:08:12.960616    2404 main.go:141] libmachine: [stdout =====>] : 
	I0318 13:08:12.961816    2404 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:08:13.964258    2404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-894400 ).state
	I0318 13:08:16.051492    2404 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 13:08:16.051492    2404 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:08:16.051703    2404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-894400 ).networkadapters[0]).ipaddresses[0]
	I0318 13:08:18.403955    2404 main.go:141] libmachine: [stdout =====>] : 
	I0318 13:08:18.403955    2404 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:08:19.418394    2404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-894400 ).state
	I0318 13:08:21.591796    2404 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 13:08:21.591796    2404 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:08:21.591796    2404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-894400 ).networkadapters[0]).ipaddresses[0]
	I0318 13:08:23.944375    2404 main.go:141] libmachine: [stdout =====>] : 172.30.130.156
	
	I0318 13:08:23.945033    2404 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:08:23.947675    2404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-894400 ).state
	I0318 13:08:25.950616    2404 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 13:08:25.950616    2404 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:08:25.950616    2404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-894400 ).networkadapters[0]).ipaddresses[0]
	I0318 13:08:28.288833    2404 main.go:141] libmachine: [stdout =====>] : 172.30.130.156
	
	I0318 13:08:28.289348    2404 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:08:28.289642    2404 profile.go:142] Saving config to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-894400\config.json ...
	I0318 13:08:28.292179    2404 machine.go:94] provisionDockerMachine start ...
	I0318 13:08:28.292317    2404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-894400 ).state
	I0318 13:08:30.253975    2404 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 13:08:30.253975    2404 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:08:30.253975    2404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-894400 ).networkadapters[0]).ipaddresses[0]
	I0318 13:08:32.591711    2404 main.go:141] libmachine: [stdout =====>] : 172.30.130.156
	
	I0318 13:08:32.592750    2404 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:08:32.597818    2404 main.go:141] libmachine: Using SSH client type: native
	I0318 13:08:32.598554    2404 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa89f80] 0xa8cb60 <nil>  [] 0s} 172.30.130.156 22 <nil> <nil>}
	I0318 13:08:32.598554    2404 main.go:141] libmachine: About to run SSH command:
	hostname
	I0318 13:08:32.728683    2404 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0318 13:08:32.728683    2404 buildroot.go:166] provisioning hostname "multinode-894400"
	I0318 13:08:32.728683    2404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-894400 ).state
	I0318 13:08:34.671039    2404 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 13:08:34.671039    2404 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:08:34.671039    2404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-894400 ).networkadapters[0]).ipaddresses[0]
	I0318 13:08:36.995853    2404 main.go:141] libmachine: [stdout =====>] : 172.30.130.156
	
	I0318 13:08:36.995936    2404 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:08:37.000875    2404 main.go:141] libmachine: Using SSH client type: native
	I0318 13:08:37.001637    2404 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa89f80] 0xa8cb60 <nil>  [] 0s} 172.30.130.156 22 <nil> <nil>}
	I0318 13:08:37.001637    2404 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-894400 && echo "multinode-894400" | sudo tee /etc/hostname
	I0318 13:08:37.163030    2404 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-894400
	
	I0318 13:08:37.163121    2404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-894400 ).state
	I0318 13:08:39.133266    2404 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 13:08:39.133266    2404 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:08:39.133266    2404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-894400 ).networkadapters[0]).ipaddresses[0]
	I0318 13:08:41.468784    2404 main.go:141] libmachine: [stdout =====>] : 172.30.130.156
	
	I0318 13:08:41.468784    2404 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:08:41.473865    2404 main.go:141] libmachine: Using SSH client type: native
	I0318 13:08:41.473990    2404 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa89f80] 0xa8cb60 <nil>  [] 0s} 172.30.130.156 22 <nil> <nil>}
	I0318 13:08:41.473990    2404 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-894400' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-894400/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-894400' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0318 13:08:41.622362    2404 main.go:141] libmachine: SSH cmd err, output: <nil>: 
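The /etc/hosts command above is an idempotent-update idiom: rewrite an existing `127.0.1.1` entry if one is present, append otherwise, and do nothing if the hostname is already mapped. A sketch of the same logic, exercised against a temp file instead of the real /etc/hosts (paths and the seed entry are illustrative, not from the log):

```shell
hosts=$(mktemp)
printf '127.0.1.1 oldname\n' > "$hosts"
name=multinode-894400
# Skip entirely if the name is already mapped (the idempotence check).
if ! grep -q "\s$name$" "$hosts"; then
  if grep -q '^127\.0\.1\.1\s' "$hosts"; then
    # An existing 127.0.1.1 entry gets rewritten in place.
    sed -i "s/^127\.0\.1\.1\s.*/127.0.1.1 $name/" "$hosts"
  else
    # Otherwise a fresh entry is appended.
    echo "127.0.1.1 $name" >> "$hosts"
  fi
fi
cat "$hosts"   # 127.0.1.1 multinode-894400
```

Running the block a second time changes nothing, which is why the provisioner can safely re-run it on every start.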
	I0318 13:08:41.622412    2404 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube3\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube3\minikube-integration\.minikube}
	I0318 13:08:41.622412    2404 buildroot.go:174] setting up certificates
	I0318 13:08:41.622494    2404 provision.go:84] configureAuth start
	I0318 13:08:41.622549    2404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-894400 ).state
	I0318 13:08:43.566483    2404 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 13:08:43.566483    2404 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:08:43.567401    2404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-894400 ).networkadapters[0]).ipaddresses[0]
	I0318 13:08:45.896227    2404 main.go:141] libmachine: [stdout =====>] : 172.30.130.156
	
	I0318 13:08:45.896227    2404 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:08:45.896425    2404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-894400 ).state
	I0318 13:08:47.849564    2404 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 13:08:47.849564    2404 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:08:47.850470    2404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-894400 ).networkadapters[0]).ipaddresses[0]
	I0318 13:08:50.150060    2404 main.go:141] libmachine: [stdout =====>] : 172.30.130.156
	
	I0318 13:08:50.150060    2404 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:08:50.150182    2404 provision.go:143] copyHostCerts
	I0318 13:08:50.150363    2404 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube3\minikube-integration\.minikube/ca.pem
	I0318 13:08:50.150718    2404 exec_runner.go:144] found C:\Users\jenkins.minikube3\minikube-integration\.minikube/ca.pem, removing ...
	I0318 13:08:50.150859    2404 exec_runner.go:203] rm: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.pem
	I0318 13:08:50.151181    2404 exec_runner.go:151] cp: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube3\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0318 13:08:50.152316    2404 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube3\minikube-integration\.minikube/cert.pem
	I0318 13:08:50.152592    2404 exec_runner.go:144] found C:\Users\jenkins.minikube3\minikube-integration\.minikube/cert.pem, removing ...
	I0318 13:08:50.152680    2404 exec_runner.go:203] rm: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cert.pem
	I0318 13:08:50.153033    2404 exec_runner.go:151] cp: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube3\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0318 13:08:50.154063    2404 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube3\minikube-integration\.minikube/key.pem
	I0318 13:08:50.154568    2404 exec_runner.go:144] found C:\Users\jenkins.minikube3\minikube-integration\.minikube/key.pem, removing ...
	I0318 13:08:50.154658    2404 exec_runner.go:203] rm: C:\Users\jenkins.minikube3\minikube-integration\.minikube\key.pem
	I0318 13:08:50.155120    2404 exec_runner.go:151] cp: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube3\minikube-integration\.minikube/key.pem (1679 bytes)
	I0318 13:08:50.156099    2404 provision.go:117] generating server cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-894400 san=[127.0.0.1 172.30.130.156 localhost minikube multinode-894400]
	I0318 13:08:50.556381    2404 provision.go:177] copyRemoteCerts
	I0318 13:08:50.568350    2404 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0318 13:08:50.568448    2404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-894400 ).state
	I0318 13:08:52.521135    2404 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 13:08:52.521994    2404 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:08:52.521994    2404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-894400 ).networkadapters[0]).ipaddresses[0]
	I0318 13:08:54.835487    2404 main.go:141] libmachine: [stdout =====>] : 172.30.130.156
	
	I0318 13:08:54.835487    2404 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:08:54.836091    2404 sshutil.go:53] new ssh client: &{IP:172.30.130.156 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\multinode-894400\id_rsa Username:docker}
	I0318 13:08:54.947574    2404 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.3790934s)
	I0318 13:08:54.947574    2404 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0318 13:08:54.947574    2404 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0318 13:08:54.992137    2404 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0318 13:08:54.992137    2404 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1216 bytes)
	I0318 13:08:55.034093    2404 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0318 13:08:55.034588    2404 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0318 13:08:55.080638    2404 provision.go:87] duration metric: took 13.4580448s to configureAuth
	I0318 13:08:55.080638    2404 buildroot.go:189] setting minikube options for container-runtime
	I0318 13:08:55.081315    2404 config.go:182] Loaded profile config "multinode-894400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 13:08:55.081315    2404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-894400 ).state
	I0318 13:08:57.052754    2404 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 13:08:57.052754    2404 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:08:57.053568    2404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-894400 ).networkadapters[0]).ipaddresses[0]
	I0318 13:08:59.418518    2404 main.go:141] libmachine: [stdout =====>] : 172.30.130.156
	
	I0318 13:08:59.418518    2404 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:08:59.425154    2404 main.go:141] libmachine: Using SSH client type: native
	I0318 13:08:59.425728    2404 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa89f80] 0xa8cb60 <nil>  [] 0s} 172.30.130.156 22 <nil> <nil>}
	I0318 13:08:59.425728    2404 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0318 13:08:59.564178    2404 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0318 13:08:59.564270    2404 buildroot.go:70] root file system type: tmpfs
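The one-liner above is how the provisioner detects the guest's root filesystem type (tmpfs here, since Buildroot runs from RAM); the same probe works on any GNU/Linux host, though the reported type will differ outside the VM:

```shell
# Print only the filesystem-type column for /, then keep the data row.
fstype=$(df --output=fstype / | tail -n 1)
echo "root fs: $fstype"
```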
	I0318 13:08:59.564583    2404 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0318 13:08:59.564677    2404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-894400 ).state
	I0318 13:09:01.515886    2404 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 13:09:01.516747    2404 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:09:01.516747    2404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-894400 ).networkadapters[0]).ipaddresses[0]
	I0318 13:09:03.892924    2404 main.go:141] libmachine: [stdout =====>] : 172.30.130.156
	
	I0318 13:09:03.892924    2404 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:09:03.899565    2404 main.go:141] libmachine: Using SSH client type: native
	I0318 13:09:03.899785    2404 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa89f80] 0xa8cb60 <nil>  [] 0s} 172.30.130.156 22 <nil> <nil>}
	I0318 13:09:03.899785    2404 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0318 13:09:04.044182    2404 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
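The bare `ExecStart=` line in the unit written above is the standard systemd idiom for replacing an inherited start command: without it, a second `ExecStart=` would be rejected for a `Type=notify` service ("more than one ExecStart= setting"). A minimal drop-in using the same idiom looks like this (the path and flag are illustrative):

```
# /etc/systemd/system/docker.service.d/override.conf  (hypothetical drop-in)
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd --some-overridden-flag
```

The first directive clears the inherited command; the second supplies the replacement.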
	I0318 13:09:04.044281    2404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-894400 ).state
	I0318 13:09:06.009362    2404 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 13:09:06.009598    2404 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:09:06.009598    2404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-894400 ).networkadapters[0]).ipaddresses[0]
	I0318 13:09:08.373329    2404 main.go:141] libmachine: [stdout =====>] : 172.30.130.156
	
	I0318 13:09:08.373329    2404 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:09:08.380939    2404 main.go:141] libmachine: Using SSH client type: native
	I0318 13:09:08.380939    2404 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa89f80] 0xa8cb60 <nil>  [] 0s} 172.30.130.156 22 <nil> <nil>}
	I0318 13:09:08.380939    2404 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0318 13:09:10.702931    2404 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0318 13:09:10.703031    2404 machine.go:97] duration metric: took 42.4103689s to provisionDockerMachine
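The `diff … || { mv …; restart; }` command a few lines up is an update-if-changed pattern: `diff` exits non-zero when the files differ (or, as in this run, when the old file does not exist yet), so the replacement and service restart run only in that case. The same control flow, exercised on temp files with the systemctl calls omitted:

```shell
cur=$(mktemp); new=$(mktemp)
printf 'old unit\n' > "$cur"
printf 'new unit\n' > "$new"
# diff exits 1 on differing content, so the swap only happens on change;
# the real flow would also daemon-reload and restart the service here.
diff -u "$cur" "$new" >/dev/null || mv "$new" "$cur"
cat "$cur"   # new unit
```

When the files are identical, `diff` exits 0 and the `||` branch is skipped, avoiding a needless daemon restart.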
	I0318 13:09:10.703031    2404 start.go:293] postStartSetup for "multinode-894400" (driver="hyperv")
	I0318 13:09:10.703031    2404 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0318 13:09:10.714806    2404 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0318 13:09:10.714806    2404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-894400 ).state
	I0318 13:09:12.690757    2404 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 13:09:12.690757    2404 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:09:12.690757    2404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-894400 ).networkadapters[0]).ipaddresses[0]
	I0318 13:09:15.002562    2404 main.go:141] libmachine: [stdout =====>] : 172.30.130.156
	
	I0318 13:09:15.002562    2404 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:09:15.003535    2404 sshutil.go:53] new ssh client: &{IP:172.30.130.156 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\multinode-894400\id_rsa Username:docker}
	I0318 13:09:15.104738    2404 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.3898466s)
	I0318 13:09:15.116973    2404 ssh_runner.go:195] Run: cat /etc/os-release
	I0318 13:09:15.123850    2404 command_runner.go:130] > NAME=Buildroot
	I0318 13:09:15.123850    2404 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0318 13:09:15.123850    2404 command_runner.go:130] > ID=buildroot
	I0318 13:09:15.123850    2404 command_runner.go:130] > VERSION_ID=2023.02.9
	I0318 13:09:15.123850    2404 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0318 13:09:15.124023    2404 info.go:137] Remote host: Buildroot 2023.02.9
	I0318 13:09:15.124023    2404 filesync.go:126] Scanning C:\Users\jenkins.minikube3\minikube-integration\.minikube\addons for local assets ...
	I0318 13:09:15.124157    2404 filesync.go:126] Scanning C:\Users\jenkins.minikube3\minikube-integration\.minikube\files for local assets ...
	I0318 13:09:15.125445    2404 filesync.go:149] local asset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\134242.pem -> 134242.pem in /etc/ssl/certs
	I0318 13:09:15.125557    2404 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\134242.pem -> /etc/ssl/certs/134242.pem
	I0318 13:09:15.136870    2404 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0318 13:09:15.156887    2404 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\134242.pem --> /etc/ssl/certs/134242.pem (1708 bytes)
	I0318 13:09:15.198196    2404 start.go:296] duration metric: took 4.4951325s for postStartSetup
	I0318 13:09:15.198319    2404 fix.go:56] duration metric: took 1m23.4985048s for fixHost
	I0318 13:09:15.198425    2404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-894400 ).state
	I0318 13:09:17.145092    2404 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 13:09:17.145950    2404 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:09:17.146060    2404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-894400 ).networkadapters[0]).ipaddresses[0]
	I0318 13:09:19.522583    2404 main.go:141] libmachine: [stdout =====>] : 172.30.130.156
	
	I0318 13:09:19.522637    2404 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:09:19.527330    2404 main.go:141] libmachine: Using SSH client type: native
	I0318 13:09:19.527723    2404 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa89f80] 0xa8cb60 <nil>  [] 0s} 172.30.130.156 22 <nil> <nil>}
	I0318 13:09:19.527723    2404 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0318 13:09:19.660726    2404 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710767359.663042224
	
	I0318 13:09:19.660726    2404 fix.go:216] guest clock: 1710767359.663042224
	I0318 13:09:19.660726    2404 fix.go:229] Guest: 2024-03-18 13:09:19.663042224 +0000 UTC Remote: 2024-03-18 13:09:15.1983195 +0000 UTC m=+90.304486701 (delta=4.464722724s)
	I0318 13:09:19.661279    2404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-894400 ).state
	I0318 13:09:21.723749    2404 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 13:09:21.723749    2404 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:09:21.723823    2404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-894400 ).networkadapters[0]).ipaddresses[0]
	I0318 13:09:24.103065    2404 main.go:141] libmachine: [stdout =====>] : 172.30.130.156
	
	I0318 13:09:24.103815    2404 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:09:24.109309    2404 main.go:141] libmachine: Using SSH client type: native
	I0318 13:09:24.110078    2404 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa89f80] 0xa8cb60 <nil>  [] 0s} 172.30.130.156 22 <nil> <nil>}
	I0318 13:09:24.110078    2404 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1710767359
	I0318 13:09:24.254946    2404 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Mar 18 13:09:19 UTC 2024
	
	I0318 13:09:24.255017    2404 fix.go:236] clock set: Mon Mar 18 13:09:19 UTC 2024
	 (err=<nil>)
	I0318 13:09:24.255017    2404 start.go:83] releasing machines lock for "multinode-894400", held for 1m32.5558023s
	I0318 13:09:24.255302    2404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-894400 ).state
	I0318 13:09:26.237027    2404 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 13:09:26.237027    2404 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:09:26.237572    2404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-894400 ).networkadapters[0]).ipaddresses[0]
	I0318 13:09:28.679757    2404 main.go:141] libmachine: [stdout =====>] : 172.30.130.156
	
	I0318 13:09:28.680429    2404 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:09:28.684468    2404 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0318 13:09:28.684546    2404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-894400 ).state
	I0318 13:09:28.694706    2404 ssh_runner.go:195] Run: cat /version.json
	I0318 13:09:28.694706    2404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-894400 ).state
	I0318 13:09:30.747990    2404 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 13:09:30.748389    2404 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:09:30.748511    2404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-894400 ).networkadapters[0]).ipaddresses[0]
	I0318 13:09:30.768951    2404 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 13:09:30.768951    2404 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:09:30.769952    2404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-894400 ).networkadapters[0]).ipaddresses[0]
	I0318 13:09:33.296966    2404 main.go:141] libmachine: [stdout =====>] : 172.30.130.156
	
	I0318 13:09:33.297555    2404 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:09:33.297555    2404 sshutil.go:53] new ssh client: &{IP:172.30.130.156 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\multinode-894400\id_rsa Username:docker}
	I0318 13:09:33.319431    2404 main.go:141] libmachine: [stdout =====>] : 172.30.130.156
	
	I0318 13:09:33.319431    2404 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:09:33.319431    2404 sshutil.go:53] new ssh client: &{IP:172.30.130.156 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\multinode-894400\id_rsa Username:docker}
	I0318 13:09:33.485899    2404 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0318 13:09:33.486822    2404 command_runner.go:130] > {"iso_version": "v1.32.1-1710520390-17991", "kicbase_version": "v0.0.42-1710284843-18375", "minikube_version": "v1.32.0", "commit": "3dd306d082737a9ddf335108b42c9fcb2ad84298"}
	I0318 13:09:33.486822    2404 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.8023182s)
	I0318 13:09:33.486822    2404 ssh_runner.go:235] Completed: cat /version.json: (4.79208s)
	I0318 13:09:33.499053    2404 ssh_runner.go:195] Run: systemctl --version
	I0318 13:09:33.508368    2404 command_runner.go:130] > systemd 252 (252)
	I0318 13:09:33.508368    2404 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0318 13:09:33.521461    2404 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0318 13:09:33.529752    2404 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0318 13:09:33.530515    2404 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0318 13:09:33.541707    2404 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0318 13:09:33.569039    2404 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0318 13:09:33.569193    2404 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0318 13:09:33.569193    2404 start.go:494] detecting cgroup driver to use...
	I0318 13:09:33.569294    2404 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0318 13:09:33.600253    2404 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0318 13:09:33.612155    2404 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0318 13:09:33.644847    2404 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0318 13:09:33.664004    2404 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0318 13:09:33.675047    2404 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0318 13:09:33.704911    2404 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0318 13:09:33.735406    2404 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0318 13:09:33.765953    2404 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0318 13:09:33.797558    2404 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0318 13:09:33.831920    2404 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0318 13:09:33.862692    2404 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0318 13:09:33.878864    2404 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0318 13:09:33.890856    2404 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0318 13:09:33.918379    2404 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 13:09:34.088408    2404 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0318 13:09:34.118765    2404 start.go:494] detecting cgroup driver to use...
	I0318 13:09:34.129249    2404 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0318 13:09:34.150660    2404 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0318 13:09:34.150660    2404 command_runner.go:130] > [Unit]
	I0318 13:09:34.150748    2404 command_runner.go:130] > Description=Docker Application Container Engine
	I0318 13:09:34.150748    2404 command_runner.go:130] > Documentation=https://docs.docker.com
	I0318 13:09:34.150748    2404 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0318 13:09:34.150748    2404 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0318 13:09:34.150843    2404 command_runner.go:130] > StartLimitBurst=3
	I0318 13:09:34.150861    2404 command_runner.go:130] > StartLimitIntervalSec=60
	I0318 13:09:34.150861    2404 command_runner.go:130] > [Service]
	I0318 13:09:34.150861    2404 command_runner.go:130] > Type=notify
	I0318 13:09:34.150861    2404 command_runner.go:130] > Restart=on-failure
	I0318 13:09:34.150861    2404 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0318 13:09:34.150861    2404 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0318 13:09:34.150949    2404 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0318 13:09:34.150949    2404 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0318 13:09:34.151064    2404 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0318 13:09:34.151078    2404 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0318 13:09:34.151078    2404 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0318 13:09:34.151078    2404 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0318 13:09:34.151078    2404 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0318 13:09:34.151078    2404 command_runner.go:130] > ExecStart=
	I0318 13:09:34.151078    2404 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0318 13:09:34.151078    2404 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0318 13:09:34.151078    2404 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0318 13:09:34.151078    2404 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0318 13:09:34.151078    2404 command_runner.go:130] > LimitNOFILE=infinity
	I0318 13:09:34.151078    2404 command_runner.go:130] > LimitNPROC=infinity
	I0318 13:09:34.151078    2404 command_runner.go:130] > LimitCORE=infinity
	I0318 13:09:34.151078    2404 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0318 13:09:34.151078    2404 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0318 13:09:34.151078    2404 command_runner.go:130] > TasksMax=infinity
	I0318 13:09:34.151078    2404 command_runner.go:130] > TimeoutStartSec=0
	I0318 13:09:34.151078    2404 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0318 13:09:34.151078    2404 command_runner.go:130] > Delegate=yes
	I0318 13:09:34.151078    2404 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0318 13:09:34.151078    2404 command_runner.go:130] > KillMode=process
	I0318 13:09:34.151078    2404 command_runner.go:130] > [Install]
	I0318 13:09:34.151078    2404 command_runner.go:130] > WantedBy=multi-user.target
	I0318 13:09:34.163404    2404 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0318 13:09:34.192329    2404 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0318 13:09:34.225544    2404 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0318 13:09:34.257628    2404 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0318 13:09:34.292336    2404 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0318 13:09:34.351535    2404 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0318 13:09:34.373313    2404 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0318 13:09:34.402869    2404 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0318 13:09:34.415121    2404 ssh_runner.go:195] Run: which cri-dockerd
	I0318 13:09:34.420935    2404 command_runner.go:130] > /usr/bin/cri-dockerd
	I0318 13:09:34.434222    2404 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0318 13:09:34.450633    2404 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0318 13:09:34.492842    2404 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0318 13:09:34.680182    2404 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0318 13:09:34.853219    2404 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0318 13:09:34.853219    2404 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0318 13:09:34.899827    2404 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 13:09:35.095362    2404 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0318 13:09:37.686146    2404 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5907642s)
	I0318 13:09:37.698930    2404 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0318 13:09:37.731408    2404 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0318 13:09:37.766642    2404 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0318 13:09:37.952394    2404 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0318 13:09:38.130282    2404 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 13:09:38.317159    2404 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0318 13:09:38.357940    2404 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0318 13:09:38.390672    2404 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 13:09:38.584237    2404 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0318 13:09:38.680542    2404 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0318 13:09:38.693517    2404 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0318 13:09:38.705824    2404 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0318 13:09:38.705824    2404 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0318 13:09:38.705824    2404 command_runner.go:130] > Device: 0,22	Inode: 859         Links: 1
	I0318 13:09:38.705824    2404 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0318 13:09:38.706363    2404 command_runner.go:130] > Access: 2024-03-18 13:09:38.608116193 +0000
	I0318 13:09:38.706363    2404 command_runner.go:130] > Modify: 2024-03-18 13:09:38.608116193 +0000
	I0318 13:09:38.706363    2404 command_runner.go:130] > Change: 2024-03-18 13:09:38.610116200 +0000
	I0318 13:09:38.706427    2404 command_runner.go:130] >  Birth: -
	I0318 13:09:38.706427    2404 start.go:562] Will wait 60s for crictl version
	I0318 13:09:38.719304    2404 ssh_runner.go:195] Run: which crictl
	I0318 13:09:38.724279    2404 command_runner.go:130] > /usr/bin/crictl
	I0318 13:09:38.736042    2404 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0318 13:09:38.796088    2404 command_runner.go:130] > Version:  0.1.0
	I0318 13:09:38.796088    2404 command_runner.go:130] > RuntimeName:  docker
	I0318 13:09:38.796088    2404 command_runner.go:130] > RuntimeVersion:  25.0.4
	I0318 13:09:38.796088    2404 command_runner.go:130] > RuntimeApiVersion:  v1
	I0318 13:09:38.798457    2404 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  25.0.4
	RuntimeApiVersion:  v1
	I0318 13:09:38.807618    2404 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0318 13:09:38.837315    2404 command_runner.go:130] > 25.0.4
	I0318 13:09:38.846750    2404 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0318 13:09:38.876873    2404 command_runner.go:130] > 25.0.4
	I0318 13:09:38.881279    2404 out.go:204] * Preparing Kubernetes v1.28.4 on Docker 25.0.4 ...
	I0318 13:09:38.881468    2404 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0318 13:09:38.886270    2404 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0318 13:09:38.886270    2404 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0318 13:09:38.886270    2404 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0318 13:09:38.886270    2404 ip.go:207] Found interface: {Index:9 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:93:df:b5 Flags:up|broadcast|multicast|running}
	I0318 13:09:38.889885    2404 ip.go:210] interface addr: fe80::572b:6b1d:9130:e88b/64
	I0318 13:09:38.889885    2404 ip.go:210] interface addr: 172.30.128.1/20
	I0318 13:09:38.901339    2404 ssh_runner.go:195] Run: grep 172.30.128.1	host.minikube.internal$ /etc/hosts
	I0318 13:09:38.907809    2404 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.30.128.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 13:09:38.928952    2404 kubeadm.go:877] updating cluster {Name:multinode-894400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.28.4 ClusterName:multinode-894400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.30.130.156 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.30.140.66 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.30.137.140 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingre
ss-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirr
or: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0318 13:09:38.929175    2404 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0318 13:09:38.937996    2404 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0318 13:09:38.962008    2404 command_runner.go:130] > kindest/kindnetd:v20240202-8f1494ea
	I0318 13:09:38.962492    2404 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.28.4
	I0318 13:09:38.962492    2404 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.28.4
	I0318 13:09:38.962492    2404 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.28.4
	I0318 13:09:38.962584    2404 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.28.4
	I0318 13:09:38.962584    2404 command_runner.go:130] > registry.k8s.io/etcd:3.5.9-0
	I0318 13:09:38.962614    2404 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.10.1
	I0318 13:09:38.962641    2404 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0318 13:09:38.962641    2404 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 13:09:38.962641    2404 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0318 13:09:38.962641    2404 docker.go:685] Got preloaded images: -- stdout --
	kindest/kindnetd:v20240202-8f1494ea
	registry.k8s.io/kube-apiserver:v1.28.4
	registry.k8s.io/kube-scheduler:v1.28.4
	registry.k8s.io/kube-controller-manager:v1.28.4
	registry.k8s.io/kube-proxy:v1.28.4
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0318 13:09:38.962641    2404 docker.go:615] Images already preloaded, skipping extraction
	I0318 13:09:38.972565    2404 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0318 13:09:38.995417    2404 command_runner.go:130] > kindest/kindnetd:v20240202-8f1494ea
	I0318 13:09:38.996332    2404 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.28.4
	I0318 13:09:38.996332    2404 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.28.4
	I0318 13:09:38.996332    2404 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.28.4
	I0318 13:09:38.996332    2404 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.28.4
	I0318 13:09:38.996332    2404 command_runner.go:130] > registry.k8s.io/etcd:3.5.9-0
	I0318 13:09:38.996332    2404 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.10.1
	I0318 13:09:38.996446    2404 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0318 13:09:38.996446    2404 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 13:09:38.996479    2404 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0318 13:09:38.996506    2404 docker.go:685] Got preloaded images: -- stdout --
	kindest/kindnetd:v20240202-8f1494ea
	registry.k8s.io/kube-apiserver:v1.28.4
	registry.k8s.io/kube-scheduler:v1.28.4
	registry.k8s.io/kube-proxy:v1.28.4
	registry.k8s.io/kube-controller-manager:v1.28.4
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0318 13:09:38.996506    2404 cache_images.go:84] Images are preloaded, skipping loading
	I0318 13:09:38.996506    2404 kubeadm.go:928] updating node { 172.30.130.156 8443 v1.28.4 docker true true} ...
	I0318 13:09:38.996506    2404 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-894400 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.30.130.156
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-894400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0318 13:09:39.006126    2404 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0318 13:09:39.036444    2404 command_runner.go:130] > cgroupfs
	I0318 13:09:39.036444    2404 cni.go:84] Creating CNI manager for ""
	I0318 13:09:39.036444    2404 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0318 13:09:39.036444    2404 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0318 13:09:39.037881    2404 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.30.130.156 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-894400 NodeName:multinode-894400 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.30.130.156"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.30.130.156 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPat
h:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0318 13:09:39.038146    2404 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.30.130.156
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-894400"
	  kubeletExtraArgs:
	    node-ip: 172.30.130.156
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.30.130.156"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
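The rendered config above is a four-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) that kubeadm splits on `---`. A minimal sketch that writes a skeleton of the same stream to a scratch file (the file name is illustrative) and lists each document's kind:

```shell
# Sketch: reproduce the document structure of minikube's generated kubeadm
# YAML and list the kind of each document in the stream.
cat > kubeadm-skeleton.yaml <<'EOF'
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
EOF
grep '^kind:' kubeadm-skeleton.yaml
```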
	I0318 13:09:39.049182    2404 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0318 13:09:39.066831    2404 command_runner.go:130] > kubeadm
	I0318 13:09:39.066831    2404 command_runner.go:130] > kubectl
	I0318 13:09:39.066831    2404 command_runner.go:130] > kubelet
	I0318 13:09:39.066831    2404 binaries.go:44] Found k8s binaries, skipping transfer
	I0318 13:09:39.081918    2404 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0318 13:09:39.097912    2404 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0318 13:09:39.129972    2404 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0318 13:09:39.156394    2404 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2164 bytes)
	I0318 13:09:39.194539    2404 ssh_runner.go:195] Run: grep 172.30.130.156	control-plane.minikube.internal$ /etc/hosts
	I0318 13:09:39.205168    2404 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.30.130.156	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
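The hosts-file update above uses a replace-then-append pattern that stays idempotent no matter how often it runs. A minimal sketch against a scratch file (`hosts.tmp` is a hypothetical stand-in for `/etc/hosts`; the IPs mirror the log):

```shell
# Sketch of minikube's idempotent hosts-entry update, run against a scratch
# file instead of /etc/hosts. Any stale entry for the hostname is dropped,
# then the current entry is appended.
printf '127.0.0.1\tlocalhost\n172.30.129.141\tcontrol-plane.minikube.internal\n' > hosts.tmp
{ grep -v $'\tcontrol-plane.minikube.internal$' hosts.tmp; \
  printf '172.30.130.156\tcontrol-plane.minikube.internal\n'; } > hosts.new
mv hosts.new hosts.tmp
cat hosts.tmp
```

Because the old entry is filtered out before the new one is appended, the file always ends with exactly one entry for the hostname.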
	I0318 13:09:39.234862    2404 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 13:09:39.407827    2404 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 13:09:39.434793    2404 certs.go:68] Setting up C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-894400 for IP: 172.30.130.156
	I0318 13:09:39.434793    2404 certs.go:194] generating shared ca certs ...
	I0318 13:09:39.434793    2404 certs.go:226] acquiring lock for ca certs: {Name:mk09ff4ada22228900e1815c250154c7d8d76854 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 13:09:39.435718    2404 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.key
	I0318 13:09:39.435718    2404 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.key
	I0318 13:09:39.436544    2404 certs.go:256] generating profile certs ...
	I0318 13:09:39.437155    2404 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-894400\client.key
	I0318 13:09:39.437437    2404 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-894400\apiserver.key.baaba412
	I0318 13:09:39.437598    2404 crypto.go:68] Generating cert C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-894400\apiserver.crt.baaba412 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.30.130.156]
	I0318 13:09:39.712914    2404 crypto.go:156] Writing cert to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-894400\apiserver.crt.baaba412 ...
	I0318 13:09:39.712914    2404 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-894400\apiserver.crt.baaba412: {Name:mk86007a66db8875a8e76aadb0d07e30bab7a6f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 13:09:39.715897    2404 crypto.go:164] Writing key to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-894400\apiserver.key.baaba412 ...
	I0318 13:09:39.715897    2404 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-894400\apiserver.key.baaba412: {Name:mkc3cdba84d6ccf012b0c63dc9d3bfe98ff83392 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 13:09:39.716272    2404 certs.go:381] copying C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-894400\apiserver.crt.baaba412 -> C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-894400\apiserver.crt
	I0318 13:09:39.729467    2404 certs.go:385] copying C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-894400\apiserver.key.baaba412 -> C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-894400\apiserver.key
	I0318 13:09:39.730466    2404 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-894400\proxy-client.key
	I0318 13:09:39.730466    2404 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0318 13:09:39.731241    2404 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0318 13:09:39.731241    2404 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0318 13:09:39.731241    2404 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0318 13:09:39.731241    2404 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-894400\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0318 13:09:39.731883    2404 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-894400\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0318 13:09:39.732155    2404 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-894400\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0318 13:09:39.732315    2404 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-894400\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0318 13:09:39.733110    2404 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\13424.pem (1338 bytes)
	W0318 13:09:39.733482    2404 certs.go:480] ignoring C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\13424_empty.pem, impossibly tiny 0 bytes
	I0318 13:09:39.733555    2404 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0318 13:09:39.733775    2404 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0318 13:09:39.734124    2404 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0318 13:09:39.734449    2404 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0318 13:09:39.735016    2404 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\134242.pem (1708 bytes)
	I0318 13:09:39.735016    2404 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\13424.pem -> /usr/share/ca-certificates/13424.pem
	I0318 13:09:39.735658    2404 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\134242.pem -> /usr/share/ca-certificates/134242.pem
	I0318 13:09:39.735658    2404 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0318 13:09:39.737329    2404 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0318 13:09:39.785980    2404 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0318 13:09:39.826494    2404 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0318 13:09:39.867125    2404 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0318 13:09:39.914213    2404 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-894400\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0318 13:09:39.956243    2404 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-894400\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0318 13:09:39.998092    2404 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-894400\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0318 13:09:40.038883    2404 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-894400\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0318 13:09:40.089954    2404 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\13424.pem --> /usr/share/ca-certificates/13424.pem (1338 bytes)
	I0318 13:09:40.132642    2404 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\134242.pem --> /usr/share/ca-certificates/134242.pem (1708 bytes)
	I0318 13:09:40.174629    2404 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0318 13:09:40.215023    2404 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0318 13:09:40.254395    2404 ssh_runner.go:195] Run: openssl version
	I0318 13:09:40.262410    2404 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0318 13:09:40.273120    2404 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13424.pem && ln -fs /usr/share/ca-certificates/13424.pem /etc/ssl/certs/13424.pem"
	I0318 13:09:40.304194    2404 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13424.pem
	I0318 13:09:40.311030    2404 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Mar 18 11:25 /usr/share/ca-certificates/13424.pem
	I0318 13:09:40.311030    2404 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 18 11:25 /usr/share/ca-certificates/13424.pem
	I0318 13:09:40.323636    2404 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13424.pem
	I0318 13:09:40.331568    2404 command_runner.go:130] > 51391683
	I0318 13:09:40.342995    2404 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13424.pem /etc/ssl/certs/51391683.0"
	I0318 13:09:40.371658    2404 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/134242.pem && ln -fs /usr/share/ca-certificates/134242.pem /etc/ssl/certs/134242.pem"
	I0318 13:09:40.401160    2404 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/134242.pem
	I0318 13:09:40.406834    2404 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Mar 18 11:25 /usr/share/ca-certificates/134242.pem
	I0318 13:09:40.407481    2404 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 18 11:25 /usr/share/ca-certificates/134242.pem
	I0318 13:09:40.418096    2404 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/134242.pem
	I0318 13:09:40.425794    2404 command_runner.go:130] > 3ec20f2e
	I0318 13:09:40.437248    2404 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/134242.pem /etc/ssl/certs/3ec20f2e.0"
	I0318 13:09:40.466974    2404 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0318 13:09:40.500473    2404 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0318 13:09:40.507176    2404 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Mar 18 11:07 /usr/share/ca-certificates/minikubeCA.pem
	I0318 13:09:40.507176    2404 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 18 11:07 /usr/share/ca-certificates/minikubeCA.pem
	I0318 13:09:40.517840    2404 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0318 13:09:40.526072    2404 command_runner.go:130] > b5213941
	I0318 13:09:40.537758    2404 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
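The hash-and-symlink sequences above follow OpenSSL's CA directory convention: `openssl x509 -hash` prints the certificate's subject-name hash, and the cert is linked as `<hash>.0` so TLS libraries can locate it by subject. A sketch with a throwaway self-signed cert (file names are illustrative):

```shell
# Generate a throwaway self-signed cert, then link it under its subject hash,
# mirroring the "openssl x509 -hash" + "ln -fs" pair in the log above.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=demoCA" -keyout demo.key -out demo.pem 2>/dev/null
hash=$(openssl x509 -hash -noout -in demo.pem)
ln -fs demo.pem "${hash}.0"
openssl x509 -noout -subject -in "${hash}.0"
```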
	I0318 13:09:40.568154    2404 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0318 13:09:40.575081    2404 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0318 13:09:40.575173    2404 command_runner.go:130] >   Size: 1164      	Blocks: 8          IO Block: 4096   regular file
	I0318 13:09:40.575173    2404 command_runner.go:130] > Device: 8,1	Inode: 6289189     Links: 1
	I0318 13:09:40.575173    2404 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0318 13:09:40.575173    2404 command_runner.go:130] > Access: 2024-03-18 12:47:17.373043899 +0000
	I0318 13:09:40.575173    2404 command_runner.go:130] > Modify: 2024-03-18 12:47:17.373043899 +0000
	I0318 13:09:40.575173    2404 command_runner.go:130] > Change: 2024-03-18 12:47:17.373043899 +0000
	I0318 13:09:40.575294    2404 command_runner.go:130] >  Birth: 2024-03-18 12:47:17.373043899 +0000
	I0318 13:09:40.589504    2404 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0318 13:09:40.597947    2404 command_runner.go:130] > Certificate will not expire
	I0318 13:09:40.608725    2404 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0318 13:09:40.616313    2404 command_runner.go:130] > Certificate will not expire
	I0318 13:09:40.627502    2404 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0318 13:09:40.638166    2404 command_runner.go:130] > Certificate will not expire
	I0318 13:09:40.649969    2404 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0318 13:09:40.658562    2404 command_runner.go:130] > Certificate will not expire
	I0318 13:09:40.668689    2404 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0318 13:09:40.676142    2404 command_runner.go:130] > Certificate will not expire
	I0318 13:09:40.686565    2404 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0318 13:09:40.694068    2404 command_runner.go:130] > Certificate will not expire
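Each `-checkend 86400` call above exits non-zero if the certificate expires within the next 24 hours; an exit status of 0 (with the "Certificate will not expire" message) is how validity is confirmed without parsing dates. A standalone sketch (throwaway cert, illustrative names):

```shell
# A cert valid for 2 days passes a 24h (-checkend 86400) expiry check:
# the command exits 0 and reports that the certificate will not expire.
openssl req -x509 -newkey rsa:2048 -nodes -days 2 \
  -subj "/CN=expiry-demo" -keyout exp.key -out exp.pem 2>/dev/null
openssl x509 -noout -in exp.pem -checkend 86400
```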
	I0318 13:09:40.694457    2404 kubeadm.go:391] StartCluster: {Name:multinode-894400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-894400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.30.130.156 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.30.140.66 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.30.137.140 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 13:09:40.702669    2404 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0318 13:09:40.736605    2404 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0318 13:09:40.753290    2404 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I0318 13:09:40.753391    2404 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I0318 13:09:40.753391    2404 command_runner.go:130] > /var/lib/minikube/etcd:
	I0318 13:09:40.753391    2404 command_runner.go:130] > member
	W0318 13:09:40.753391    2404 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0318 13:09:40.753391    2404 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0318 13:09:40.753391    2404 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0318 13:09:40.765410    2404 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0318 13:09:40.781543    2404 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0318 13:09:40.782812    2404 kubeconfig.go:47] verify endpoint returned: get endpoint: "multinode-894400" does not appear in C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	I0318 13:09:40.783580    2404 kubeconfig.go:62] C:\Users\jenkins.minikube3\minikube-integration\kubeconfig needs updating (will repair): [kubeconfig missing "multinode-894400" cluster setting kubeconfig missing "multinode-894400" context setting]
	I0318 13:09:40.784322    2404 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\kubeconfig: {Name:mk966a7640504e03827322930a51a762b5508893 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 13:09:40.797036    2404 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	I0318 13:09:40.798071    2404 kapi.go:59] client config for multinode-894400: &rest.Config{Host:"https://172.30.130.156:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\multinode-894400/client.crt", KeyFile:"C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\multinode-894400/client.key", CAFile:"C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1e8b2e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0318 13:09:40.799221    2404 cert_rotation.go:137] Starting client certificate rotation controller
	I0318 13:09:40.811770    2404 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0318 13:09:40.828684    2404 command_runner.go:130] > --- /var/tmp/minikube/kubeadm.yaml
	I0318 13:09:40.828863    2404 command_runner.go:130] > +++ /var/tmp/minikube/kubeadm.yaml.new
	I0318 13:09:40.828863    2404 command_runner.go:130] > @@ -1,7 +1,7 @@
	I0318 13:09:40.828863    2404 command_runner.go:130] >  apiVersion: kubeadm.k8s.io/v1beta3
	I0318 13:09:40.828935    2404 command_runner.go:130] >  kind: InitConfiguration
	I0318 13:09:40.828935    2404 command_runner.go:130] >  localAPIEndpoint:
	I0318 13:09:40.828935    2404 command_runner.go:130] > -  advertiseAddress: 172.30.129.141
	I0318 13:09:40.828935    2404 command_runner.go:130] > +  advertiseAddress: 172.30.130.156
	I0318 13:09:40.828935    2404 command_runner.go:130] >    bindPort: 8443
	I0318 13:09:40.828935    2404 command_runner.go:130] >  bootstrapTokens:
	I0318 13:09:40.828935    2404 command_runner.go:130] >    - groups:
	I0318 13:09:40.828935    2404 command_runner.go:130] > @@ -14,13 +14,13 @@
	I0318 13:09:40.829044    2404 command_runner.go:130] >    criSocket: unix:///var/run/cri-dockerd.sock
	I0318 13:09:40.829044    2404 command_runner.go:130] >    name: "multinode-894400"
	I0318 13:09:40.829044    2404 command_runner.go:130] >    kubeletExtraArgs:
	I0318 13:09:40.829101    2404 command_runner.go:130] > -    node-ip: 172.30.129.141
	I0318 13:09:40.829101    2404 command_runner.go:130] > +    node-ip: 172.30.130.156
	I0318 13:09:40.829101    2404 command_runner.go:130] >    taints: []
	I0318 13:09:40.829101    2404 command_runner.go:130] >  ---
	I0318 13:09:40.829151    2404 command_runner.go:130] >  apiVersion: kubeadm.k8s.io/v1beta3
	I0318 13:09:40.829151    2404 command_runner.go:130] >  kind: ClusterConfiguration
	I0318 13:09:40.829186    2404 command_runner.go:130] >  apiServer:
	I0318 13:09:40.829186    2404 command_runner.go:130] > -  certSANs: ["127.0.0.1", "localhost", "172.30.129.141"]
	I0318 13:09:40.829186    2404 command_runner.go:130] > +  certSANs: ["127.0.0.1", "localhost", "172.30.130.156"]
	I0318 13:09:40.829233    2404 command_runner.go:130] >    extraArgs:
	I0318 13:09:40.829267    2404 command_runner.go:130] >      enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	I0318 13:09:40.829314    2404 command_runner.go:130] >  controllerManager:
	I0318 13:09:40.829348    2404 kubeadm.go:634] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -1,7 +1,7 @@
	 apiVersion: kubeadm.k8s.io/v1beta3
	 kind: InitConfiguration
	 localAPIEndpoint:
	-  advertiseAddress: 172.30.129.141
	+  advertiseAddress: 172.30.130.156
	   bindPort: 8443
	 bootstrapTokens:
	   - groups:
	@@ -14,13 +14,13 @@
	   criSocket: unix:///var/run/cri-dockerd.sock
	   name: "multinode-894400"
	   kubeletExtraArgs:
	-    node-ip: 172.30.129.141
	+    node-ip: 172.30.130.156
	   taints: []
	 ---
	 apiVersion: kubeadm.k8s.io/v1beta3
	 kind: ClusterConfiguration
	 apiServer:
	-  certSANs: ["127.0.0.1", "localhost", "172.30.129.141"]
	+  certSANs: ["127.0.0.1", "localhost", "172.30.130.156"]
	   extraArgs:
	     enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	 controllerManager:
	
	-- /stdout --
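The drift check above hinges on the exit status of `diff -u`: 0 means the deployed kubeadm.yaml already matches the freshly rendered one, non-zero means the cluster must be reconfigured from the new file. A standalone sketch with scratch files (names illustrative):

```shell
# Sketch of config-drift detection: compare deployed vs. freshly rendered
# config; a non-zero diff exit status triggers reconfiguration.
printf 'advertiseAddress: 172.30.129.141\n' > kubeadm.yaml.old
printf 'advertiseAddress: 172.30.130.156\n' > kubeadm.yaml.new
if diff -u kubeadm.yaml.old kubeadm.yaml.new > drift.patch; then
  echo "no drift"
else
  echo "drift detected, reconfiguring"
  cp kubeadm.yaml.new kubeadm.yaml.old   # adopt the new config
fi
```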
	I0318 13:09:40.829379    2404 kubeadm.go:1154] stopping kube-system containers ...
	I0318 13:09:40.837879    2404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0318 13:09:40.865233    2404 command_runner.go:130] > 693a64f7472f
	I0318 13:09:40.865265    2404 command_runner.go:130] > a2c499223090
	I0318 13:09:40.865265    2404 command_runner.go:130] > 265b39e386cf
	I0318 13:09:40.865265    2404 command_runner.go:130] > d001e299e996
	I0318 13:09:40.865315    2404 command_runner.go:130] > c4d7018ad23a
	I0318 13:09:40.865315    2404 command_runner.go:130] > 9335855aab63
	I0318 13:09:40.865348    2404 command_runner.go:130] > a47b1fb60692
	I0318 13:09:40.865348    2404 command_runner.go:130] > 60e9cd749c8f
	I0318 13:09:40.865348    2404 command_runner.go:130] > e4d42739ce0e
	I0318 13:09:40.865348    2404 command_runner.go:130] > 7aa5cf4ec378
	I0318 13:09:40.865381    2404 command_runner.go:130] > c51f768a2f64
	I0318 13:09:40.865381    2404 command_runner.go:130] > 56d1819beb10
	I0318 13:09:40.865381    2404 command_runner.go:130] > acffce2e7384
	I0318 13:09:40.865433    2404 command_runner.go:130] > 220884cbf1f5
	I0318 13:09:40.865433    2404 command_runner.go:130] > 82710777e700
	I0318 13:09:40.865433    2404 command_runner.go:130] > 5485f509825d
	I0318 13:09:40.865489    2404 docker.go:483] Stopping containers: [693a64f7472f a2c499223090 265b39e386cf d001e299e996 c4d7018ad23a 9335855aab63 a47b1fb60692 60e9cd749c8f e4d42739ce0e 7aa5cf4ec378 c51f768a2f64 56d1819beb10 acffce2e7384 220884cbf1f5 82710777e700 5485f509825d]
	I0318 13:09:40.874715    2404 ssh_runner.go:195] Run: docker stop 693a64f7472f a2c499223090 265b39e386cf d001e299e996 c4d7018ad23a 9335855aab63 a47b1fb60692 60e9cd749c8f e4d42739ce0e 7aa5cf4ec378 c51f768a2f64 56d1819beb10 acffce2e7384 220884cbf1f5 82710777e700 5485f509825d
	I0318 13:09:40.906786    2404 command_runner.go:130] > 693a64f7472f
	I0318 13:09:40.906843    2404 command_runner.go:130] > a2c499223090
	I0318 13:09:40.906843    2404 command_runner.go:130] > 265b39e386cf
	I0318 13:09:40.906843    2404 command_runner.go:130] > d001e299e996
	I0318 13:09:40.906843    2404 command_runner.go:130] > c4d7018ad23a
	I0318 13:09:40.906843    2404 command_runner.go:130] > 9335855aab63
	I0318 13:09:40.906843    2404 command_runner.go:130] > a47b1fb60692
	I0318 13:09:40.906843    2404 command_runner.go:130] > 60e9cd749c8f
	I0318 13:09:40.906843    2404 command_runner.go:130] > e4d42739ce0e
	I0318 13:09:40.906843    2404 command_runner.go:130] > 7aa5cf4ec378
	I0318 13:09:40.906843    2404 command_runner.go:130] > c51f768a2f64
	I0318 13:09:40.906962    2404 command_runner.go:130] > 56d1819beb10
	I0318 13:09:40.906962    2404 command_runner.go:130] > acffce2e7384
	I0318 13:09:40.906962    2404 command_runner.go:130] > 220884cbf1f5
	I0318 13:09:40.906962    2404 command_runner.go:130] > 82710777e700
	I0318 13:09:40.906962    2404 command_runner.go:130] > 5485f509825d
	I0318 13:09:40.918470    2404 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0318 13:09:40.954761    2404 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 13:09:40.969765    2404 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0318 13:09:40.969765    2404 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0318 13:09:40.969765    2404 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0318 13:09:40.970064    2404 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 13:09:40.970358    2404 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 13:09:40.970387    2404 kubeadm.go:156] found existing configuration files:
	
	I0318 13:09:40.982037    2404 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0318 13:09:40.996143    2404 command_runner.go:130] ! grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 13:09:40.996674    2404 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 13:09:41.008077    2404 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 13:09:41.036830    2404 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0318 13:09:41.052186    2404 command_runner.go:130] ! grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 13:09:41.052329    2404 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 13:09:41.063200    2404 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 13:09:41.090707    2404 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0318 13:09:41.104901    2404 command_runner.go:130] ! grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 13:09:41.105096    2404 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 13:09:41.119051    2404 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 13:09:41.145428    2404 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0318 13:09:41.159784    2404 command_runner.go:130] ! grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 13:09:41.160275    2404 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 13:09:41.173163    2404 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
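The cleanup sequence above runs one `grep` per kubeconfig for the control-plane URL and, on any failure (exit status 2 here means the file itself is missing), removes the file so kubeadm regenerates it. A minimal sketch of that decision, assuming a `grepOK` map standing in for the per-file grep results (the real code in `kubeadm.go` shells out via `ssh_runner`):

```go
package main

import "fmt"

// kubeconfigFiles mirrors the four files checked in the log above.
var kubeconfigFiles = []string{
	"/etc/kubernetes/admin.conf",
	"/etc/kubernetes/kubelet.conf",
	"/etc/kubernetes/controller-manager.conf",
	"/etc/kubernetes/scheduler.conf",
}

// staleFiles returns the files queued for removal: those whose grep for the
// control-plane endpoint did not succeed, whether because the endpoint string
// was absent or (as in this log) the file itself did not exist.
func staleFiles(grepOK map[string]bool) []string {
	var stale []string
	for _, f := range kubeconfigFiles {
		if !grepOK[f] {
			stale = append(stale, f)
		}
	}
	return stale
}

func main() {
	// In the log every grep failed, so every file gets a `sudo rm -f`.
	for _, f := range staleFiles(map[string]bool{}) {
		fmt.Println("sudo rm -f " + f)
	}
}
```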
	I0318 13:09:41.199314    2404 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0318 13:09:41.216043    2404 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 13:09:41.587579    2404 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0318 13:09:41.587579    2404 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0318 13:09:41.587697    2404 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0318 13:09:41.587697    2404 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0318 13:09:41.587697    2404 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I0318 13:09:41.587697    2404 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I0318 13:09:41.587697    2404 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I0318 13:09:41.587697    2404 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I0318 13:09:41.587769    2404 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I0318 13:09:41.587802    2404 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0318 13:09:41.587802    2404 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0318 13:09:41.587830    2404 command_runner.go:130] > [certs] Using the existing "sa" key
	I0318 13:09:41.587864    2404 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 13:09:42.399989    2404 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0318 13:09:42.400066    2404 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0318 13:09:42.400066    2404 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0318 13:09:42.400066    2404 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0318 13:09:42.400066    2404 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0318 13:09:42.400162    2404 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0318 13:09:42.491937    2404 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0318 13:09:42.495743    2404 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0318 13:09:42.495872    2404 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0318 13:09:42.691068    2404 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 13:09:42.776989    2404 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0318 13:09:42.776989    2404 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0318 13:09:42.776989    2404 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0318 13:09:42.776989    2404 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0318 13:09:42.778174    2404 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0318 13:09:42.894438    2404 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0318 13:09:42.894505    2404 api_server.go:52] waiting for apiserver process to appear ...
	I0318 13:09:42.907141    2404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:09:43.407151    2404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:09:43.919456    2404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:09:44.410020    2404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:09:44.917562    2404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:09:44.940970    2404 command_runner.go:130] > 1904
	I0318 13:09:44.940970    2404 api_server.go:72] duration metric: took 2.0464505s to wait for apiserver process to appear ...
	I0318 13:09:44.940970    2404 api_server.go:88] waiting for apiserver healthz status ...
	I0318 13:09:44.940970    2404 api_server.go:253] Checking apiserver healthz at https://172.30.130.156:8443/healthz ...
	I0318 13:09:48.315114    2404 api_server.go:279] https://172.30.130.156:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0318 13:09:48.316138    2404 api_server.go:103] status: https://172.30.130.156:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0318 13:09:48.316138    2404 api_server.go:253] Checking apiserver healthz at https://172.30.130.156:8443/healthz ...
	I0318 13:09:48.379129    2404 api_server.go:279] https://172.30.130.156:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0318 13:09:48.379129    2404 api_server.go:103] status: https://172.30.130.156:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0318 13:09:48.441960    2404 api_server.go:253] Checking apiserver healthz at https://172.30.130.156:8443/healthz ...
	I0318 13:09:48.453249    2404 api_server.go:279] https://172.30.130.156:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0318 13:09:48.453481    2404 api_server.go:103] status: https://172.30.130.156:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0318 13:09:48.946025    2404 api_server.go:253] Checking apiserver healthz at https://172.30.130.156:8443/healthz ...
	I0318 13:09:48.961136    2404 api_server.go:279] https://172.30.130.156:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0318 13:09:48.961270    2404 api_server.go:103] status: https://172.30.130.156:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0318 13:09:49.452368    2404 api_server.go:253] Checking apiserver healthz at https://172.30.130.156:8443/healthz ...
	I0318 13:09:49.468058    2404 api_server.go:279] https://172.30.130.156:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0318 13:09:49.468058    2404 api_server.go:103] status: https://172.30.130.156:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0318 13:09:49.948754    2404 api_server.go:253] Checking apiserver healthz at https://172.30.130.156:8443/healthz ...
	I0318 13:09:49.957404    2404 api_server.go:279] https://172.30.130.156:8443/healthz returned 200:
	ok
	I0318 13:09:49.958538    2404 round_trippers.go:463] GET https://172.30.130.156:8443/version
	I0318 13:09:49.958538    2404 round_trippers.go:469] Request Headers:
	I0318 13:09:49.958538    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:09:49.958538    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:09:49.971073    2404 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0318 13:09:49.971538    2404 round_trippers.go:577] Response Headers:
	I0318 13:09:49.971538    2404 round_trippers.go:580]     Audit-Id: 909294db-d475-46ea-ac0b-105fe01fe502
	I0318 13:09:49.971538    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:09:49.971538    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:09:49.971538    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:09:49.971538    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:09:49.971538    2404 round_trippers.go:580]     Content-Length: 264
	I0318 13:09:49.971637    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:09:49 GMT
	I0318 13:09:49.971637    2404 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.4",
	  "gitCommit": "bae2c62678db2b5053817bc97181fcc2e8388103",
	  "gitTreeState": "clean",
	  "buildDate": "2023-11-15T16:48:54Z",
	  "goVersion": "go1.20.11",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0318 13:09:49.971810    2404 api_server.go:141] control plane version: v1.28.4
	I0318 13:09:49.971889    2404 api_server.go:131] duration metric: took 5.0308812s to wait for apiserver health ...
	I0318 13:09:49.971889    2404 cni.go:84] Creating CNI manager for ""
	I0318 13:09:49.971889    2404 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0318 13:09:49.974549    2404 out.go:177] * Configuring CNI (Container Networking Interface) ...
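The decision logged by `cni.go:136` — more than one node detected, so kindnet is recommended — can be sketched as below. The multi-node branch matches the log; the single-node fallback name is a placeholder assumption, not minikube's actual default:

```go
package main

import "fmt"

// chooseCNI is a hedged sketch of cni.go's recommendation: clusters with more
// than one node get kindnet so pods can route across nodes. The "bridge"
// default for single-node clusters is an assumption for illustration only.
func chooseCNI(nodeCount int) string {
	if nodeCount > 1 {
		return "kindnet"
	}
	return "bridge"
}

func main() {
	fmt.Println(chooseCNI(3)) // kindnet, as in the log's 3-node cluster
}
```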
	I0318 13:09:49.987939    2404 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0318 13:09:49.998819    2404 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0318 13:09:49.998819    2404 command_runner.go:130] >   Size: 2694104   	Blocks: 5264       IO Block: 4096   regular file
	I0318 13:09:49.998819    2404 command_runner.go:130] > Device: 0,17	Inode: 3497        Links: 1
	I0318 13:09:49.999102    2404 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0318 13:09:49.999102    2404 command_runner.go:130] > Access: 2024-03-18 13:08:20.432721600 +0000
	I0318 13:09:49.999102    2404 command_runner.go:130] > Modify: 2024-03-15 22:00:10.000000000 +0000
	I0318 13:09:49.999102    2404 command_runner.go:130] > Change: 2024-03-18 13:08:11.982000000 +0000
	I0318 13:09:49.999175    2404 command_runner.go:130] >  Birth: -
	I0318 13:09:49.999805    2404 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0318 13:09:49.999838    2404 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0318 13:09:50.075127    2404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0318 13:09:51.626302    2404 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0318 13:09:51.626557    2404 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0318 13:09:51.626557    2404 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0318 13:09:51.626557    2404 command_runner.go:130] > daemonset.apps/kindnet configured
	I0318 13:09:51.626557    2404 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.5514184s)
	I0318 13:09:51.626669    2404 system_pods.go:43] waiting for kube-system pods to appear ...
	I0318 13:09:51.626821    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/namespaces/kube-system/pods
	I0318 13:09:51.626821    2404 round_trippers.go:469] Request Headers:
	I0318 13:09:51.626945    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:09:51.626945    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:09:51.636114    2404 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0318 13:09:51.636114    2404 round_trippers.go:577] Response Headers:
	I0318 13:09:51.636114    2404 round_trippers.go:580]     Audit-Id: 14560811-bdec-495b-b52d-00404611f8d9
	I0318 13:09:51.636114    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:09:51.636114    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:09:51.636114    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:09:51.636114    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:09:51.636114    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:09:51 GMT
	I0318 13:09:51.638378    2404 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1801"},"items":[{"metadata":{"name":"coredns-5dd5756b68-456tm","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"1a018c55-846b-4dc2-992c-dc8fd82a6c67","resourceVersion":"1770","creationTimestamp":"2024-03-18T12:47:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"24e9a67e-3fda-47e7-8dcb-794b1920d0f1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:47:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"24e9a67e-3fda-47e7-8dcb-794b1920d0f1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 83629 chars]
	I0318 13:09:51.644560    2404 system_pods.go:59] 12 kube-system pods found
	I0318 13:09:51.644560    2404 system_pods.go:61] "coredns-5dd5756b68-456tm" [1a018c55-846b-4dc2-992c-dc8fd82a6c67] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0318 13:09:51.644560    2404 system_pods.go:61] "etcd-multinode-894400" [d4c040b9-a604-4a0d-80ee-7436541af60c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0318 13:09:51.644560    2404 system_pods.go:61] "kindnet-hhsxh" [0161d239-2d85-4246-b2fa-6c7374f2ecd6] Running
	I0318 13:09:51.644560    2404 system_pods.go:61] "kindnet-k5lpg" [c5e4099b-0611-4ebd-a7a5-ecdbeb168c5b] Running
	I0318 13:09:51.644560    2404 system_pods.go:61] "kindnet-zv9tv" [c4d70517-d7fb-4344-b2a4-20e40c13ab53] Running
	I0318 13:09:51.644560    2404 system_pods.go:61] "kube-apiserver-multinode-894400" [46152b8e-0bda-427e-a1ad-c79506b56763] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0318 13:09:51.644560    2404 system_pods.go:61] "kube-controller-manager-multinode-894400" [4ad5fc15-53ba-4ebb-9a63-b8572cd9c834] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0318 13:09:51.644560    2404 system_pods.go:61] "kube-proxy-745w9" [d385fe06-f516-440d-b9ed-37c2d4a81050] Running
	I0318 13:09:51.644560    2404 system_pods.go:61] "kube-proxy-8bdmn" [5c266b8a-9665-4365-93c6-2b5f1699d3ef] Running
	I0318 13:09:51.644560    2404 system_pods.go:61] "kube-proxy-mc5tv" [0afe25f8-cbd6-412b-8698-7b547d1d49ca] Running
	I0318 13:09:51.644560    2404 system_pods.go:61] "kube-scheduler-multinode-894400" [f47703ce-5a82-466e-ac8e-ef6b8cc07e6c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0318 13:09:51.644560    2404 system_pods.go:61] "storage-provisioner" [219bafbc-d807-44cf-9927-e4957f36ad70] Running
	I0318 13:09:51.645164    2404 system_pods.go:74] duration metric: took 18.4401ms to wait for pod list to return data ...
	I0318 13:09:51.645206    2404 node_conditions.go:102] verifying NodePressure condition ...
	I0318 13:09:51.645324    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes
	I0318 13:09:51.645324    2404 round_trippers.go:469] Request Headers:
	I0318 13:09:51.645324    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:09:51.645324    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:09:51.651156    2404 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 13:09:51.651156    2404 round_trippers.go:577] Response Headers:
	I0318 13:09:51.651156    2404 round_trippers.go:580]     Audit-Id: 3720ddc5-c5a7-4693-b0c4-4b7816c55ad5
	I0318 13:09:51.651156    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:09:51.651156    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:09:51.651156    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:09:51.651156    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:09:51.651156    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:09:51 GMT
	I0318 13:09:51.651886    2404 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1801"},"items":[{"metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1721","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFi
elds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 15629 chars]
	I0318 13:09:51.653418    2404 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0318 13:09:51.653418    2404 node_conditions.go:123] node cpu capacity is 2
	I0318 13:09:51.653418    2404 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0318 13:09:51.653418    2404 node_conditions.go:123] node cpu capacity is 2
	I0318 13:09:51.653418    2404 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0318 13:09:51.653418    2404 node_conditions.go:123] node cpu capacity is 2
	I0318 13:09:51.653418    2404 node_conditions.go:105] duration metric: took 8.2126ms to run NodePressure ...
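The NodePressure verification above reads each node's `status.capacity` out of the NodeList response (CPU "2", ephemeral storage "17734596Ki" for all three nodes). A minimal sketch of that extraction, using a trimmed-down stand-in for one NodeList item — the real response carries far more fields than this struct declares:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// node models only the slice of the NodeList item that node_conditions.go
// inspects: status.capacity, whose values arrive as resource strings.
type node struct {
	Status struct {
		Capacity map[string]string `json:"capacity"`
	} `json:"status"`
}

// capacityOf extracts the capacity map from one serialized node object.
func capacityOf(raw []byte) (map[string]string, error) {
	var n node
	if err := json.Unmarshal(raw, &n); err != nil {
		return nil, err
	}
	return n.Status.Capacity, nil
}

func main() {
	// Values taken from the log; everything else in the real response is omitted.
	raw := []byte(`{"status":{"capacity":{"cpu":"2","ephemeral-storage":"17734596Ki"}}}`)
	cap, err := capacityOf(raw)
	if err != nil {
		panic(err)
	}
	fmt.Println("node cpu capacity is", cap["cpu"])
	fmt.Println("node storage ephemeral capacity is", cap["ephemeral-storage"])
}
```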
	I0318 13:09:51.653418    2404 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 13:09:51.953031    2404 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0318 13:09:51.953084    2404 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0318 13:09:51.953084    2404 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0318 13:09:51.953351    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/namespaces/kube-system/pods?labelSelector=tier%3Dcontrol-plane
	I0318 13:09:51.953351    2404 round_trippers.go:469] Request Headers:
	I0318 13:09:51.953385    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:09:51.953385    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:09:51.962942    2404 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0318 13:09:51.962942    2404 round_trippers.go:577] Response Headers:
	I0318 13:09:51.963346    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:09:51.963346    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:09:51 GMT
	I0318 13:09:51.963346    2404 round_trippers.go:580]     Audit-Id: 10c97923-7e05-4a16-ad4d-a9fd9e82f478
	I0318 13:09:51.963346    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:09:51.963346    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:09:51.963346    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:09:51.964081    2404 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1803"},"items":[{"metadata":{"name":"etcd-multinode-894400","namespace":"kube-system","uid":"d4c040b9-a604-4a0d-80ee-7436541af60c","resourceVersion":"1778","creationTimestamp":"2024-03-18T13:09:49Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.30.130.156:2379","kubernetes.io/config.hash":"743a549b698f93b8586a236f83c90556","kubernetes.io/config.mirror":"743a549b698f93b8586a236f83c90556","kubernetes.io/config.seen":"2024-03-18T13:09:42.924670260Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T13:09:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotati
ons":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f [truncated 29377 chars]
	I0318 13:09:51.965066    2404 kubeadm.go:733] kubelet initialised
	I0318 13:09:51.965066    2404 kubeadm.go:734] duration metric: took 11.9821ms waiting for restarted kubelet to initialise ...
	I0318 13:09:51.965066    2404 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 13:09:51.965636    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/namespaces/kube-system/pods
	I0318 13:09:51.965636    2404 round_trippers.go:469] Request Headers:
	I0318 13:09:51.965636    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:09:51.965636    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:09:51.979595    2404 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0318 13:09:51.979595    2404 round_trippers.go:577] Response Headers:
	I0318 13:09:51.979595    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:09:51.980105    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:09:51.980105    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:09:51.980105    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:09:51 GMT
	I0318 13:09:51.980105    2404 round_trippers.go:580]     Audit-Id: b5b4f7cb-3aff-429d-938f-c784e7c38705
	I0318 13:09:51.980105    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:09:51.982766    2404 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1803"},"items":[{"metadata":{"name":"coredns-5dd5756b68-456tm","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"1a018c55-846b-4dc2-992c-dc8fd82a6c67","resourceVersion":"1770","creationTimestamp":"2024-03-18T12:47:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"24e9a67e-3fda-47e7-8dcb-794b1920d0f1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:47:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"24e9a67e-3fda-47e7-8dcb-794b1920d0f1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 83629 chars]
	I0318 13:09:51.986090    2404 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-456tm" in "kube-system" namespace to be "Ready" ...
	I0318 13:09:51.986261    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-456tm
	I0318 13:09:51.986357    2404 round_trippers.go:469] Request Headers:
	I0318 13:09:51.986357    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:09:51.986357    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:09:51.989285    2404 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 13:09:51.989285    2404 round_trippers.go:577] Response Headers:
	I0318 13:09:51.989285    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:09:51.989285    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:09:51 GMT
	I0318 13:09:51.989500    2404 round_trippers.go:580]     Audit-Id: 36e7a0ea-ab36-41bd-b5e4-40ebd0d17c9d
	I0318 13:09:51.989500    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:09:51.989500    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:09:51.989500    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:09:51.989569    2404 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-456tm","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"1a018c55-846b-4dc2-992c-dc8fd82a6c67","resourceVersion":"1770","creationTimestamp":"2024-03-18T12:47:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"24e9a67e-3fda-47e7-8dcb-794b1920d0f1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:47:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"24e9a67e-3fda-47e7-8dcb-794b1920d0f1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 13:09:51.990252    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:09:51.990294    2404 round_trippers.go:469] Request Headers:
	I0318 13:09:51.990294    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:09:51.990294    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:09:51.995108    2404 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:09:51.995108    2404 round_trippers.go:577] Response Headers:
	I0318 13:09:51.995108    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:09:51.995108    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:09:51.995108    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:09:51 GMT
	I0318 13:09:51.995108    2404 round_trippers.go:580]     Audit-Id: fedfcf81-2f68-44d4-a382-43a607a15370
	I0318 13:09:51.995108    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:09:51.995108    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:09:51.995108    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1721","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0318 13:09:51.995781    2404 pod_ready.go:97] node "multinode-894400" hosting pod "coredns-5dd5756b68-456tm" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-894400" has status "Ready":"False"
	I0318 13:09:51.995781    2404 pod_ready.go:81] duration metric: took 9.627ms for pod "coredns-5dd5756b68-456tm" in "kube-system" namespace to be "Ready" ...
	E0318 13:09:51.995781    2404 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-894400" hosting pod "coredns-5dd5756b68-456tm" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-894400" has status "Ready":"False"
	I0318 13:09:51.995781    2404 pod_ready.go:78] waiting up to 4m0s for pod "etcd-multinode-894400" in "kube-system" namespace to be "Ready" ...
	I0318 13:09:51.995781    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-894400
	I0318 13:09:51.995781    2404 round_trippers.go:469] Request Headers:
	I0318 13:09:51.995781    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:09:51.995781    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:09:51.999185    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:09:51.999185    2404 round_trippers.go:577] Response Headers:
	I0318 13:09:51.999185    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:09:51.999185    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:09:51.999185    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:09:52 GMT
	I0318 13:09:51.999185    2404 round_trippers.go:580]     Audit-Id: 60f45dc3-1276-4380-bfec-33039f4ee137
	I0318 13:09:51.999185    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:09:51.999185    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:09:51.999185    2404 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-894400","namespace":"kube-system","uid":"d4c040b9-a604-4a0d-80ee-7436541af60c","resourceVersion":"1778","creationTimestamp":"2024-03-18T13:09:49Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.30.130.156:2379","kubernetes.io/config.hash":"743a549b698f93b8586a236f83c90556","kubernetes.io/config.mirror":"743a549b698f93b8586a236f83c90556","kubernetes.io/config.seen":"2024-03-18T13:09:42.924670260Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T13:09:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise
-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 6097 chars]
	I0318 13:09:52.000209    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:09:52.000280    2404 round_trippers.go:469] Request Headers:
	I0318 13:09:52.000280    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:09:52.000280    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:09:52.002530    2404 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 13:09:52.002530    2404 round_trippers.go:577] Response Headers:
	I0318 13:09:52.002530    2404 round_trippers.go:580]     Audit-Id: 169a4677-aa49-446f-be23-83eb4a404cb3
	I0318 13:09:52.002530    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:09:52.002530    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:09:52.002530    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:09:52.002530    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:09:52.002530    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:09:52 GMT
	I0318 13:09:52.002530    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1721","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0318 13:09:52.003544    2404 pod_ready.go:97] node "multinode-894400" hosting pod "etcd-multinode-894400" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-894400" has status "Ready":"False"
	I0318 13:09:52.003579    2404 pod_ready.go:81] duration metric: took 7.798ms for pod "etcd-multinode-894400" in "kube-system" namespace to be "Ready" ...
	E0318 13:09:52.003579    2404 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-894400" hosting pod "etcd-multinode-894400" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-894400" has status "Ready":"False"
	I0318 13:09:52.003579    2404 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-multinode-894400" in "kube-system" namespace to be "Ready" ...
	I0318 13:09:52.003579    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-894400
	I0318 13:09:52.003579    2404 round_trippers.go:469] Request Headers:
	I0318 13:09:52.003579    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:09:52.003579    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:09:52.006211    2404 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 13:09:52.006211    2404 round_trippers.go:577] Response Headers:
	I0318 13:09:52.006211    2404 round_trippers.go:580]     Audit-Id: efc7c5ce-bcac-4590-a713-06f4c48aeb81
	I0318 13:09:52.006211    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:09:52.006211    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:09:52.006211    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:09:52.006533    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:09:52.006533    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:09:52 GMT
	I0318 13:09:52.006661    2404 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-894400","namespace":"kube-system","uid":"46152b8e-0bda-427e-a1ad-c79506b56763","resourceVersion":"1775","creationTimestamp":"2024-03-18T13:09:49Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.30.130.156:8443","kubernetes.io/config.hash":"6096c2227c4230453f65f86ebdcd0d95","kubernetes.io/config.mirror":"6096c2227c4230453f65f86ebdcd0d95","kubernetes.io/config.seen":"2024-03-18T13:09:42.869643374Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T13:09:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.k
ubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernete [truncated 7653 chars]
	I0318 13:09:52.007342    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:09:52.007342    2404 round_trippers.go:469] Request Headers:
	I0318 13:09:52.007342    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:09:52.007342    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:09:52.009955    2404 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 13:09:52.009955    2404 round_trippers.go:577] Response Headers:
	I0318 13:09:52.009955    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:09:52 GMT
	I0318 13:09:52.009955    2404 round_trippers.go:580]     Audit-Id: 3021a660-98d6-406f-8abd-d699fbe437e8
	I0318 13:09:52.009955    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:09:52.010287    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:09:52.010324    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:09:52.010324    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:09:52.010554    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1721","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0318 13:09:52.011401    2404 pod_ready.go:97] node "multinode-894400" hosting pod "kube-apiserver-multinode-894400" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-894400" has status "Ready":"False"
	I0318 13:09:52.011401    2404 pod_ready.go:81] duration metric: took 7.8216ms for pod "kube-apiserver-multinode-894400" in "kube-system" namespace to be "Ready" ...
	E0318 13:09:52.011463    2404 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-894400" hosting pod "kube-apiserver-multinode-894400" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-894400" has status "Ready":"False"
	I0318 13:09:52.011463    2404 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-multinode-894400" in "kube-system" namespace to be "Ready" ...
	I0318 13:09:52.011559    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-894400
	I0318 13:09:52.011626    2404 round_trippers.go:469] Request Headers:
	I0318 13:09:52.011626    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:09:52.011626    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:09:52.014886    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:09:52.014886    2404 round_trippers.go:577] Response Headers:
	I0318 13:09:52.015186    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:09:52 GMT
	I0318 13:09:52.015186    2404 round_trippers.go:580]     Audit-Id: b0323efd-79ee-41e6-93b8-0f6fd8d7e8ce
	I0318 13:09:52.015186    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:09:52.015186    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:09:52.015186    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:09:52.015186    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:09:52.015565    2404 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-894400","namespace":"kube-system","uid":"4ad5fc15-53ba-4ebb-9a63-b8572cd9c834","resourceVersion":"1772","creationTimestamp":"2024-03-18T12:47:26Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"d340aced56ba169ecac1e3ac58ad57fe","kubernetes.io/config.mirror":"d340aced56ba169ecac1e3ac58ad57fe","kubernetes.io/config.seen":"2024-03-18T12:47:20.228444892Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:47:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7441 chars]
	I0318 13:09:52.034079    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:09:52.034079    2404 round_trippers.go:469] Request Headers:
	I0318 13:09:52.034079    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:09:52.034079    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:09:52.036486    2404 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 13:09:52.036486    2404 round_trippers.go:577] Response Headers:
	I0318 13:09:52.036968    2404 round_trippers.go:580]     Audit-Id: 1439daca-15de-475c-8bd7-da33d464d264
	I0318 13:09:52.036968    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:09:52.036968    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:09:52.036968    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:09:52.036968    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:09:52.036968    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:09:52 GMT
	I0318 13:09:52.037264    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1721","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0318 13:09:52.037639    2404 pod_ready.go:97] node "multinode-894400" hosting pod "kube-controller-manager-multinode-894400" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-894400" has status "Ready":"False"
	I0318 13:09:52.037639    2404 pod_ready.go:81] duration metric: took 26.1472ms for pod "kube-controller-manager-multinode-894400" in "kube-system" namespace to be "Ready" ...
	E0318 13:09:52.037726    2404 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-894400" hosting pod "kube-controller-manager-multinode-894400" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-894400" has status "Ready":"False"
	I0318 13:09:52.037726    2404 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-745w9" in "kube-system" namespace to be "Ready" ...
	I0318 13:09:52.237501    2404 request.go:629] Waited for 199.3292ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.130.156:8443/api/v1/namespaces/kube-system/pods/kube-proxy-745w9
	I0318 13:09:52.237501    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/namespaces/kube-system/pods/kube-proxy-745w9
	I0318 13:09:52.237501    2404 round_trippers.go:469] Request Headers:
	I0318 13:09:52.237501    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:09:52.237501    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:09:52.243333    2404 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 13:09:52.244019    2404 round_trippers.go:577] Response Headers:
	I0318 13:09:52.244019    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:09:52.244019    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:09:52.244019    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:09:52 GMT
	I0318 13:09:52.244019    2404 round_trippers.go:580]     Audit-Id: d0c5eefd-e08b-4fc4-8350-84f6943c6e05
	I0318 13:09:52.244019    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:09:52.244019    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:09:52.244263    2404 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-745w9","generateName":"kube-proxy-","namespace":"kube-system","uid":"d385fe06-f516-440d-b9ed-37c2d4a81050","resourceVersion":"1698","creationTimestamp":"2024-03-18T12:55:05Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"da714af0-88aa-4fc4-b73d-4de837f06114","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:55:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"da714af0-88aa-4fc4-b73d-4de837f06114\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5771 chars]
	I0318 13:09:52.442059    2404 request.go:629] Waited for 196.9732ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.130.156:8443/api/v1/nodes/multinode-894400-m03
	I0318 13:09:52.442318    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400-m03
	I0318 13:09:52.442318    2404 round_trippers.go:469] Request Headers:
	I0318 13:09:52.442318    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:09:52.442318    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:09:52.446437    2404 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:09:52.446589    2404 round_trippers.go:577] Response Headers:
	I0318 13:09:52.446589    2404 round_trippers.go:580]     Audit-Id: 2f288a64-d22d-4034-990c-5ba48f96f3ff
	I0318 13:09:52.446589    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:09:52.446589    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:09:52.446589    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:09:52.446589    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:09:52.446589    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:09:52 GMT
	I0318 13:09:52.446589    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m03","uid":"1f8e594e-d4cc-4247-8064-01ac67ea2b15","resourceVersion":"1707","creationTimestamp":"2024-03-18T13:05:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T13_05_26_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T13:05:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4400 chars]
	I0318 13:09:52.447429    2404 pod_ready.go:97] node "multinode-894400-m03" hosting pod "kube-proxy-745w9" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-894400-m03" has status "Ready":"Unknown"
	I0318 13:09:52.447481    2404 pod_ready.go:81] duration metric: took 409.752ms for pod "kube-proxy-745w9" in "kube-system" namespace to be "Ready" ...
	E0318 13:09:52.447550    2404 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-894400-m03" hosting pod "kube-proxy-745w9" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-894400-m03" has status "Ready":"Unknown"
	I0318 13:09:52.447550    2404 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-8bdmn" in "kube-system" namespace to be "Ready" ...
	I0318 13:09:52.626849    2404 request.go:629] Waited for 179.2984ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.130.156:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8bdmn
	I0318 13:09:52.627152    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8bdmn
	I0318 13:09:52.627152    2404 round_trippers.go:469] Request Headers:
	I0318 13:09:52.627508    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:09:52.627508    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:09:52.630294    2404 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 13:09:52.631130    2404 round_trippers.go:577] Response Headers:
	I0318 13:09:52.631130    2404 round_trippers.go:580]     Audit-Id: 212cc27c-2521-4ddf-aeef-4b8764d88083
	I0318 13:09:52.631130    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:09:52.631130    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:09:52.631130    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:09:52.631130    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:09:52.631130    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:09:52 GMT
	I0318 13:09:52.631201    2404 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-8bdmn","generateName":"kube-proxy-","namespace":"kube-system","uid":"5c266b8a-9665-4365-93c6-2b5f1699d3ef","resourceVersion":"616","creationTimestamp":"2024-03-18T12:50:34Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"da714af0-88aa-4fc4-b73d-4de837f06114","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:50:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"da714af0-88aa-4fc4-b73d-4de837f06114\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5541 chars]
	I0318 13:09:52.831471    2404 request.go:629] Waited for 199.1963ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.130.156:8443/api/v1/nodes/multinode-894400-m02
	I0318 13:09:52.831678    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400-m02
	I0318 13:09:52.831874    2404 round_trippers.go:469] Request Headers:
	I0318 13:09:52.831933    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:09:52.832012    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:09:52.835679    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:09:52.835679    2404 round_trippers.go:577] Response Headers:
	I0318 13:09:52.835679    2404 round_trippers.go:580]     Audit-Id: 461cb120-a61d-4461-ac50-b06be9a427b6
	I0318 13:09:52.835679    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:09:52.835679    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:09:52.835679    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:09:52.835679    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:09:52.835679    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:09:52 GMT
	I0318 13:09:52.835679    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"714c6de4-85c9-4721-aea6-ad8f2be4e14c","resourceVersion":"1345","creationTimestamp":"2024-03-18T12:50:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T12_50_35_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:50:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3826 chars]
	I0318 13:09:52.835679    2404 pod_ready.go:92] pod "kube-proxy-8bdmn" in "kube-system" namespace has status "Ready":"True"
	I0318 13:09:52.835679    2404 pod_ready.go:81] duration metric: took 388.1265ms for pod "kube-proxy-8bdmn" in "kube-system" namespace to be "Ready" ...
	I0318 13:09:52.835679    2404 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-mc5tv" in "kube-system" namespace to be "Ready" ...
	I0318 13:09:53.035775    2404 request.go:629] Waited for 199.9712ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.130.156:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mc5tv
	I0318 13:09:53.035829    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mc5tv
	I0318 13:09:53.035829    2404 round_trippers.go:469] Request Headers:
	I0318 13:09:53.035829    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:09:53.035829    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:09:53.038462    2404 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 13:09:53.039188    2404 round_trippers.go:577] Response Headers:
	I0318 13:09:53.039188    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:09:53.039188    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:09:53.039188    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:09:53.039188    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:09:53.039188    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:09:53 GMT
	I0318 13:09:53.039188    2404 round_trippers.go:580]     Audit-Id: 3ca6caa1-0df5-409b-ad7c-0bd40aeac4c3
	I0318 13:09:53.039535    2404 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-mc5tv","generateName":"kube-proxy-","namespace":"kube-system","uid":"0afe25f8-cbd6-412b-8698-7b547d1d49ca","resourceVersion":"1799","creationTimestamp":"2024-03-18T12:47:41Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"da714af0-88aa-4fc4-b73d-4de837f06114","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:47:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"da714af0-88aa-4fc4-b73d-4de837f06114\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5743 chars]
	I0318 13:09:53.242411    2404 request.go:629] Waited for 202.0444ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:09:53.242523    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:09:53.242523    2404 round_trippers.go:469] Request Headers:
	I0318 13:09:53.242587    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:09:53.242587    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:09:53.246360    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:09:53.246360    2404 round_trippers.go:577] Response Headers:
	I0318 13:09:53.246360    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:09:53.246360    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:09:53.246360    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:09:53.246360    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:09:53 GMT
	I0318 13:09:53.246963    2404 round_trippers.go:580]     Audit-Id: 05b6cf01-5d1b-4731-89b2-4d5b223cc296
	I0318 13:09:53.246963    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:09:53.246963    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1721","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0318 13:09:53.247605    2404 pod_ready.go:97] node "multinode-894400" hosting pod "kube-proxy-mc5tv" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-894400" has status "Ready":"False"
	I0318 13:09:53.247702    2404 pod_ready.go:81] duration metric: took 412.0205ms for pod "kube-proxy-mc5tv" in "kube-system" namespace to be "Ready" ...
	E0318 13:09:53.247759    2404 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-894400" hosting pod "kube-proxy-mc5tv" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-894400" has status "Ready":"False"
	I0318 13:09:53.247759    2404 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-multinode-894400" in "kube-system" namespace to be "Ready" ...
	I0318 13:09:53.429789    2404 request.go:629] Waited for 181.7398ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.130.156:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-894400
	I0318 13:09:53.429968    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-894400
	I0318 13:09:53.429968    2404 round_trippers.go:469] Request Headers:
	I0318 13:09:53.429968    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:09:53.429968    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:09:53.433783    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:09:53.433783    2404 round_trippers.go:577] Response Headers:
	I0318 13:09:53.433783    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:09:53.434638    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:09:53 GMT
	I0318 13:09:53.434638    2404 round_trippers.go:580]     Audit-Id: 4a82783f-c202-4a24-ad98-cdf12d30e15d
	I0318 13:09:53.434638    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:09:53.434638    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:09:53.434638    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:09:53.434778    2404 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-894400","namespace":"kube-system","uid":"f47703ce-5a82-466e-ac8e-ef6b8cc07e6c","resourceVersion":"1762","creationTimestamp":"2024-03-18T12:47:28Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"1c745e9b917877b1ff3c90ed02e9a79a","kubernetes.io/config.mirror":"1c745e9b917877b1ff3c90ed02e9a79a","kubernetes.io/config.seen":"2024-03-18T12:47:28.428225123Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:47:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 5153 chars]
	I0318 13:09:53.632095    2404 request.go:629] Waited for 196.9063ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:09:53.632095    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:09:53.632095    2404 round_trippers.go:469] Request Headers:
	I0318 13:09:53.632095    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:09:53.632095    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:09:53.635683    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:09:53.635683    2404 round_trippers.go:577] Response Headers:
	I0318 13:09:53.636014    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:09:53.636014    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:09:53.636014    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:09:53.636014    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:09:53.636014    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:09:53 GMT
	I0318 13:09:53.636014    2404 round_trippers.go:580]     Audit-Id: 8c780d1b-63c9-4028-86ac-c68659dde07d
	I0318 13:09:53.636695    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1721","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0318 13:09:53.637093    2404 pod_ready.go:97] node "multinode-894400" hosting pod "kube-scheduler-multinode-894400" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-894400" has status "Ready":"False"
	I0318 13:09:53.637093    2404 pod_ready.go:81] duration metric: took 389.2836ms for pod "kube-scheduler-multinode-894400" in "kube-system" namespace to be "Ready" ...
	E0318 13:09:53.637093    2404 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-894400" hosting pod "kube-scheduler-multinode-894400" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-894400" has status "Ready":"False"
	I0318 13:09:53.637093    2404 pod_ready.go:38] duration metric: took 1.6720145s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 13:09:53.637093    2404 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0318 13:09:53.653850    2404 command_runner.go:130] > -16
	I0318 13:09:53.654305    2404 ops.go:34] apiserver oom_adj: -16
	I0318 13:09:53.654305    2404 kubeadm.go:591] duration metric: took 12.9008183s to restartPrimaryControlPlane
	I0318 13:09:53.654361    2404 kubeadm.go:393] duration metric: took 12.9598071s to StartCluster
	I0318 13:09:53.654361    2404 settings.go:142] acquiring lock: {Name:mke99fb8c09012609ce6804e7dfd4d68f5541df7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 13:09:53.654652    2404 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	I0318 13:09:53.656267    2404 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\kubeconfig: {Name:mk966a7640504e03827322930a51a762b5508893 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 13:09:53.658182    2404 start.go:234] Will wait 6m0s for node &{Name: IP:172.30.130.156 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 13:09:53.662337    2404 out.go:177] * Verifying Kubernetes components...
	I0318 13:09:53.658182    2404 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0318 13:09:53.658767    2404 config.go:182] Loaded profile config "multinode-894400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 13:09:53.668540    2404 out.go:177] * Enabled addons: 
	I0318 13:09:53.670987    2404 addons.go:505] duration metric: took 12.2709ms for enable addons: enabled=[]
	I0318 13:09:53.675869    2404 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 13:09:53.917393    2404 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 13:09:53.944942    2404 node_ready.go:35] waiting up to 6m0s for node "multinode-894400" to be "Ready" ...
	I0318 13:09:53.945183    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:09:53.945183    2404 round_trippers.go:469] Request Headers:
	I0318 13:09:53.945183    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:09:53.945183    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:09:53.948355    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:09:53.949263    2404 round_trippers.go:577] Response Headers:
	I0318 13:09:53.949263    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:09:53.949263    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:09:53.949263    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:09:53.949263    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:09:53.949263    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:09:53 GMT
	I0318 13:09:53.949263    2404 round_trippers.go:580]     Audit-Id: 730ea6ed-2dd9-4904-97c5-43257ad5bf32
	I0318 13:09:53.949576    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1721","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0318 13:09:54.459279    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:09:54.459279    2404 round_trippers.go:469] Request Headers:
	I0318 13:09:54.459279    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:09:54.459279    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:09:54.462970    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:09:54.462970    2404 round_trippers.go:577] Response Headers:
	I0318 13:09:54.462970    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:09:54.463433    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:09:54 GMT
	I0318 13:09:54.463433    2404 round_trippers.go:580]     Audit-Id: e4c5d98c-7113-44ae-9b32-b652fddb7cdd
	I0318 13:09:54.463433    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:09:54.463433    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:09:54.463433    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:09:54.463817    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1721","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0318 13:09:54.957589    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:09:54.957675    2404 round_trippers.go:469] Request Headers:
	I0318 13:09:54.957675    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:09:54.957675    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:09:54.961749    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:09:54.961840    2404 round_trippers.go:577] Response Headers:
	I0318 13:09:54.961840    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:09:54.961840    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:09:54 GMT
	I0318 13:09:54.961840    2404 round_trippers.go:580]     Audit-Id: dd96c3c8-8018-405c-9213-cd33d6cfa45f
	I0318 13:09:54.961840    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:09:54.961840    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:09:54.961840    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:09:54.962365    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1721","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0318 13:09:55.448533    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:09:55.448533    2404 round_trippers.go:469] Request Headers:
	I0318 13:09:55.448533    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:09:55.448533    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:09:55.452957    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:09:55.452957    2404 round_trippers.go:577] Response Headers:
	I0318 13:09:55.452957    2404 round_trippers.go:580]     Audit-Id: 41faac3d-1074-482f-a0aa-61c5518b32e2
	I0318 13:09:55.452957    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:09:55.453052    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:09:55.453052    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:09:55.453052    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:09:55.453052    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:09:55 GMT
	I0318 13:09:55.453363    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1721","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0318 13:09:55.948537    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:09:55.948537    2404 round_trippers.go:469] Request Headers:
	I0318 13:09:55.948537    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:09:55.948537    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:09:55.952848    2404 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:09:55.952848    2404 round_trippers.go:577] Response Headers:
	I0318 13:09:55.952848    2404 round_trippers.go:580]     Audit-Id: cfee6150-59e1-468e-8834-4cda3fecae10
	I0318 13:09:55.952848    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:09:55.952848    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:09:55.952848    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:09:55.952848    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:09:55.952848    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:09:55 GMT
	I0318 13:09:55.952848    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1721","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0318 13:09:55.953693    2404 node_ready.go:53] node "multinode-894400" has status "Ready":"False"
	I0318 13:09:56.449067    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:09:56.449067    2404 round_trippers.go:469] Request Headers:
	I0318 13:09:56.449067    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:09:56.449067    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:09:56.454121    2404 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 13:09:56.454369    2404 round_trippers.go:577] Response Headers:
	I0318 13:09:56.454369    2404 round_trippers.go:580]     Audit-Id: 36a881ef-1c40-4d41-a192-f78c8e7a744f
	I0318 13:09:56.454369    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:09:56.454369    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:09:56.454369    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:09:56.454369    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:09:56.454369    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:09:56 GMT
	I0318 13:09:56.454733    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1721","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0318 13:09:56.946172    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:09:56.946224    2404 round_trippers.go:469] Request Headers:
	I0318 13:09:56.946224    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:09:56.946224    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:09:56.949556    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:09:56.949556    2404 round_trippers.go:577] Response Headers:
	I0318 13:09:56.949556    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:09:56.949556    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:09:56.949556    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:09:56.949556    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:09:56 GMT
	I0318 13:09:56.949556    2404 round_trippers.go:580]     Audit-Id: 0f94ae2c-c9f6-4e3f-b64a-883588570d78
	I0318 13:09:56.949556    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:09:56.949556    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1721","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0318 13:09:57.445948    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:09:57.446019    2404 round_trippers.go:469] Request Headers:
	I0318 13:09:57.446019    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:09:57.446019    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:09:57.448877    2404 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 13:09:57.449760    2404 round_trippers.go:577] Response Headers:
	I0318 13:09:57.449760    2404 round_trippers.go:580]     Audit-Id: cf5d93c7-048b-4248-8629-0f2c0a325eb2
	I0318 13:09:57.449760    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:09:57.449760    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:09:57.449760    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:09:57.449760    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:09:57.449760    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:09:57 GMT
	I0318 13:09:57.450155    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1721","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0318 13:09:57.960312    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:09:57.960371    2404 round_trippers.go:469] Request Headers:
	I0318 13:09:57.960470    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:09:57.960470    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:09:57.965245    2404 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:09:57.965348    2404 round_trippers.go:577] Response Headers:
	I0318 13:09:57.965348    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:09:57 GMT
	I0318 13:09:57.965348    2404 round_trippers.go:580]     Audit-Id: e53e6ff0-47a4-479e-afe8-fcefb4b6f1bb
	I0318 13:09:57.965403    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:09:57.965426    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:09:57.965426    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:09:57.965426    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:09:57.965453    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1721","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0318 13:09:57.966166    2404 node_ready.go:53] node "multinode-894400" has status "Ready":"False"
	I0318 13:09:58.456057    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:09:58.456301    2404 round_trippers.go:469] Request Headers:
	I0318 13:09:58.456301    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:09:58.456301    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:09:58.459666    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:09:58.460698    2404 round_trippers.go:577] Response Headers:
	I0318 13:09:58.460698    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:09:58.460698    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:09:58.460698    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:09:58 GMT
	I0318 13:09:58.460698    2404 round_trippers.go:580]     Audit-Id: 9e05a9ca-c19f-4c98-84cb-22ed5d045845
	I0318 13:09:58.460698    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:09:58.460765    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:09:58.461005    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1721","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0318 13:09:58.956036    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:09:58.956036    2404 round_trippers.go:469] Request Headers:
	I0318 13:09:58.956135    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:09:58.956135    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:09:58.960461    2404 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:09:58.960461    2404 round_trippers.go:577] Response Headers:
	I0318 13:09:58.960461    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:09:58.960461    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:09:58.960461    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:09:58.960461    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:09:58 GMT
	I0318 13:09:58.960461    2404 round_trippers.go:580]     Audit-Id: 278d09c1-e53b-4c49-993b-4d51df97fe40
	I0318 13:09:58.961032    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:09:58.961111    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1721","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0318 13:09:59.457416    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:09:59.457487    2404 round_trippers.go:469] Request Headers:
	I0318 13:09:59.457555    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:09:59.457555    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:09:59.461185    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:09:59.461678    2404 round_trippers.go:577] Response Headers:
	I0318 13:09:59.461678    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:09:59.461678    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:09:59.461678    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:09:59.461678    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:09:59 GMT
	I0318 13:09:59.461678    2404 round_trippers.go:580]     Audit-Id: 6daa3fe1-eb47-45ae-acb9-10389e2a34b3
	I0318 13:09:59.461678    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:09:59.461844    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1721","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0318 13:09:59.956899    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:09:59.956899    2404 round_trippers.go:469] Request Headers:
	I0318 13:09:59.956899    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:09:59.956899    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:09:59.960722    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:09:59.960722    2404 round_trippers.go:577] Response Headers:
	I0318 13:09:59.960722    2404 round_trippers.go:580]     Audit-Id: 422e2362-f72c-4660-b021-2eaa3a47f678
	I0318 13:09:59.960722    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:09:59.960722    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:09:59.960722    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:09:59.960722    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:09:59.960722    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:09:59 GMT
	I0318 13:09:59.961013    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1721","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0318 13:10:00.459419    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:00.459419    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:00.459419    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:00.459419    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:00.466744    2404 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0318 13:10:00.466744    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:00.466951    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:00 GMT
	I0318 13:10:00.466951    2404 round_trippers.go:580]     Audit-Id: 625bb4a4-36de-4f6d-ba2c-bf51940a57ee
	I0318 13:10:00.466951    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:00.466951    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:00.466951    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:00.466951    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:00.467048    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1721","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0318 13:10:00.467931    2404 node_ready.go:53] node "multinode-894400" has status "Ready":"False"
	I0318 13:10:00.947111    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:00.947111    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:00.947111    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:00.947111    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:00.949726    2404 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 13:10:00.949726    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:00.949726    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:00.949726    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:00.949726    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:00.949726    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:00 GMT
	I0318 13:10:00.950735    2404 round_trippers.go:580]     Audit-Id: 5ae0bb24-dcf0-4349-833a-aee964afb79b
	I0318 13:10:00.950735    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:00.950879    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1721","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0318 13:10:01.449177    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:01.449177    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:01.449177    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:01.449177    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:01.461940    2404 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0318 13:10:01.461940    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:01.461940    2404 round_trippers.go:580]     Audit-Id: 8eec4909-b23a-45ed-b783-5a17189584c0
	I0318 13:10:01.461940    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:01.461940    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:01.461940    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:01.461940    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:01.461940    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:01 GMT
	I0318 13:10:01.465449    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1721","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0318 13:10:01.954769    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:01.954769    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:01.954769    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:01.954769    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:01.958728    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:10:01.958728    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:01.958728    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:01.958728    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:01.958728    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:01.958728    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:01 GMT
	I0318 13:10:01.959034    2404 round_trippers.go:580]     Audit-Id: fbd9a1c1-43ac-483a-86b7-88f59894dc81
	I0318 13:10:01.959235    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:01.959406    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1834","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0318 13:10:02.455228    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:02.455228    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:02.455228    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:02.455228    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:02.459004    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:10:02.459326    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:02.459326    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:02.459326    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:02.459326    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:02.459326    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:02 GMT
	I0318 13:10:02.459326    2404 round_trippers.go:580]     Audit-Id: ae3d41d2-4301-45b5-96d0-f209fe01566c
	I0318 13:10:02.459326    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:02.459692    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1834","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0318 13:10:02.954675    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:02.954675    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:02.954675    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:02.954675    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:02.958292    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:10:02.958292    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:02.958862    2404 round_trippers.go:580]     Audit-Id: 1f7769ac-4389-4f2d-a428-896ff93c16ba
	I0318 13:10:02.958862    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:02.958862    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:02.958862    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:02.958862    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:02.958862    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:02 GMT
	I0318 13:10:02.959027    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1834","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0318 13:10:02.959607    2404 node_ready.go:53] node "multinode-894400" has status "Ready":"False"
	I0318 13:10:03.459007    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:03.459007    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:03.459007    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:03.459007    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:03.462612    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:10:03.462931    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:03.462931    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:03.462931    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:03 GMT
	I0318 13:10:03.462931    2404 round_trippers.go:580]     Audit-Id: 6423ab1d-6abb-4838-b948-a7cab1155a51
	I0318 13:10:03.462931    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:03.462931    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:03.462931    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:03.463390    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1834","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0318 13:10:03.960075    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:03.960075    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:03.960075    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:03.960075    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:03.964274    2404 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:10:03.965290    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:03.965290    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:03 GMT
	I0318 13:10:03.965369    2404 round_trippers.go:580]     Audit-Id: 290ec6cf-bbe7-486f-8470-c2a3cf3fcfcb
	I0318 13:10:03.965369    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:03.965441    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:03.965441    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:03.965441    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:03.965726    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1834","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0318 13:10:04.446219    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:04.446219    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:04.446219    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:04.446219    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:04.451010    2404 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:10:04.451460    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:04.451460    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:04.451460    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:04.451460    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:04.451460    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:04 GMT
	I0318 13:10:04.451460    2404 round_trippers.go:580]     Audit-Id: 99b92c92-ea08-45f3-8925-36013b9cc552
	I0318 13:10:04.451460    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:04.451774    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1834","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0318 13:10:04.957914    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:04.957914    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:04.957914    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:04.957914    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:04.961502    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:10:04.961502    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:04.961502    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:04 GMT
	I0318 13:10:04.962357    2404 round_trippers.go:580]     Audit-Id: 3a6828d5-e4c8-4395-b591-40569713e8a4
	I0318 13:10:04.962357    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:04.962357    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:04.962357    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:04.962357    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:04.962585    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1834","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0318 13:10:04.963071    2404 node_ready.go:53] node "multinode-894400" has status "Ready":"False"
	I0318 13:10:05.445608    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:05.445670    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:05.445670    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:05.445670    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:05.450095    2404 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:10:05.450095    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:05.450095    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:05.450095    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:05.450095    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:05 GMT
	I0318 13:10:05.450095    2404 round_trippers.go:580]     Audit-Id: 4d47e2dc-b94b-4d5e-aa01-c98ac95300ef
	I0318 13:10:05.450095    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:05.450095    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:05.450095    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1834","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0318 13:10:05.960627    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:05.960627    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:05.960627    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:05.960627    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:05.964233    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:10:05.965266    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:05.965266    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:05.965266    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:05.965266    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:05.965329    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:05.965329    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:05 GMT
	I0318 13:10:05.965329    2404 round_trippers.go:580]     Audit-Id: d03f6f45-a2fe-41fb-bf32-078eae00249b
	I0318 13:10:05.965434    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1834","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0318 13:10:06.446471    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:06.446544    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:06.446544    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:06.446544    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:06.450365    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:10:06.450541    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:06.450541    2404 round_trippers.go:580]     Audit-Id: 40a765e0-b0f0-4802-b45d-7fa09cbc446d
	I0318 13:10:06.450541    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:06.450541    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:06.450541    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:06.450541    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:06.450541    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:06 GMT
	I0318 13:10:06.450673    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1834","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0318 13:10:06.958263    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:06.958502    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:06.958502    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:06.958502    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:06.963804    2404 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:10:06.963804    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:06.963804    2404 round_trippers.go:580]     Audit-Id: 3feca258-db3b-4c16-9451-f2d1f6c409e8
	I0318 13:10:06.963804    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:06.963804    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:06.963804    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:06.963804    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:06.963804    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:06 GMT
	I0318 13:10:06.964033    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1834","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0318 13:10:06.964536    2404 node_ready.go:53] node "multinode-894400" has status "Ready":"False"
	I0318 13:10:07.445803    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:07.446087    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:07.446087    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:07.446087    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:07.450938    2404 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:10:07.451180    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:07.451180    2404 round_trippers.go:580]     Audit-Id: 77b0a11a-9223-4a51-aaf2-e165d60ddbb6
	I0318 13:10:07.451180    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:07.451180    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:07.451180    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:07.451180    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:07.451180    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:07 GMT
	I0318 13:10:07.451536    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1834","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0318 13:10:07.956760    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:07.957105    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:07.957105    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:07.957105    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:07.960512    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:10:07.960512    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:07.960512    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:07.961201    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:07.961201    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:07 GMT
	I0318 13:10:07.961201    2404 round_trippers.go:580]     Audit-Id: 184512e5-2787-4e74-b8e1-562d4e13b3c1
	I0318 13:10:07.961201    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:07.961201    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:07.961412    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1834","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0318 13:10:08.456318    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:08.456387    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:08.456387    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:08.456387    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:08.462687    2404 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0318 13:10:08.462687    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:08.462687    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:08.462687    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:08 GMT
	I0318 13:10:08.462687    2404 round_trippers.go:580]     Audit-Id: b10193c9-d0e8-4874-b76b-7cad7c36fee4
	I0318 13:10:08.462687    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:08.462687    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:08.462687    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:08.462687    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1834","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0318 13:10:08.954573    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:08.954573    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:08.954573    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:08.954573    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:08.958187    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:10:08.958638    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:08.958695    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:08.958695    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:08.958695    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:08 GMT
	I0318 13:10:08.958695    2404 round_trippers.go:580]     Audit-Id: 92164a09-a7b1-4ab6-9357-d3c935538b7d
	I0318 13:10:08.958695    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:08.958695    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:08.958695    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1834","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0318 13:10:09.456393    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:09.456393    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:09.456393    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:09.456393    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:09.460106    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:10:09.460106    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:09.460106    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:09.460106    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:09.460106    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:09.460106    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:09.460106    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:09 GMT
	I0318 13:10:09.460106    2404 round_trippers.go:580]     Audit-Id: 0960e2a4-039b-4ac5-ba19-8555f0a5d7e2
	I0318 13:10:09.461470    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1834","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0318 13:10:09.462149    2404 node_ready.go:53] node "multinode-894400" has status "Ready":"False"
	I0318 13:10:09.958808    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:09.958839    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:09.958839    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:09.958839    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:09.963469    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:10:09.963469    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:09.963469    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:09.963469    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:09.963469    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:09.963469    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:09 GMT
	I0318 13:10:09.963469    2404 round_trippers.go:580]     Audit-Id: 8f10548e-9792-4cac-a79c-d5e57007488f
	I0318 13:10:09.963469    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:09.963578    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1834","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0318 13:10:10.460903    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:10.461000    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:10.461141    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:10.461141    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:10.466097    2404 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:10:10.466432    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:10.466432    2404 round_trippers.go:580]     Audit-Id: b8845dea-90c1-4cf6-ae98-592c2e340500
	I0318 13:10:10.466432    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:10.466432    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:10.466432    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:10.466432    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:10.466432    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:10 GMT
	I0318 13:10:10.466966    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1834","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0318 13:10:10.960734    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:10.960791    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:10.960849    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:10.960849    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:10.965013    2404 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:10:10.965071    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:10.965071    2404 round_trippers.go:580]     Audit-Id: a9718e1b-6d11-46eb-881d-1499b7e37c81
	I0318 13:10:10.965071    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:10.965071    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:10.965071    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:10.965071    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:10.965071    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:10 GMT
	I0318 13:10:10.965339    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1834","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0318 13:10:11.447689    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:11.447689    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:11.447689    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:11.447689    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:11.451079    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:10:11.451079    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:11.451466    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:11.451466    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:11.451466    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:11.451466    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:11.451466    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:11 GMT
	I0318 13:10:11.451466    2404 round_trippers.go:580]     Audit-Id: bb76a560-39dc-4620-9d19-ff6d2cf30490
	I0318 13:10:11.451947    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1834","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0318 13:10:11.949552    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:11.949623    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:11.949623    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:11.949623    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:11.954032    2404 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:10:11.954032    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:11.954032    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:11.954032    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:11.954451    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:11.954451    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:11.954451    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:11 GMT
	I0318 13:10:11.954451    2404 round_trippers.go:580]     Audit-Id: a99b3117-253c-4208-b02f-07a6080d9472
	I0318 13:10:11.954634    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1834","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0318 13:10:11.955190    2404 node_ready.go:53] node "multinode-894400" has status "Ready":"False"
	I0318 13:10:12.447838    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:12.448041    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:12.448041    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:12.448041    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:12.451469    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:10:12.451687    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:12.451687    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:12.451687    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:12.451687    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:12.451687    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:12.451687    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:12 GMT
	I0318 13:10:12.451687    2404 round_trippers.go:580]     Audit-Id: 9074956d-e620-4e9e-b57b-2ead3b03d5c5
	I0318 13:10:12.452290    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1834","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0318 13:10:12.960141    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:12.960141    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:12.960141    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:12.960141    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:12.963744    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:10:12.963744    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:12.963744    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:12.963744    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:12.963744    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:12.963744    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:12 GMT
	I0318 13:10:12.963744    2404 round_trippers.go:580]     Audit-Id: b76c0bce-634f-4bd5-ae41-92776d28b024
	I0318 13:10:12.963744    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:12.964718    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1834","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0318 13:10:13.452540    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:13.452610    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:13.452610    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:13.452610    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:13.459398    2404 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0318 13:10:13.460073    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:13.460073    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:13 GMT
	I0318 13:10:13.460073    2404 round_trippers.go:580]     Audit-Id: 98780845-e469-42ec-8fec-3f27f54241b5
	I0318 13:10:13.460073    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:13.460137    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:13.460156    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:13.460156    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:13.460297    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1834","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0318 13:10:13.954198    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:13.954290    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:13.954290    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:13.954290    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:13.957700    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:10:13.958005    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:13.958005    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:13.958005    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:13.958005    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:13.958005    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:13.958005    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:13 GMT
	I0318 13:10:13.958005    2404 round_trippers.go:580]     Audit-Id: a3cd9531-543e-4677-8d64-c86ef137b2d9
	I0318 13:10:13.958294    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1834","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0318 13:10:13.958828    2404 node_ready.go:53] node "multinode-894400" has status "Ready":"False"
	I0318 13:10:14.451568    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:14.451568    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:14.451568    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:14.451718    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:14.456161    2404 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:10:14.456161    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:14.456161    2404 round_trippers.go:580]     Audit-Id: d1e2dba7-0656-4172-8dc9-b0479a11c7da
	I0318 13:10:14.456161    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:14.456161    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:14.456161    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:14.457136    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:14.457136    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:14 GMT
	I0318 13:10:14.457428    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1834","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0318 13:10:14.949706    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:14.949940    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:14.949940    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:14.949940    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:14.953457    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:10:14.953999    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:14.953999    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:14 GMT
	I0318 13:10:14.953999    2404 round_trippers.go:580]     Audit-Id: c41c73c9-eea4-48db-a937-7acce3f658c8
	I0318 13:10:14.953999    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:14.953999    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:14.953999    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:14.953999    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:14.954225    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1834","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0318 13:10:15.451441    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:15.451441    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:15.451441    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:15.451441    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:15.454753    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:10:15.455453    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:15.455453    2404 round_trippers.go:580]     Audit-Id: a319ed4d-3750-47b9-85c0-73c0ef5bc931
	I0318 13:10:15.455453    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:15.455453    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:15.455453    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:15.455453    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:15.455453    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:15 GMT
	I0318 13:10:15.455715    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1834","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0318 13:10:15.954609    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:15.954662    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:15.954662    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:15.954662    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:15.959070    2404 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:10:15.959070    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:15.959070    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:15.959070    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:15 GMT
	I0318 13:10:15.959070    2404 round_trippers.go:580]     Audit-Id: 9b6007c9-c216-4f8c-978b-7773fda4d5ad
	I0318 13:10:15.959070    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:15.959070    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:15.959070    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:15.959070    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1834","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0318 13:10:15.959791    2404 node_ready.go:53] node "multinode-894400" has status "Ready":"False"
	I0318 13:10:16.457469    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:16.457469    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:16.457469    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:16.457469    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:16.461061    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:10:16.461061    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:16.461061    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:16.461061    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:16.461061    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:16 GMT
	I0318 13:10:16.461061    2404 round_trippers.go:580]     Audit-Id: 51beb58c-1889-4dc7-93ab-699f206807fd
	I0318 13:10:16.461061    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:16.461061    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:16.461835    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1834","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0318 13:10:16.957913    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:16.957913    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:16.957913    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:16.957913    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:16.962673    2404 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:10:16.962796    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:16.962796    2404 round_trippers.go:580]     Audit-Id: 44ee5e70-c27a-4e8d-9209-f06fd814c62c
	I0318 13:10:16.962796    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:16.962796    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:16.962796    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:16.962796    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:16.962796    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:16 GMT
	I0318 13:10:16.963103    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1834","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0318 13:10:17.459123    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:17.459402    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:17.459402    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:17.459402    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:17.463671    2404 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:10:17.464365    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:17.464365    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:17.464365    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:17.464365    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:17.464365    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:17 GMT
	I0318 13:10:17.464365    2404 round_trippers.go:580]     Audit-Id: a74bbc73-0d69-4b6e-bf83-15fae336077d
	I0318 13:10:17.464365    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:17.464948    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1834","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0318 13:10:17.957734    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:17.957983    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:17.958084    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:17.958084    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:17.961922    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:10:17.961922    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:17.962932    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:17.962932    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:17 GMT
	I0318 13:10:17.962932    2404 round_trippers.go:580]     Audit-Id: b73a7424-5407-4bfb-861f-466588ec9ac9
	I0318 13:10:17.962932    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:17.962932    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:17.962932    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:17.963050    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1834","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0318 13:10:17.963613    2404 node_ready.go:53] node "multinode-894400" has status "Ready":"False"
	I0318 13:10:18.452683    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:18.452784    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:18.452784    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:18.452784    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:18.456739    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:10:18.456739    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:18.456980    2404 round_trippers.go:580]     Audit-Id: fe0fa0ed-d33f-42ca-a404-f077659b24f9
	I0318 13:10:18.456980    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:18.456980    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:18.456980    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:18.456980    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:18.456980    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:18 GMT
	I0318 13:10:18.457175    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1834","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0318 13:10:18.949639    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:18.949706    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:18.949706    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:18.949706    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:18.952302    2404 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 13:10:18.953210    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:18.953210    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:18.953210    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:18.953210    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:18.953210    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:18 GMT
	I0318 13:10:18.953210    2404 round_trippers.go:580]     Audit-Id: 0f430138-4404-4c3a-90fc-f651386ca56f
	I0318 13:10:18.953210    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:18.953457    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1834","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0318 13:10:19.448788    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:19.448788    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:19.448788    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:19.448788    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:19.454859    2404 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0318 13:10:19.454859    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:19.454859    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:19.454859    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:19.454859    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:19.454859    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:19.454859    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:19 GMT
	I0318 13:10:19.454859    2404 round_trippers.go:580]     Audit-Id: df462a2e-63bc-4b37-9796-3e53f4b3716e
	I0318 13:10:19.455556    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1834","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0318 13:10:19.947973    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:19.947973    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:19.947973    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:19.947973    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:19.951612    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:10:19.951612    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:19.951612    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:19.951612    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:19.951612    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:19.951612    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:19 GMT
	I0318 13:10:19.951612    2404 round_trippers.go:580]     Audit-Id: ecd897cd-7eba-4e8d-8cc3-b3dd8125d85d
	I0318 13:10:19.951612    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:19.954525    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1834","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0318 13:10:20.449467    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:20.449576    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:20.449576    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:20.449576    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:20.454171    2404 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:10:20.454171    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:20.454171    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:20.454171    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:20.454171    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:20.454171    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:20 GMT
	I0318 13:10:20.454171    2404 round_trippers.go:580]     Audit-Id: 4b1a9a67-6921-41ef-a59d-2532948432a0
	I0318 13:10:20.454171    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:20.454171    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1834","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0318 13:10:20.455204    2404 node_ready.go:53] node "multinode-894400" has status "Ready":"False"
	I0318 13:10:20.953929    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:20.953929    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:20.953987    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:20.953987    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:20.957506    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:10:20.957506    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:20.957506    2404 round_trippers.go:580]     Audit-Id: cb76fe58-aa52-4af5-9e75-9a0b96392e5a
	I0318 13:10:20.957506    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:20.957506    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:20.957506    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:20.957506    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:20.957506    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:20 GMT
	I0318 13:10:20.957506    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1834","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0318 13:10:21.457349    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:21.457349    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:21.457349    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:21.457349    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:21.461638    2404 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:10:21.461933    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:21.461933    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:21.461933    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:21 GMT
	I0318 13:10:21.461933    2404 round_trippers.go:580]     Audit-Id: 181776df-72b6-490e-93e2-1f42fa4b3129
	I0318 13:10:21.461933    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:21.461933    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:21.461933    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:21.461933    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1834","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0318 13:10:21.946529    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:21.946624    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:21.946624    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:21.946624    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:21.951796    2404 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 13:10:21.951796    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:21.951796    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:21.951796    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:21 GMT
	I0318 13:10:21.951796    2404 round_trippers.go:580]     Audit-Id: eda1eabb-df71-4de2-91c1-ebdcb0b290e1
	I0318 13:10:21.951796    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:21.951796    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:21.951796    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:21.951971    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1834","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0318 13:10:22.447221    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:22.447221    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:22.447221    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:22.447221    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:22.452096    2404 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:10:22.452746    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:22.452746    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:22 GMT
	I0318 13:10:22.452746    2404 round_trippers.go:580]     Audit-Id: 1ef1cce8-ac9c-457a-a427-89c0439eb78c
	I0318 13:10:22.452746    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:22.452746    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:22.452746    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:22.452746    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:22.453145    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1834","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0318 13:10:22.949832    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:22.949832    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:22.949832    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:22.949921    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:22.953903    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:10:22.953903    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:22.953903    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:22.954010    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:22.954010    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:22.954010    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:22.954010    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:22 GMT
	I0318 13:10:22.954010    2404 round_trippers.go:580]     Audit-Id: 4e6ce44a-6058-4f9b-975b-db0d0c68119e
	I0318 13:10:22.954162    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1834","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0318 13:10:22.954818    2404 node_ready.go:53] node "multinode-894400" has status "Ready":"False"
	I0318 13:10:23.455394    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:23.455394    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:23.455394    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:23.455394    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:23.458392    2404 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 13:10:23.458392    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:23.458392    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:23 GMT
	I0318 13:10:23.458392    2404 round_trippers.go:580]     Audit-Id: 52393d7a-a7be-4560-84e8-824f4c00f7fc
	I0318 13:10:23.458392    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:23.459414    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:23.459414    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:23.459463    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:23.459615    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1834","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0318 13:10:23.955922    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:23.955998    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:23.955998    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:23.955998    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:23.959403    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:10:23.959474    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:23.959474    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:23.959474    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:23.959535    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:23 GMT
	I0318 13:10:23.959535    2404 round_trippers.go:580]     Audit-Id: ce4ed3a1-1a8e-4cf0-8f31-0a444c59da8b
	I0318 13:10:23.959535    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:23.959535    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:23.959803    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1876","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5358 chars]
	I0318 13:10:23.960950    2404 node_ready.go:49] node "multinode-894400" has status "Ready":"True"
	I0318 13:10:23.960950    2404 node_ready.go:38] duration metric: took 30.0157853s for node "multinode-894400" to be "Ready" ...
	I0318 13:10:23.961036    2404 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 13:10:23.961205    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/namespaces/kube-system/pods
	I0318 13:10:23.961229    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:23.961229    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:23.961229    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:23.966538    2404 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:10:23.966538    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:23.966538    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:23.966538    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:23.966538    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:23.966538    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:23.966538    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:23 GMT
	I0318 13:10:23.966538    2404 round_trippers.go:580]     Audit-Id: 76e194c8-49d5-48a1-9ad2-d3b7df829c1c
	I0318 13:10:23.967898    2404 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1876"},"items":[{"metadata":{"name":"coredns-5dd5756b68-456tm","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"1a018c55-846b-4dc2-992c-dc8fd82a6c67","resourceVersion":"1770","creationTimestamp":"2024-03-18T12:47:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"24e9a67e-3fda-47e7-8dcb-794b1920d0f1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:47:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"24e9a67e-3fda-47e7-8dcb-794b1920d0f1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 83068 chars]
	I0318 13:10:23.971890    2404 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-456tm" in "kube-system" namespace to be "Ready" ...
	I0318 13:10:23.971987    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-456tm
	I0318 13:10:23.972093    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:23.972093    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:23.972161    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:23.976107    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:10:23.976107    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:23.976565    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:23 GMT
	I0318 13:10:23.976631    2404 round_trippers.go:580]     Audit-Id: 01762c1a-0ce6-4c4b-82bf-3c1a15d29800
	I0318 13:10:23.976631    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:23.976631    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:23.976685    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:23.976685    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:23.976898    2404 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-456tm","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"1a018c55-846b-4dc2-992c-dc8fd82a6c67","resourceVersion":"1770","creationTimestamp":"2024-03-18T12:47:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"24e9a67e-3fda-47e7-8dcb-794b1920d0f1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:47:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"24e9a67e-3fda-47e7-8dcb-794b1920d0f1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 13:10:23.977232    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:23.977232    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:23.977232    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:23.977232    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:23.980690    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:10:23.980910    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:23.981014    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:23.981014    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:23.981047    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:23.981047    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:23.981047    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:23 GMT
	I0318 13:10:23.981089    2404 round_trippers.go:580]     Audit-Id: f7dbb1dc-653f-4742-b559-24e9683203e0
	I0318 13:10:23.981290    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1876","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5358 chars]
	I0318 13:10:24.486217    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-456tm
	I0318 13:10:24.486322    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:24.486322    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:24.486322    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:24.489657    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:10:24.489657    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:24.489964    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:24 GMT
	I0318 13:10:24.489964    2404 round_trippers.go:580]     Audit-Id: e55648ea-af22-4aa1-a05f-8e23e423879e
	I0318 13:10:24.489964    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:24.489964    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:24.489964    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:24.489964    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:24.490442    2404 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-456tm","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"1a018c55-846b-4dc2-992c-dc8fd82a6c67","resourceVersion":"1770","creationTimestamp":"2024-03-18T12:47:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"24e9a67e-3fda-47e7-8dcb-794b1920d0f1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:47:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"24e9a67e-3fda-47e7-8dcb-794b1920d0f1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 13:10:24.491255    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:24.491322    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:24.491322    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:24.491322    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:24.494170    2404 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 13:10:24.494170    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:24.494170    2404 round_trippers.go:580]     Audit-Id: 41a6738b-6689-4c42-8489-41edee4a73e2
	I0318 13:10:24.494170    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:24.494170    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:24.494831    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:24.494831    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:24.494831    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:24 GMT
	I0318 13:10:24.494902    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1876","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5358 chars]
	I0318 13:10:24.986265    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-456tm
	I0318 13:10:24.986265    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:24.986265    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:24.986366    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:24.990679    2404 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:10:24.990679    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:24.990679    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:24.990679    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:24.990679    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:24.990679    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:24.990679    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:24 GMT
	I0318 13:10:24.990679    2404 round_trippers.go:580]     Audit-Id: 6ab673c0-267f-4ae5-94b1-781093f390ca
	I0318 13:10:24.990679    2404 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-456tm","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"1a018c55-846b-4dc2-992c-dc8fd82a6c67","resourceVersion":"1770","creationTimestamp":"2024-03-18T12:47:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"24e9a67e-3fda-47e7-8dcb-794b1920d0f1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:47:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"24e9a67e-3fda-47e7-8dcb-794b1920d0f1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 13:10:24.992005    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:24.992005    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:24.992138    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:24.992138    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:24.994917    2404 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 13:10:24.995512    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:24.995512    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:24 GMT
	I0318 13:10:24.995512    2404 round_trippers.go:580]     Audit-Id: a07a0c26-0872-4870-8af8-2ff57e789f5c
	I0318 13:10:24.995512    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:24.995512    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:24.995512    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:24.995512    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:24.995735    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1876","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5358 chars]
	I0318 13:10:25.485437    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-456tm
	I0318 13:10:25.485437    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:25.485437    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:25.485437    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:25.490035    2404 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:10:25.490035    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:25.490035    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:25.490035    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:25 GMT
	I0318 13:10:25.491072    2404 round_trippers.go:580]     Audit-Id: 40bb2202-cbbd-403f-9d0e-3e0d7d51787c
	I0318 13:10:25.491072    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:25.491072    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:25.491072    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:25.491312    2404 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-456tm","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"1a018c55-846b-4dc2-992c-dc8fd82a6c67","resourceVersion":"1770","creationTimestamp":"2024-03-18T12:47:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"24e9a67e-3fda-47e7-8dcb-794b1920d0f1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:47:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"24e9a67e-3fda-47e7-8dcb-794b1920d0f1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 13:10:25.491991    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:25.491991    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:25.491991    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:25.491991    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:25.495333    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:10:25.495333    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:25.495333    2404 round_trippers.go:580]     Audit-Id: 84eeb348-5eb7-4848-b68d-aba04f17caeb
	I0318 13:10:25.495333    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:25.495532    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:25.495532    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:25.495532    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:25.495532    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:25 GMT
	I0318 13:10:25.495692    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1876","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5358 chars]
	I0318 13:10:25.983050    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-456tm
	I0318 13:10:25.983050    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:25.983050    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:25.983050    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:25.986650    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:10:25.987571    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:25.987571    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:25 GMT
	I0318 13:10:25.987571    2404 round_trippers.go:580]     Audit-Id: e9e35350-dae8-47e8-bd19-0f6c7143717e
	I0318 13:10:25.987571    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:25.987571    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:25.987571    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:25.987571    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:25.987835    2404 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-456tm","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"1a018c55-846b-4dc2-992c-dc8fd82a6c67","resourceVersion":"1770","creationTimestamp":"2024-03-18T12:47:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"24e9a67e-3fda-47e7-8dcb-794b1920d0f1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:47:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"24e9a67e-3fda-47e7-8dcb-794b1920d0f1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 13:10:25.988470    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:25.988470    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:25.988470    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:25.988470    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:25.992111    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:10:25.992551    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:25.992551    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:25.992551    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:25 GMT
	I0318 13:10:25.992551    2404 round_trippers.go:580]     Audit-Id: 9d87041a-612e-4137-9d70-5b7114f82886
	I0318 13:10:25.992551    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:25.992551    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:25.992551    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:25.992551    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1876","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5358 chars]
	I0318 13:10:25.993233    2404 pod_ready.go:102] pod "coredns-5dd5756b68-456tm" in "kube-system" namespace has status "Ready":"False"
	I0318 13:10:26.484566    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-456tm
	I0318 13:10:26.484642    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:26.484642    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:26.484642    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:26.488886    2404 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:10:26.488886    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:26.488886    2404 round_trippers.go:580]     Audit-Id: dc844628-f946-47d0-8da1-6fde0e5ba0a2
	I0318 13:10:26.489250    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:26.489250    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:26.489250    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:26.489250    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:26.489250    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:26 GMT
	I0318 13:10:26.489382    2404 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-456tm","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"1a018c55-846b-4dc2-992c-dc8fd82a6c67","resourceVersion":"1770","creationTimestamp":"2024-03-18T12:47:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"24e9a67e-3fda-47e7-8dcb-794b1920d0f1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:47:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"24e9a67e-3fda-47e7-8dcb-794b1920d0f1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 13:10:26.490218    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:26.490218    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:26.490218    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:26.490328    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:26.494148    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:10:26.494220    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:26.494220    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:26.494220    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:26.494290    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:26.494290    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:26 GMT
	I0318 13:10:26.494290    2404 round_trippers.go:580]     Audit-Id: b315bac9-1feb-4a08-8c56-9f19f331d953
	I0318 13:10:26.494290    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:26.494520    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1876","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5358 chars]
	I0318 13:10:26.984324    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-456tm
	I0318 13:10:26.984378    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:26.984468    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:26.984468    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:26.988256    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:10:26.988491    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:26.988587    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:26.988587    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:26.988587    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:26 GMT
	I0318 13:10:26.988587    2404 round_trippers.go:580]     Audit-Id: b1658a5e-d989-4bd6-b840-c172db6d0e35
	I0318 13:10:26.988587    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:26.988587    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:26.988862    2404 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-456tm","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"1a018c55-846b-4dc2-992c-dc8fd82a6c67","resourceVersion":"1770","creationTimestamp":"2024-03-18T12:47:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"24e9a67e-3fda-47e7-8dcb-794b1920d0f1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:47:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"24e9a67e-3fda-47e7-8dcb-794b1920d0f1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 13:10:26.989619    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:26.989679    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:26.989679    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:26.989679    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:26.991911    2404 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 13:10:26.992849    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:26.992849    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:26.992849    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:26.992849    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:26.992933    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:26.992950    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:26 GMT
	I0318 13:10:26.992950    2404 round_trippers.go:580]     Audit-Id: 7b42ef74-b5ef-45d9-b4eb-c984c3044972
	I0318 13:10:26.993085    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1877","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 13:10:27.486095    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-456tm
	I0318 13:10:27.486095    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:27.486181    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:27.486181    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:27.492716    2404 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0318 13:10:27.492716    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:27.492716    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:27.492716    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:27.492716    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:27 GMT
	I0318 13:10:27.492716    2404 round_trippers.go:580]     Audit-Id: 36efddde-f072-46ce-91c2-b09c372854e1
	I0318 13:10:27.492716    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:27.492716    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:27.492716    2404 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-456tm","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"1a018c55-846b-4dc2-992c-dc8fd82a6c67","resourceVersion":"1770","creationTimestamp":"2024-03-18T12:47:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"24e9a67e-3fda-47e7-8dcb-794b1920d0f1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:47:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"24e9a67e-3fda-47e7-8dcb-794b1920d0f1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 13:10:27.493503    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:27.493503    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:27.494052    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:27.494052    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:27.497170    2404 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 13:10:27.497170    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:27.497170    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:27.497170    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:27 GMT
	I0318 13:10:27.497170    2404 round_trippers.go:580]     Audit-Id: 955ea648-8889-43d4-a874-6e63af841a31
	I0318 13:10:27.497170    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:27.497170    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:27.497170    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:27.497170    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1877","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 13:10:27.986450    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-456tm
	I0318 13:10:27.986450    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:27.986450    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:27.986450    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:27.991061    2404 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:10:27.991061    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:27.991061    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:27.991361    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:27.991361    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:27 GMT
	I0318 13:10:27.991361    2404 round_trippers.go:580]     Audit-Id: b13fa394-33f2-40a3-8950-9926f7bc8143
	I0318 13:10:27.991361    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:27.991361    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:27.991636    2404 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-456tm","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"1a018c55-846b-4dc2-992c-dc8fd82a6c67","resourceVersion":"1770","creationTimestamp":"2024-03-18T12:47:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"24e9a67e-3fda-47e7-8dcb-794b1920d0f1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:47:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"24e9a67e-3fda-47e7-8dcb-794b1920d0f1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 13:10:27.992275    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:27.992275    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:27.992275    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:27.992275    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:27.996144    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:10:27.996144    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:27.996144    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:27.996144    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:27.996144    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:27 GMT
	I0318 13:10:27.996144    2404 round_trippers.go:580]     Audit-Id: 30d39a30-ebbc-47ae-80ae-362552ea7136
	I0318 13:10:27.996144    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:27.996144    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:27.996737    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1877","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 13:10:27.997551    2404 pod_ready.go:102] pod "coredns-5dd5756b68-456tm" in "kube-system" namespace has status "Ready":"False"
	I0318 13:10:28.485472    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-456tm
	I0318 13:10:28.485549    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:28.485549    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:28.485549    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:28.488936    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:10:28.488936    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:28.488936    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:28.488936    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:28.488936    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:28 GMT
	I0318 13:10:28.488936    2404 round_trippers.go:580]     Audit-Id: 9b753aa9-7829-4098-b7ba-fce041c78ec0
	I0318 13:10:28.489259    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:28.489259    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:28.489462    2404 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-456tm","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"1a018c55-846b-4dc2-992c-dc8fd82a6c67","resourceVersion":"1770","creationTimestamp":"2024-03-18T12:47:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"24e9a67e-3fda-47e7-8dcb-794b1920d0f1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:47:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"24e9a67e-3fda-47e7-8dcb-794b1920d0f1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 13:10:28.489700    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:28.490245    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:28.490245    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:28.490245    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:28.492335    2404 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 13:10:28.492335    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:28.493360    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:28.493360    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:28.493360    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:28.493360    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:28 GMT
	I0318 13:10:28.493360    2404 round_trippers.go:580]     Audit-Id: cb7c4b4c-bcf4-4634-9f73-f7d68a6445bc
	I0318 13:10:28.493360    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:28.493631    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1877","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 13:10:28.986111    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-456tm
	I0318 13:10:28.986222    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:28.986222    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:28.986222    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:28.991126    2404 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:10:28.991649    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:28.991702    2404 round_trippers.go:580]     Audit-Id: e00fea20-0f98-473b-ab76-2d172d13ded9
	I0318 13:10:28.991702    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:28.991702    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:28.991702    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:28.991702    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:28.991702    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:28 GMT
	I0318 13:10:28.991702    2404 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-456tm","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"1a018c55-846b-4dc2-992c-dc8fd82a6c67","resourceVersion":"1770","creationTimestamp":"2024-03-18T12:47:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"24e9a67e-3fda-47e7-8dcb-794b1920d0f1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:47:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"24e9a67e-3fda-47e7-8dcb-794b1920d0f1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 13:10:28.992831    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:28.992916    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:28.992916    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:28.992916    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:28.995279    2404 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 13:10:28.996053    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:28.996053    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:28 GMT
	I0318 13:10:28.996053    2404 round_trippers.go:580]     Audit-Id: df6faf8d-042d-48a7-bc75-ab4d43f3545f
	I0318 13:10:28.996053    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:28.996053    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:28.996053    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:28.996136    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:28.996418    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1877","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 13:10:29.481925    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-456tm
	I0318 13:10:29.482144    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:29.482144    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:29.482144    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:29.485740    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:10:29.485740    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:29.485988    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:29 GMT
	I0318 13:10:29.485988    2404 round_trippers.go:580]     Audit-Id: adda2e37-95a3-470e-ba69-2c16041e02e3
	I0318 13:10:29.485988    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:29.485988    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:29.485988    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:29.485988    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:29.486215    2404 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-456tm","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"1a018c55-846b-4dc2-992c-dc8fd82a6c67","resourceVersion":"1770","creationTimestamp":"2024-03-18T12:47:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"24e9a67e-3fda-47e7-8dcb-794b1920d0f1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:47:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"24e9a67e-3fda-47e7-8dcb-794b1920d0f1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 13:10:29.486888    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:29.486991    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:29.486991    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:29.486991    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:29.490146    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:10:29.490146    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:29.490146    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:29.490146    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:29.490146    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:29.490146    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:29.490631    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:29 GMT
	I0318 13:10:29.490631    2404 round_trippers.go:580]     Audit-Id: 818b4998-01b4-498e-81b4-d60c8f66314e
	I0318 13:10:29.490874    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1877","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 13:10:29.980157    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-456tm
	I0318 13:10:29.980233    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:29.980233    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:29.980233    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:29.983639    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:10:29.984629    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:29.984629    2404 round_trippers.go:580]     Audit-Id: e5d48e9c-f06f-4490-9485-95af0a6cd373
	I0318 13:10:29.984629    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:29.984629    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:29.984629    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:29.984629    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:29.984629    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:29 GMT
	I0318 13:10:29.984865    2404 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-456tm","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"1a018c55-846b-4dc2-992c-dc8fd82a6c67","resourceVersion":"1770","creationTimestamp":"2024-03-18T12:47:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"24e9a67e-3fda-47e7-8dcb-794b1920d0f1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:47:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"24e9a67e-3fda-47e7-8dcb-794b1920d0f1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 13:10:29.985686    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:29.985686    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:29.985686    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:29.985686    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:29.988049    2404 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 13:10:29.988825    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:29.989025    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:29.989025    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:29.989025    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:29 GMT
	I0318 13:10:29.989116    2404 round_trippers.go:580]     Audit-Id: 3efc423c-7fd3-4c81-937a-437188194784
	I0318 13:10:29.989116    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:29.989116    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:29.989172    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1877","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 13:10:30.480134    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-456tm
	I0318 13:10:30.480371    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:30.480371    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:30.480371    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:30.484729    2404 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:10:30.484927    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:30.484927    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:30 GMT
	I0318 13:10:30.484927    2404 round_trippers.go:580]     Audit-Id: bf760774-86d0-4465-8188-fbeb61f5f83c
	I0318 13:10:30.484927    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:30.484927    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:30.484927    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:30.484927    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:30.485349    2404 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-456tm","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"1a018c55-846b-4dc2-992c-dc8fd82a6c67","resourceVersion":"1770","creationTimestamp":"2024-03-18T12:47:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"24e9a67e-3fda-47e7-8dcb-794b1920d0f1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:47:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"24e9a67e-3fda-47e7-8dcb-794b1920d0f1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 13:10:30.486129    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:30.486206    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:30.486206    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:30.486206    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:30.489485    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:10:30.489485    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:30.489683    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:30.489683    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:30.489683    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:30.489683    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:30.489683    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:30 GMT
	I0318 13:10:30.489683    2404 round_trippers.go:580]     Audit-Id: e4a46c67-ca0e-450a-a433-573834ae28ef
	I0318 13:10:30.490112    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1877","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 13:10:30.490112    2404 pod_ready.go:102] pod "coredns-5dd5756b68-456tm" in "kube-system" namespace has status "Ready":"False"
	I0318 13:10:30.976291    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-456tm
	I0318 13:10:30.976291    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:30.976291    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:30.976291    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:30.979896    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:10:30.979896    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:30.979896    2404 round_trippers.go:580]     Audit-Id: 022dea84-f15d-4408-b781-0dc856df1c22
	I0318 13:10:30.979896    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:30.979896    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:30.979896    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:30.979896    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:30.980977    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:30 GMT
	I0318 13:10:30.980977    2404 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-456tm","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"1a018c55-846b-4dc2-992c-dc8fd82a6c67","resourceVersion":"1770","creationTimestamp":"2024-03-18T12:47:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"24e9a67e-3fda-47e7-8dcb-794b1920d0f1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:47:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"24e9a67e-3fda-47e7-8dcb-794b1920d0f1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 13:10:30.981945    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:30.982065    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:30.982065    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:30.982065    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:30.985262    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:10:30.985262    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:30.985330    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:30.985330    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:30.985330    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:30.985330    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:30 GMT
	I0318 13:10:30.985330    2404 round_trippers.go:580]     Audit-Id: b7e349b6-a711-4f4d-8ca3-f2ad1a6f98ea
	I0318 13:10:30.985330    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:30.985419    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1877","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 13:10:31.482463    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-456tm
	I0318 13:10:31.482463    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:31.482463    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:31.482784    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:31.488517    2404 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 13:10:31.488517    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:31.488517    2404 round_trippers.go:580]     Audit-Id: b84f4f87-4c2c-4f42-94ad-6189ebd1f342
	I0318 13:10:31.488517    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:31.488517    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:31.488517    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:31.488517    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:31.488517    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:31 GMT
	I0318 13:10:31.489206    2404 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-456tm","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"1a018c55-846b-4dc2-992c-dc8fd82a6c67","resourceVersion":"1770","creationTimestamp":"2024-03-18T12:47:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"24e9a67e-3fda-47e7-8dcb-794b1920d0f1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:47:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"24e9a67e-3fda-47e7-8dcb-794b1920d0f1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 13:10:31.489903    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:31.489966    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:31.489966    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:31.489966    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:31.492581    2404 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 13:10:31.492581    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:31.492581    2404 round_trippers.go:580]     Audit-Id: 66dff697-da3e-4a83-9268-15ccac111ab1
	I0318 13:10:31.492581    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:31.492581    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:31.492581    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:31.492581    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:31.492581    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:31 GMT
	I0318 13:10:31.493265    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1877","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 13:10:31.985340    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-456tm
	I0318 13:10:31.985399    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:31.985399    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:31.985399    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:31.989173    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:10:31.989173    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:31.989173    2404 round_trippers.go:580]     Audit-Id: f0badea2-2196-41c4-8aa9-b270368504ac
	I0318 13:10:31.989173    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:31.989734    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:31.989734    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:31.989734    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:31.989734    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:31 GMT
	I0318 13:10:31.989936    2404 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-456tm","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"1a018c55-846b-4dc2-992c-dc8fd82a6c67","resourceVersion":"1770","creationTimestamp":"2024-03-18T12:47:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"24e9a67e-3fda-47e7-8dcb-794b1920d0f1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:47:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"24e9a67e-3fda-47e7-8dcb-794b1920d0f1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 13:10:31.990607    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:31.990680    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:31.990680    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:31.990680    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:31.993957    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:10:31.993957    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:31.993957    2404 round_trippers.go:580]     Audit-Id: 188b75ac-1f3a-43d6-8f83-7e1062bb8012
	I0318 13:10:31.993957    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:31.993957    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:31.993957    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:31.993957    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:31.993957    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:31 GMT
	I0318 13:10:31.994925    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1877","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 13:10:32.486951    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-456tm
	I0318 13:10:32.487069    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:32.487069    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:32.487069    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:32.491821    2404 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:10:32.491821    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:32.491874    2404 round_trippers.go:580]     Audit-Id: cace9841-57d9-4a9c-92cd-08e35484b6c4
	I0318 13:10:32.491898    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:32.491898    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:32.491898    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:32.491898    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:32.491898    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:32 GMT
	I0318 13:10:32.492128    2404 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-456tm","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"1a018c55-846b-4dc2-992c-dc8fd82a6c67","resourceVersion":"1770","creationTimestamp":"2024-03-18T12:47:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"24e9a67e-3fda-47e7-8dcb-794b1920d0f1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:47:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"24e9a67e-3fda-47e7-8dcb-794b1920d0f1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 13:10:32.492691    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:32.492845    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:32.492845    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:32.492845    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:32.497249    2404 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:10:32.497249    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:32.497249    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:32 GMT
	I0318 13:10:32.497249    2404 round_trippers.go:580]     Audit-Id: 1634843b-e912-4a6e-b912-1718ccb20092
	I0318 13:10:32.497249    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:32.497249    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:32.497249    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:32.497249    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:32.497788    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1877","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 13:10:32.497966    2404 pod_ready.go:102] pod "coredns-5dd5756b68-456tm" in "kube-system" namespace has status "Ready":"False"
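The repeating GET pairs above are minikube's readiness poll: roughly every 500 ms it re-fetches the CoreDNS pod and its node, then logs `pod_ready.go:102` when the pod's `Ready` condition is still `"False"`. A minimal sketch of that condition check, using simplified stand-in types rather than minikube's or client-go's actual ones:

```go
package main

import (
	"fmt"
	"time"
)

// Simplified stand-ins for the Pod status fields visible in the
// response bodies above (hypothetical types, not the real k8s API).
type PodCondition struct {
	Type   string // e.g. "Ready"
	Status string // "True", "False", or "Unknown"
}

type Pod struct {
	Name       string
	Conditions []PodCondition
}

// isPodReady mirrors the check behind the pod_ready.go:102 line:
// the pod counts as ready only when its "Ready" condition is "True".
func isPodReady(p Pod) bool {
	for _, c := range p.Conditions {
		if c.Type == "Ready" {
			return c.Status == "True"
		}
	}
	return false
}

func main() {
	pod := Pod{
		Name:       "coredns-5dd5756b68-456tm",
		Conditions: []PodCondition{{Type: "Ready", Status: "False"}},
	}
	// The real loop re-fetches the pod from the API server on each
	// iteration; this only shows the shape of the poll.
	for i := 0; i < 3; i++ {
		if isPodReady(pod) {
			fmt.Println("pod is ready")
			return
		}
		fmt.Printf("pod %q in \"kube-system\" namespace has status \"Ready\":\"False\"\n", pod.Name)
		time.Sleep(10 * time.Millisecond) // real interval is ~500ms
	}
}
```

The loop never flips to ready here because the sketch does not refresh the pod; in the log, each iteration is a fresh API read, which is why the node object is fetched alongside the pod (to report node conditions if the wait times out).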
	I0318 13:10:32.985710    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-456tm
	I0318 13:10:32.985788    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:32.985860    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:32.985860    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:32.989499    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:10:32.990073    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:32.990073    2404 round_trippers.go:580]     Audit-Id: 62c7f7be-a117-4726-a4d7-c0dbb18903c2
	I0318 13:10:32.990073    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:32.990073    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:32.990073    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:32.990073    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:32.990073    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:32 GMT
	I0318 13:10:32.990154    2404 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-456tm","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"1a018c55-846b-4dc2-992c-dc8fd82a6c67","resourceVersion":"1770","creationTimestamp":"2024-03-18T12:47:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"24e9a67e-3fda-47e7-8dcb-794b1920d0f1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:47:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"24e9a67e-3fda-47e7-8dcb-794b1920d0f1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 13:10:32.991130    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:32.991130    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:32.991130    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:32.991130    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:32.994309    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:10:32.994309    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:32.994697    2404 round_trippers.go:580]     Audit-Id: ab3116cc-818a-4fd0-be71-5f5d3533a649
	I0318 13:10:32.994697    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:32.994697    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:32.994697    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:32.994697    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:32.994697    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:32 GMT
	I0318 13:10:32.995048    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1877","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 13:10:33.472660    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-456tm
	I0318 13:10:33.472735    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:33.472735    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:33.472805    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:33.478531    2404 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 13:10:33.478531    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:33.478531    2404 round_trippers.go:580]     Audit-Id: a2c726e5-1509-4745-9875-edde56fd2629
	I0318 13:10:33.478531    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:33.478531    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:33.478531    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:33.478531    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:33.478531    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:33 GMT
	I0318 13:10:33.478531    2404 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-456tm","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"1a018c55-846b-4dc2-992c-dc8fd82a6c67","resourceVersion":"1770","creationTimestamp":"2024-03-18T12:47:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"24e9a67e-3fda-47e7-8dcb-794b1920d0f1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:47:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"24e9a67e-3fda-47e7-8dcb-794b1920d0f1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 13:10:33.479249    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:33.479249    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:33.479249    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:33.479249    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:33.485713    2404 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0318 13:10:33.485713    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:33.485713    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:33.485713    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:33.485713    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:33.485713    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:33.485713    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:33 GMT
	I0318 13:10:33.485713    2404 round_trippers.go:580]     Audit-Id: 65a5867d-36a7-4b11-9e34-c388e1944bf3
	I0318 13:10:33.486414    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1877","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 13:10:33.974147    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-456tm
	I0318 13:10:33.974147    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:33.974147    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:33.974147    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:33.978424    2404 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:10:33.978424    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:33.978424    2404 round_trippers.go:580]     Audit-Id: c82e36db-de54-4064-9b82-0a6f523528a3
	I0318 13:10:33.978424    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:33.978424    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:33.978424    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:33.978424    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:33.978424    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:33 GMT
	I0318 13:10:33.978697    2404 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-456tm","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"1a018c55-846b-4dc2-992c-dc8fd82a6c67","resourceVersion":"1770","creationTimestamp":"2024-03-18T12:47:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"24e9a67e-3fda-47e7-8dcb-794b1920d0f1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:47:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"24e9a67e-3fda-47e7-8dcb-794b1920d0f1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 13:10:33.979370    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:33.979370    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:33.979370    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:33.979370    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:33.982390    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:10:33.982390    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:33.982390    2404 round_trippers.go:580]     Audit-Id: 9e581598-ba84-4d0d-b258-433897e01a34
	I0318 13:10:33.982390    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:33.982559    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:33.982559    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:33.982559    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:33.982559    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:33 GMT
	I0318 13:10:33.982773    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1877","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 13:10:34.473521    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-456tm
	I0318 13:10:34.473521    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:34.473521    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:34.473521    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:34.477520    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:10:34.477520    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:34.477520    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:34.477520    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:34.477611    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:34.477611    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:34 GMT
	I0318 13:10:34.477611    2404 round_trippers.go:580]     Audit-Id: 6f670d38-b757-4647-ba3b-bfa2b39ff2c6
	I0318 13:10:34.477611    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:34.477661    2404 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-456tm","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"1a018c55-846b-4dc2-992c-dc8fd82a6c67","resourceVersion":"1770","creationTimestamp":"2024-03-18T12:47:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"24e9a67e-3fda-47e7-8dcb-794b1920d0f1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:47:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"24e9a67e-3fda-47e7-8dcb-794b1920d0f1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 13:10:34.478712    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:34.478712    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:34.478712    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:34.478712    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:34.481563    2404 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 13:10:34.481563    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:34.481842    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:34.481842    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:34 GMT
	I0318 13:10:34.481842    2404 round_trippers.go:580]     Audit-Id: 5652c4f4-f3cc-496f-9275-687adb772f28
	I0318 13:10:34.481842    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:34.481842    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:34.481842    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:34.481979    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1877","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 13:10:34.974867    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-456tm
	I0318 13:10:34.974995    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:34.974995    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:34.974995    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:34.980342    2404 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 13:10:34.981437    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:34.981551    2404 round_trippers.go:580]     Audit-Id: 59a88a89-e66a-467b-947b-40492ac89e12
	I0318 13:10:34.981551    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:34.981551    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:34.981551    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:34.981551    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:34.981551    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:34 GMT
	I0318 13:10:34.981769    2404 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-456tm","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"1a018c55-846b-4dc2-992c-dc8fd82a6c67","resourceVersion":"1770","creationTimestamp":"2024-03-18T12:47:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"24e9a67e-3fda-47e7-8dcb-794b1920d0f1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:47:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"24e9a67e-3fda-47e7-8dcb-794b1920d0f1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 13:10:34.982535    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:34.982619    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:34.982619    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:34.982619    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:34.984909    2404 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 13:10:34.984909    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:34.984909    2404 round_trippers.go:580]     Audit-Id: 089b7c01-1014-4b75-bdfc-91bb2ca95461
	I0318 13:10:34.984909    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:34.984909    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:34.984909    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:34.984909    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:34.984909    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:34 GMT
	I0318 13:10:34.986141    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1877","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 13:10:34.986596    2404 pod_ready.go:102] pod "coredns-5dd5756b68-456tm" in "kube-system" namespace has status "Ready":"False"
	I0318 13:10:35.479394    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-456tm
	I0318 13:10:35.479394    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:35.479394    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:35.479394    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:35.483072    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:10:35.483072    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:35.483072    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:35.483072    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:35.483072    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:35.483939    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:35 GMT
	I0318 13:10:35.483939    2404 round_trippers.go:580]     Audit-Id: 388f4493-f322-4f70-921c-e314e4b7e41d
	I0318 13:10:35.483939    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:35.484243    2404 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-456tm","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"1a018c55-846b-4dc2-992c-dc8fd82a6c67","resourceVersion":"1770","creationTimestamp":"2024-03-18T12:47:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"24e9a67e-3fda-47e7-8dcb-794b1920d0f1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:47:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"24e9a67e-3fda-47e7-8dcb-794b1920d0f1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 13:10:35.484560    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:35.484560    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:35.484560    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:35.484560    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:35.488209    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:10:35.488699    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:35.488699    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:35.488761    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:35 GMT
	I0318 13:10:35.488761    2404 round_trippers.go:580]     Audit-Id: 3afb7b9d-d6a4-45a6-9a44-b8b4d95bbccc
	I0318 13:10:35.488761    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:35.488761    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:35.488761    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:35.488761    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1877","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 13:10:35.981183    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-456tm
	I0318 13:10:35.981183    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:35.981183    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:35.981183    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:35.985146    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:10:35.985146    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:35.986181    2404 round_trippers.go:580]     Audit-Id: b48a2525-b880-4539-96ec-fc4f64bbc024
	I0318 13:10:35.986181    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:35.986210    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:35.986210    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:35.986210    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:35.986210    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:35 GMT
	I0318 13:10:35.986369    2404 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-456tm","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"1a018c55-846b-4dc2-992c-dc8fd82a6c67","resourceVersion":"1770","creationTimestamp":"2024-03-18T12:47:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"24e9a67e-3fda-47e7-8dcb-794b1920d0f1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:47:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"24e9a67e-3fda-47e7-8dcb-794b1920d0f1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 13:10:35.987343    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:35.987343    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:35.987451    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:35.987451    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:35.990552    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:10:35.990552    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:35.990552    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:35.990552    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:35.990552    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:35.990552    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:35.990552    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:35 GMT
	I0318 13:10:35.990552    2404 round_trippers.go:580]     Audit-Id: f9e4aa2f-ea72-4cf7-a8da-8010b5b01a22
	I0318 13:10:35.990552    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1877","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 13:10:36.478393    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-456tm
	I0318 13:10:36.478464    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:36.478464    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:36.478464    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:36.482282    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:10:36.482282    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:36.482282    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:36.483023    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:36.483023    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:36.483023    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:36 GMT
	I0318 13:10:36.483023    2404 round_trippers.go:580]     Audit-Id: f731bacd-ce18-4f83-9a42-9991215a912b
	I0318 13:10:36.483023    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:36.483202    2404 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-456tm","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"1a018c55-846b-4dc2-992c-dc8fd82a6c67","resourceVersion":"1770","creationTimestamp":"2024-03-18T12:47:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"24e9a67e-3fda-47e7-8dcb-794b1920d0f1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:47:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"24e9a67e-3fda-47e7-8dcb-794b1920d0f1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 13:10:36.483955    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:36.483955    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:36.483955    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:36.484041    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:36.487021    2404 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 13:10:36.487241    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:36.487241    2404 round_trippers.go:580]     Audit-Id: 21337908-e894-4ae3-a36c-3101d291fc50
	I0318 13:10:36.487241    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:36.487241    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:36.487241    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:36.487241    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:36.487333    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:36 GMT
	I0318 13:10:36.487672    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1877","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 13:10:36.980680    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-456tm
	I0318 13:10:36.980680    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:36.980680    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:36.980680    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:36.985304    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:10:36.985348    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:36.985426    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:36.985426    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:36.985426    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:36.985426    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:36.985426    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:36 GMT
	I0318 13:10:36.985426    2404 round_trippers.go:580]     Audit-Id: 26456e40-c92d-4d3f-81b9-9a2789ea4b89
	I0318 13:10:36.985647    2404 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-456tm","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"1a018c55-846b-4dc2-992c-dc8fd82a6c67","resourceVersion":"1770","creationTimestamp":"2024-03-18T12:47:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"24e9a67e-3fda-47e7-8dcb-794b1920d0f1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:47:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"24e9a67e-3fda-47e7-8dcb-794b1920d0f1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 13:10:36.986465    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:36.986465    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:36.986465    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:36.986556    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:36.990416    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:10:36.991163    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:36.991244    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:36.991282    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:36 GMT
	I0318 13:10:36.991282    2404 round_trippers.go:580]     Audit-Id: 25592187-1131-420e-bf8e-678166d05c76
	I0318 13:10:36.991282    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:36.991282    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:36.991282    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:36.992536    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1877","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 13:10:36.993054    2404 pod_ready.go:102] pod "coredns-5dd5756b68-456tm" in "kube-system" namespace has status "Ready":"False"
	I0318 13:10:37.484360    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-456tm
	I0318 13:10:37.484360    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:37.484360    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:37.484360    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:37.488962    2404 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:10:37.489183    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:37.489183    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:37.489183    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:37.489238    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:37.489238    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:37.489238    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:37 GMT
	I0318 13:10:37.489238    2404 round_trippers.go:580]     Audit-Id: f37d7a9a-b1be-405e-b7f9-d8436bf54c63
	I0318 13:10:37.489779    2404 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-456tm","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"1a018c55-846b-4dc2-992c-dc8fd82a6c67","resourceVersion":"1770","creationTimestamp":"2024-03-18T12:47:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"24e9a67e-3fda-47e7-8dcb-794b1920d0f1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:47:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"24e9a67e-3fda-47e7-8dcb-794b1920d0f1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 13:10:37.490640    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:37.490640    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:37.490640    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:37.490640    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:37.493408    2404 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 13:10:37.493408    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:37.493408    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:37.493408    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:37.493408    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:37.493408    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:37 GMT
	I0318 13:10:37.493408    2404 round_trippers.go:580]     Audit-Id: e92212cc-2569-4cda-a6aa-88559f11adde
	I0318 13:10:37.493408    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:37.494520    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1877","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 13:10:37.981483    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-456tm
	I0318 13:10:37.981483    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:37.981483    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:37.981483    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:37.984967    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:10:37.985723    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:37.985723    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:37 GMT
	I0318 13:10:37.985723    2404 round_trippers.go:580]     Audit-Id: a31be817-ad2c-4165-acb8-efef0f2c3742
	I0318 13:10:37.985723    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:37.985723    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:37.985723    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:37.985723    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:37.986009    2404 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-456tm","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"1a018c55-846b-4dc2-992c-dc8fd82a6c67","resourceVersion":"1770","creationTimestamp":"2024-03-18T12:47:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"24e9a67e-3fda-47e7-8dcb-794b1920d0f1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:47:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"24e9a67e-3fda-47e7-8dcb-794b1920d0f1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 13:10:37.986196    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:37.986196    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:37.986196    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:37.986763    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:37.988971    2404 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 13:10:37.988971    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:37.989814    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:37.989814    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:37.989814    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:37 GMT
	I0318 13:10:37.989814    2404 round_trippers.go:580]     Audit-Id: f78565e7-f9b9-4083-83d2-3ee887d3e0a7
	I0318 13:10:37.989814    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:37.989814    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:37.990095    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1877","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 13:10:38.481378    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-456tm
	I0318 13:10:38.481471    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:38.481471    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:38.481471    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:38.485539    2404 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:10:38.485539    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:38.485539    2404 round_trippers.go:580]     Audit-Id: 010b6dac-358f-4fba-8af3-e9badbabb4e4
	I0318 13:10:38.485539    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:38.485539    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:38.485539    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:38.485539    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:38.485539    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:38 GMT
	I0318 13:10:38.486209    2404 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-456tm","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"1a018c55-846b-4dc2-992c-dc8fd82a6c67","resourceVersion":"1770","creationTimestamp":"2024-03-18T12:47:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"24e9a67e-3fda-47e7-8dcb-794b1920d0f1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:47:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"24e9a67e-3fda-47e7-8dcb-794b1920d0f1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 13:10:38.486947    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:38.487090    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:38.487090    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:38.487090    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:38.490278    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:10:38.490278    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:38.491054    2404 round_trippers.go:580]     Audit-Id: dbab6823-e52d-4362-984b-b728d33af67c
	I0318 13:10:38.491054    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:38.491054    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:38.491054    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:38.491054    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:38.491054    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:38 GMT
	I0318 13:10:38.491054    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1877","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 13:10:38.983129    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-456tm
	I0318 13:10:38.983206    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:38.983276    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:38.983276    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:38.986014    2404 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 13:10:38.986014    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:38.986933    2404 round_trippers.go:580]     Audit-Id: 92a900c9-f20c-4f14-a92f-755c600b2b17
	I0318 13:10:38.987036    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:38.987036    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:38.987036    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:38.987036    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:38.987036    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:38 GMT
	I0318 13:10:38.987215    2404 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-456tm","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"1a018c55-846b-4dc2-992c-dc8fd82a6c67","resourceVersion":"1770","creationTimestamp":"2024-03-18T12:47:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"24e9a67e-3fda-47e7-8dcb-794b1920d0f1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:47:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"24e9a67e-3fda-47e7-8dcb-794b1920d0f1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 13:10:38.987935    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:38.987935    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:38.987935    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:38.988102    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:38.991002    2404 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0318 13:10:38.991069    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:38.991069    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:38.991069    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:38 GMT
	I0318 13:10:38.991069    2404 round_trippers.go:580]     Audit-Id: a1cf8aed-f121-421f-8e73-1f44cfd9fc8e
	I0318 13:10:38.991069    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:38.991069    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:38.991069    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:38.991245    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1877","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 13:10:39.480236    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-456tm
	I0318 13:10:39.480436    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:39.480436    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:39.480436    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:39.485038    2404 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:10:39.485038    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:39.485038    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:39.485149    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:39.485149    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:39.485149    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:39 GMT
	I0318 13:10:39.485149    2404 round_trippers.go:580]     Audit-Id: 436de0b2-d1ab-4033-bff4-2933f5e21ed8
	I0318 13:10:39.485149    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:39.485333    2404 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-456tm","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"1a018c55-846b-4dc2-992c-dc8fd82a6c67","resourceVersion":"1770","creationTimestamp":"2024-03-18T12:47:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"24e9a67e-3fda-47e7-8dcb-794b1920d0f1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:47:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"24e9a67e-3fda-47e7-8dcb-794b1920d0f1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 13:10:39.486166    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:39.486166    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:39.486166    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:39.486166    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:39.489815    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:10:39.490421    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:39.490421    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:39.490421    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:39 GMT
	I0318 13:10:39.490421    2404 round_trippers.go:580]     Audit-Id: c28d4f29-5617-4b95-806f-a8c2b9275398
	I0318 13:10:39.490421    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:39.490421    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:39.490421    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:39.490948    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1877","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 13:10:39.491376    2404 pod_ready.go:102] pod "coredns-5dd5756b68-456tm" in "kube-system" namespace has status "Ready":"False"
	I0318 13:10:39.978579    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-456tm
	I0318 13:10:39.978825    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:39.978825    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:39.978825    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:39.981705    2404 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 13:10:39.982739    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:39.982739    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:39.982739    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:39 GMT
	I0318 13:10:39.982739    2404 round_trippers.go:580]     Audit-Id: 721e4391-2318-4fe1-9070-ea07d49524e0
	I0318 13:10:39.982739    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:39.982739    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:39.982850    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:39.983453    2404 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-456tm","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"1a018c55-846b-4dc2-992c-dc8fd82a6c67","resourceVersion":"1770","creationTimestamp":"2024-03-18T12:47:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"24e9a67e-3fda-47e7-8dcb-794b1920d0f1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:47:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"24e9a67e-3fda-47e7-8dcb-794b1920d0f1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 13:10:39.984261    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:39.984331    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:39.984331    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:39.984331    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:39.987609    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:10:39.987609    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:39.987993    2404 round_trippers.go:580]     Audit-Id: 234ae04e-f8e6-434b-af6c-1d0d64779d5a
	I0318 13:10:39.987993    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:39.987993    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:39.987993    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:39.987993    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:39.987993    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:39 GMT
	I0318 13:10:39.988271    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1877","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 13:10:40.478642    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-456tm
	I0318 13:10:40.478642    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:40.478726    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:40.478726    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:40.483119    2404 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:10:40.483119    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:40.483119    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:40.483119    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:40.483474    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:40 GMT
	I0318 13:10:40.483474    2404 round_trippers.go:580]     Audit-Id: dd43f2da-473b-4843-b835-92a7014ce945
	I0318 13:10:40.483474    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:40.483474    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:40.483474    2404 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-456tm","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"1a018c55-846b-4dc2-992c-dc8fd82a6c67","resourceVersion":"1770","creationTimestamp":"2024-03-18T12:47:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"24e9a67e-3fda-47e7-8dcb-794b1920d0f1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:47:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"24e9a67e-3fda-47e7-8dcb-794b1920d0f1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 13:10:40.484224    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:40.484796    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:40.484796    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:40.484796    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:40.489248    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:10:40.489307    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:40.489307    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:40 GMT
	I0318 13:10:40.489307    2404 round_trippers.go:580]     Audit-Id: c03ef6db-de90-4f04-94b0-3070e7b4e209
	I0318 13:10:40.489307    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:40.489307    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:40.489307    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:40.489307    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:40.489307    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1877","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 13:10:40.979483    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-456tm
	I0318 13:10:40.979573    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:40.979573    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:40.979573    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:40.983766    2404 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:10:40.984117    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:40.984117    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:40.984117    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:40.984117    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:40 GMT
	I0318 13:10:40.984117    2404 round_trippers.go:580]     Audit-Id: b802f491-d189-4191-9704-fc5fc255298c
	I0318 13:10:40.984117    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:40.984117    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:40.984117    2404 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-456tm","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"1a018c55-846b-4dc2-992c-dc8fd82a6c67","resourceVersion":"1770","creationTimestamp":"2024-03-18T12:47:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"24e9a67e-3fda-47e7-8dcb-794b1920d0f1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:47:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"24e9a67e-3fda-47e7-8dcb-794b1920d0f1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 13:10:40.985128    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:40.985183    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:40.985183    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:40.985183    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:40.987127    2404 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0318 13:10:40.988110    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:40.988110    2404 round_trippers.go:580]     Audit-Id: 4cc73082-b471-4ce2-ac48-ef04e1336155
	I0318 13:10:40.988164    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:40.988164    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:40.988164    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:40.988164    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:40.988164    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:40 GMT
	I0318 13:10:40.988164    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1877","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 13:10:41.479994    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-456tm
	I0318 13:10:41.480246    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:41.480246    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:41.480246    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:41.483664    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:10:41.483664    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:41.483664    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:41.483664    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:41 GMT
	I0318 13:10:41.483664    2404 round_trippers.go:580]     Audit-Id: 92b2fc3a-3119-4a01-9c04-7efb853bd885
	I0318 13:10:41.483664    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:41.483664    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:41.483664    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:41.484681    2404 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-456tm","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"1a018c55-846b-4dc2-992c-dc8fd82a6c67","resourceVersion":"1770","creationTimestamp":"2024-03-18T12:47:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"24e9a67e-3fda-47e7-8dcb-794b1920d0f1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:47:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"24e9a67e-3fda-47e7-8dcb-794b1920d0f1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 13:10:41.485020    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:41.485020    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:41.485020    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:41.485020    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:41.488639    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:10:41.488639    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:41.488639    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:41.488639    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:41.488639    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:41.489475    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:41.489475    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:41 GMT
	I0318 13:10:41.489475    2404 round_trippers.go:580]     Audit-Id: 6392c288-f3c5-41f3-b1c8-9d0f3a14e2f9
	I0318 13:10:41.489621    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1877","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 13:10:41.982677    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-456tm
	I0318 13:10:41.982755    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:41.982755    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:41.982755    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:41.987446    2404 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:10:41.987446    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:41.988059    2404 round_trippers.go:580]     Audit-Id: cd3e56d1-5114-4b89-b2b6-7919fb813ef3
	I0318 13:10:41.988059    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:41.988059    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:41.988059    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:41.988059    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:41.988059    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:41 GMT
	I0318 13:10:41.988115    2404 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-456tm","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"1a018c55-846b-4dc2-992c-dc8fd82a6c67","resourceVersion":"1770","creationTimestamp":"2024-03-18T12:47:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"24e9a67e-3fda-47e7-8dcb-794b1920d0f1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:47:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"24e9a67e-3fda-47e7-8dcb-794b1920d0f1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 13:10:41.989161    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:41.989302    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:41.989302    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:41.989302    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:41.992327    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:10:41.992327    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:41.992327    2404 round_trippers.go:580]     Audit-Id: 35aa62e8-50de-42c0-b238-7ee56c616ee1
	I0318 13:10:41.992327    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:41.992767    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:41.992767    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:41.992767    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:41.992880    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:41 GMT
	I0318 13:10:41.992968    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1877","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 13:10:41.992968    2404 pod_ready.go:102] pod "coredns-5dd5756b68-456tm" in "kube-system" namespace has status "Ready":"False"
	I0318 13:10:42.476534    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-456tm
	I0318 13:10:42.476534    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:42.476534    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:42.476534    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:42.480431    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:10:42.481100    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:42.481100    2404 round_trippers.go:580]     Audit-Id: 7e766fdf-4966-48ad-94b1-76c8f57f4cc6
	I0318 13:10:42.481100    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:42.481100    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:42.481100    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:42.481100    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:42.481100    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:42 GMT
	I0318 13:10:42.481311    2404 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-456tm","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"1a018c55-846b-4dc2-992c-dc8fd82a6c67","resourceVersion":"1770","creationTimestamp":"2024-03-18T12:47:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"24e9a67e-3fda-47e7-8dcb-794b1920d0f1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:47:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"24e9a67e-3fda-47e7-8dcb-794b1920d0f1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 13:10:42.482063    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:42.482089    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:42.482089    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:42.482089    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:42.485757    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:10:42.485911    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:42.485911    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:42.485911    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:42.485911    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:42.485911    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:42 GMT
	I0318 13:10:42.485911    2404 round_trippers.go:580]     Audit-Id: a61907bd-32bf-40c7-a570-157cc9b50888
	I0318 13:10:42.485911    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:42.486182    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1877","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 13:10:42.978589    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-456tm
	I0318 13:10:42.978589    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:42.978589    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:42.978589    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:42.983180    2404 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:10:42.983988    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:42.983988    2404 round_trippers.go:580]     Audit-Id: a79fbab4-0463-484a-a16e-fbb7cf535df9
	I0318 13:10:42.983988    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:42.983988    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:42.983988    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:42.983988    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:42.983988    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:42 GMT
	I0318 13:10:42.984359    2404 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-456tm","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"1a018c55-846b-4dc2-992c-dc8fd82a6c67","resourceVersion":"1770","creationTimestamp":"2024-03-18T12:47:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"24e9a67e-3fda-47e7-8dcb-794b1920d0f1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:47:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"24e9a67e-3fda-47e7-8dcb-794b1920d0f1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 13:10:42.985533    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:42.985533    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:42.985533    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:42.985533    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:42.988773    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:10:42.988773    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:42.988773    2404 round_trippers.go:580]     Audit-Id: f8b891a6-09b3-41da-a527-502980bde4d8
	I0318 13:10:42.988773    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:42.988773    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:42.988773    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:42.988773    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:42.988773    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:42 GMT
	I0318 13:10:42.989331    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1877","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 13:10:43.482408    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-456tm
	I0318 13:10:43.482494    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:43.482494    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:43.482494    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:43.487256    2404 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:10:43.487292    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:43.487292    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:43.487292    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:43 GMT
	I0318 13:10:43.487292    2404 round_trippers.go:580]     Audit-Id: f64ccdab-d361-4793-8305-b2eec35799f1
	I0318 13:10:43.487292    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:43.487292    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:43.487292    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:43.487292    2404 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-456tm","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"1a018c55-846b-4dc2-992c-dc8fd82a6c67","resourceVersion":"1770","creationTimestamp":"2024-03-18T12:47:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"24e9a67e-3fda-47e7-8dcb-794b1920d0f1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:47:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"24e9a67e-3fda-47e7-8dcb-794b1920d0f1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 13:10:43.488523    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:43.488523    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:43.488681    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:43.488681    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:43.494191    2404 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 13:10:43.494217    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:43.494217    2404 round_trippers.go:580]     Audit-Id: ce54be56-b7be-4e13-ae9a-94b6677eded6
	I0318 13:10:43.494217    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:43.494290    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:43.494290    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:43.494361    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:43.494361    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:43 GMT
	I0318 13:10:43.495047    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1877","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 13:10:43.984003    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-456tm
	I0318 13:10:43.984113    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:43.984113    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:43.984113    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:43.989624    2404 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 13:10:43.989735    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:43.989735    2404 round_trippers.go:580]     Audit-Id: e4c938be-39b2-487e-b251-9a7198c283bf
	I0318 13:10:43.989735    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:43.989735    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:43.989735    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:43.989735    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:43.989735    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:43 GMT
	I0318 13:10:43.989929    2404 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-456tm","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"1a018c55-846b-4dc2-992c-dc8fd82a6c67","resourceVersion":"1770","creationTimestamp":"2024-03-18T12:47:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"24e9a67e-3fda-47e7-8dcb-794b1920d0f1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:47:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"24e9a67e-3fda-47e7-8dcb-794b1920d0f1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 13:10:43.990786    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:43.990786    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:43.990786    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:43.990786    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:43.993093    2404 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 13:10:43.993093    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:43.993774    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:43 GMT
	I0318 13:10:43.993774    2404 round_trippers.go:580]     Audit-Id: 8aa7f6c5-9d4d-4d7b-bd5b-432a177f9a05
	I0318 13:10:43.993774    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:43.993881    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:43.993881    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:43.993881    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:43.993956    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1877","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 13:10:43.994850    2404 pod_ready.go:102] pod "coredns-5dd5756b68-456tm" in "kube-system" namespace has status "Ready":"False"
	I0318 13:10:44.479327    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-456tm
	I0318 13:10:44.479327    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:44.479327    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:44.479327    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:44.483062    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:10:44.483062    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:44.484093    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:44.484093    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:44.484142    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:44 GMT
	I0318 13:10:44.484142    2404 round_trippers.go:580]     Audit-Id: fd9a6c25-80c1-40cb-ac1d-ccbb829d6a19
	I0318 13:10:44.484142    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:44.484142    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:44.484413    2404 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-456tm","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"1a018c55-846b-4dc2-992c-dc8fd82a6c67","resourceVersion":"1770","creationTimestamp":"2024-03-18T12:47:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"24e9a67e-3fda-47e7-8dcb-794b1920d0f1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:47:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"24e9a67e-3fda-47e7-8dcb-794b1920d0f1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 13:10:44.485368    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:44.485368    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:44.485471    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:44.485471    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:44.487912    2404 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 13:10:44.487912    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:44.487912    2404 round_trippers.go:580]     Audit-Id: 3cb33a90-e00d-4cf9-ae1c-1629828c6ecd
	I0318 13:10:44.488864    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:44.488864    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:44.488864    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:44.488864    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:44.488864    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:44 GMT
	I0318 13:10:44.489273    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1877","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 13:10:44.978731    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-456tm
	I0318 13:10:44.978731    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:44.978731    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:44.978810    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:44.987286    2404 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0318 13:10:44.987286    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:44.987286    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:44.987286    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:44.987286    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:44.987286    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:44 GMT
	I0318 13:10:44.987286    2404 round_trippers.go:580]     Audit-Id: d225c022-38d6-4362-af60-a8f9fd56cfd3
	I0318 13:10:44.987286    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:44.987506    2404 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-456tm","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"1a018c55-846b-4dc2-992c-dc8fd82a6c67","resourceVersion":"1770","creationTimestamp":"2024-03-18T12:47:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"24e9a67e-3fda-47e7-8dcb-794b1920d0f1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:47:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"24e9a67e-3fda-47e7-8dcb-794b1920d0f1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 13:10:44.988185    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:44.988185    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:44.988272    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:44.988272    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:44.990440    2404 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 13:10:44.990440    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:44.991486    2404 round_trippers.go:580]     Audit-Id: cd99a42b-a806-4b58-98f9-acc1c4cb7d12
	I0318 13:10:44.991486    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:44.991486    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:44.991486    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:44.991486    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:44.991531    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:44 GMT
	I0318 13:10:44.991774    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1877","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 13:10:45.482292    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-456tm
	I0318 13:10:45.482510    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:45.482510    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:45.482510    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:45.486557    2404 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:10:45.487443    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:45.487443    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:45.487443    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:45 GMT
	I0318 13:10:45.487443    2404 round_trippers.go:580]     Audit-Id: fe2a362d-6db7-4b5b-8fa3-4d7e96bca914
	I0318 13:10:45.487443    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:45.487538    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:45.487538    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:45.487690    2404 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-456tm","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"1a018c55-846b-4dc2-992c-dc8fd82a6c67","resourceVersion":"1770","creationTimestamp":"2024-03-18T12:47:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"24e9a67e-3fda-47e7-8dcb-794b1920d0f1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:47:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"24e9a67e-3fda-47e7-8dcb-794b1920d0f1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 13:10:45.488312    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:45.488312    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:45.488312    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:45.488312    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:45.491902    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:10:45.491902    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:45.491902    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:45.491902    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:45.492257    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:45 GMT
	I0318 13:10:45.492257    2404 round_trippers.go:580]     Audit-Id: 067bf348-8605-4e13-8d90-1e5893011f6b
	I0318 13:10:45.492257    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:45.492257    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:45.492389    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1877","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 13:10:45.986705    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-456tm
	I0318 13:10:45.986705    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:45.986705    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:45.986705    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:45.991116    2404 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:10:45.991116    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:45.991116    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:45.991116    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:45.991116    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:45 GMT
	I0318 13:10:45.991116    2404 round_trippers.go:580]     Audit-Id: 7150c9ca-0eae-4421-83e3-83f94b683298
	I0318 13:10:45.991116    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:45.991116    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:45.991116    2404 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-456tm","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"1a018c55-846b-4dc2-992c-dc8fd82a6c67","resourceVersion":"1770","creationTimestamp":"2024-03-18T12:47:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"24e9a67e-3fda-47e7-8dcb-794b1920d0f1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:47:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"24e9a67e-3fda-47e7-8dcb-794b1920d0f1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 13:10:45.992086    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:45.992209    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:45.992209    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:45.992209    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:45.996415    2404 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:10:45.996415    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:45.996906    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:45.996906    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:45 GMT
	I0318 13:10:45.996906    2404 round_trippers.go:580]     Audit-Id: f60b9948-446e-4afb-b35c-4993e1ddc35d
	I0318 13:10:45.996906    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:45.996906    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:45.996985    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:45.997094    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1877","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 13:10:45.997943    2404 pod_ready.go:102] pod "coredns-5dd5756b68-456tm" in "kube-system" namespace has status "Ready":"False"
	I0318 13:10:46.484463    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-456tm
	I0318 13:10:46.484463    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:46.484463    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:46.484463    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:46.488896    2404 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:10:46.489467    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:46.489467    2404 round_trippers.go:580]     Audit-Id: f31b74d7-f902-4f5e-af6d-98b2a04652dc
	I0318 13:10:46.489467    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:46.489467    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:46.489467    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:46.489467    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:46.489467    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:46 GMT
	I0318 13:10:46.489744    2404 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-456tm","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"1a018c55-846b-4dc2-992c-dc8fd82a6c67","resourceVersion":"1770","creationTimestamp":"2024-03-18T12:47:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"24e9a67e-3fda-47e7-8dcb-794b1920d0f1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:47:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"24e9a67e-3fda-47e7-8dcb-794b1920d0f1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 13:10:46.490568    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:46.490645    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:46.490645    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:46.490645    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:46.492965    2404 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 13:10:46.492965    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:46.492965    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:46.492965    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:46 GMT
	I0318 13:10:46.492965    2404 round_trippers.go:580]     Audit-Id: 7f05d0ee-1d05-4feb-9de9-d5b12a8611fd
	I0318 13:10:46.492965    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:46.492965    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:46.492965    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:46.492965    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1877","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 13:10:46.986417    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-456tm
	I0318 13:10:46.986681    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:46.986681    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:46.986681    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:46.990614    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:10:46.991427    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:46.991427    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:46 GMT
	I0318 13:10:46.991427    2404 round_trippers.go:580]     Audit-Id: 416f72d6-81cf-402d-b881-d4ede7dddc62
	I0318 13:10:46.991427    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:46.991427    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:46.991427    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:46.991427    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:46.991427    2404 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-456tm","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"1a018c55-846b-4dc2-992c-dc8fd82a6c67","resourceVersion":"1770","creationTimestamp":"2024-03-18T12:47:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"24e9a67e-3fda-47e7-8dcb-794b1920d0f1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:47:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"24e9a67e-3fda-47e7-8dcb-794b1920d0f1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 13:10:46.992299    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:46.992299    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:46.992299    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:46.992299    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:46.994873    2404 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 13:10:46.995640    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:46.995640    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:46 GMT
	I0318 13:10:46.995640    2404 round_trippers.go:580]     Audit-Id: dc744201-3f01-4ebe-8975-a08ad8de9d4f
	I0318 13:10:46.995640    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:46.995640    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:46.995640    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:46.995640    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:46.996010    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1877","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 13:10:47.487198    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-456tm
	I0318 13:10:47.487269    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:47.487269    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:47.487269    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:47.491137    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:10:47.491757    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:47.491757    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:47.491757    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:47.491757    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:47.491757    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:47.491757    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:47 GMT
	I0318 13:10:47.491757    2404 round_trippers.go:580]     Audit-Id: 41aa1644-7a32-4f69-a2d6-bca860ec8cd1
	I0318 13:10:47.491932    2404 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-456tm","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"1a018c55-846b-4dc2-992c-dc8fd82a6c67","resourceVersion":"1770","creationTimestamp":"2024-03-18T12:47:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"24e9a67e-3fda-47e7-8dcb-794b1920d0f1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:47:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"24e9a67e-3fda-47e7-8dcb-794b1920d0f1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 13:10:47.492807    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:47.492807    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:47.492807    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:47.492807    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:47.495928    2404 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 13:10:47.495928    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:47.496028    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:47 GMT
	I0318 13:10:47.496028    2404 round_trippers.go:580]     Audit-Id: cca93d4b-a8e2-4db1-9136-065e3d79ea7e
	I0318 13:10:47.496028    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:47.496028    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:47.496028    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:47.496028    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:47.496686    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1877","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 13:10:47.985416    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-456tm
	I0318 13:10:47.985416    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:47.985416    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:47.985416    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:47.989014    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:10:47.989014    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:47.989856    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:47.989856    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:47.989856    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:47.989856    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:47.989856    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:47 GMT
	I0318 13:10:47.989856    2404 round_trippers.go:580]     Audit-Id: 3bb56cbc-d9b3-4ca7-b7bc-fcd939b2a8c2
	I0318 13:10:47.990124    2404 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-456tm","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"1a018c55-846b-4dc2-992c-dc8fd82a6c67","resourceVersion":"1770","creationTimestamp":"2024-03-18T12:47:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"24e9a67e-3fda-47e7-8dcb-794b1920d0f1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:47:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"24e9a67e-3fda-47e7-8dcb-794b1920d0f1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 13:10:47.990825    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:47.990890    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:47.990890    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:47.990890    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:47.994211    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:10:47.994211    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:47.994211    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:47.994295    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:47 GMT
	I0318 13:10:47.994295    2404 round_trippers.go:580]     Audit-Id: f17d0033-23c7-4e7a-ae16-b1d1851ff366
	I0318 13:10:47.994295    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:47.994295    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:47.994295    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:47.994537    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1877","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 13:10:48.481733    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-456tm
	I0318 13:10:48.481733    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:48.481733    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:48.481733    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:48.486541    2404 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:10:48.486541    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:48.486541    2404 round_trippers.go:580]     Audit-Id: b9ba3fb0-eb92-4ab1-bc13-5ddbf697ffaa
	I0318 13:10:48.486541    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:48.486541    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:48.486541    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:48.486541    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:48.486541    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:48 GMT
	I0318 13:10:48.486815    2404 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-456tm","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"1a018c55-846b-4dc2-992c-dc8fd82a6c67","resourceVersion":"1770","creationTimestamp":"2024-03-18T12:47:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"24e9a67e-3fda-47e7-8dcb-794b1920d0f1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:47:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"24e9a67e-3fda-47e7-8dcb-794b1920d0f1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 13:10:48.488055    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:48.488055    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:48.488055    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:48.488055    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:48.490353    2404 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 13:10:48.490353    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:48.490353    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:48.490353    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:48.491127    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:48.491127    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:48.491127    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:48 GMT
	I0318 13:10:48.491127    2404 round_trippers.go:580]     Audit-Id: 8481c207-8c43-48d4-a212-612a02e4ff51
	I0318 13:10:48.491323    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1877","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 13:10:48.492171    2404 pod_ready.go:102] pod "coredns-5dd5756b68-456tm" in "kube-system" namespace has status "Ready":"False"
	I0318 13:10:48.981996    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-456tm
	I0318 13:10:48.982438    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:48.982438    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:48.982438    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:48.986900    2404 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:10:48.987157    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:48.987157    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:48.987157    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:48 GMT
	I0318 13:10:48.987157    2404 round_trippers.go:580]     Audit-Id: 4f166292-bae5-4a0f-b7b5-9aa0d9a00022
	I0318 13:10:48.987157    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:48.987157    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:48.987157    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:48.987475    2404 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-456tm","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"1a018c55-846b-4dc2-992c-dc8fd82a6c67","resourceVersion":"1770","creationTimestamp":"2024-03-18T12:47:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"24e9a67e-3fda-47e7-8dcb-794b1920d0f1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:47:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"24e9a67e-3fda-47e7-8dcb-794b1920d0f1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 13:10:48.988112    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:48.988112    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:48.988112    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:48.988112    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:48.991201    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:10:48.991201    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:48.991201    2404 round_trippers.go:580]     Audit-Id: f1a42f28-b083-41d5-a960-43df89aea8c5
	I0318 13:10:48.991201    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:48.991512    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:48.991512    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:48.991512    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:48.991512    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:48 GMT
	I0318 13:10:48.991703    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1877","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 13:10:49.479688    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-456tm
	I0318 13:10:49.479688    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:49.479688    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:49.479688    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:49.483372    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:10:49.483372    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:49.483372    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:49 GMT
	I0318 13:10:49.483372    2404 round_trippers.go:580]     Audit-Id: fdec251d-7a25-4252-b470-f54142626b0f
	I0318 13:10:49.483372    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:49.483585    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:49.483585    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:49.483585    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:49.483885    2404 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-456tm","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"1a018c55-846b-4dc2-992c-dc8fd82a6c67","resourceVersion":"1770","creationTimestamp":"2024-03-18T12:47:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"24e9a67e-3fda-47e7-8dcb-794b1920d0f1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:47:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"24e9a67e-3fda-47e7-8dcb-794b1920d0f1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 13:10:49.484684    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:49.484755    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:49.484755    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:49.484755    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:49.488241    2404 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 13:10:49.488276    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:49.488276    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:49.488276    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:49 GMT
	I0318 13:10:49.488276    2404 round_trippers.go:580]     Audit-Id: 7047bc27-1271-45d5-9026-2b455817166e
	I0318 13:10:49.488276    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:49.488276    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:49.488276    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:49.488276    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1877","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 13:10:49.979207    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-456tm
	I0318 13:10:49.979562    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:49.979562    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:49.979562    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:49.983370    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:10:49.983627    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:49.983627    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:49.983627    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:49.983627    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:49.983627    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:49.983627    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:49 GMT
	I0318 13:10:49.983627    2404 round_trippers.go:580]     Audit-Id: cdb06616-328c-4438-a770-f4ece0abf438
	I0318 13:10:49.983955    2404 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-456tm","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"1a018c55-846b-4dc2-992c-dc8fd82a6c67","resourceVersion":"1770","creationTimestamp":"2024-03-18T12:47:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"24e9a67e-3fda-47e7-8dcb-794b1920d0f1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:47:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"24e9a67e-3fda-47e7-8dcb-794b1920d0f1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 13:10:49.984636    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:49.984686    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:49.984686    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:49.984686    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:49.987381    2404 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 13:10:49.987381    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:49.987381    2404 round_trippers.go:580]     Audit-Id: a95a1922-591a-4847-9566-4a410e54c986
	I0318 13:10:49.987381    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:49.987381    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:49.987808    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:49.987808    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:49.987887    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:49 GMT
	I0318 13:10:49.988124    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1877","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 13:10:50.481174    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-456tm
	I0318 13:10:50.481174    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:50.481174    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:50.481174    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:50.487022    2404 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 13:10:50.487022    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:50.487022    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:50 GMT
	I0318 13:10:50.487022    2404 round_trippers.go:580]     Audit-Id: 9b33bae2-be39-46e5-aa6f-7983f0bee59e
	I0318 13:10:50.487022    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:50.487022    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:50.487022    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:50.487022    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:50.487022    2404 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-456tm","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"1a018c55-846b-4dc2-992c-dc8fd82a6c67","resourceVersion":"1770","creationTimestamp":"2024-03-18T12:47:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"24e9a67e-3fda-47e7-8dcb-794b1920d0f1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:47:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"24e9a67e-3fda-47e7-8dcb-794b1920d0f1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 13:10:50.487766    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:50.487766    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:50.487766    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:50.487766    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:50.491353    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:10:50.491353    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:50.491353    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:50.491560    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:50.491560    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:50.491592    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:50 GMT
	I0318 13:10:50.491592    2404 round_trippers.go:580]     Audit-Id: a119ba33-c6ac-4c12-acaa-dbdd26db0a9f
	I0318 13:10:50.491592    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:50.491823    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1877","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 13:10:50.492249    2404 pod_ready.go:102] pod "coredns-5dd5756b68-456tm" in "kube-system" namespace has status "Ready":"False"
	I0318 13:10:50.982977    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-456tm
	I0318 13:10:50.982977    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:50.982977    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:50.982977    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:50.988388    2404 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 13:10:50.988388    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:50.988388    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:50.988388    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:50.988388    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:50.988388    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:50 GMT
	I0318 13:10:50.988388    2404 round_trippers.go:580]     Audit-Id: abba9391-e84b-4b72-a1a2-217bf675347f
	I0318 13:10:50.988388    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:50.988388    2404 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-456tm","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"1a018c55-846b-4dc2-992c-dc8fd82a6c67","resourceVersion":"1770","creationTimestamp":"2024-03-18T12:47:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"24e9a67e-3fda-47e7-8dcb-794b1920d0f1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:47:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"24e9a67e-3fda-47e7-8dcb-794b1920d0f1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 13:10:50.989106    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:50.989106    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:50.989106    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:50.989106    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:50.993132    2404 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:10:50.993187    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:50.993187    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:50.993187    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:50.993187    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:50.993187    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:50 GMT
	I0318 13:10:50.993187    2404 round_trippers.go:580]     Audit-Id: c7dfc538-81dd-4929-a20d-ee5739ef3070
	I0318 13:10:50.993187    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:50.995138    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1877","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 13:10:51.482697    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-456tm
	I0318 13:10:51.482697    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:51.482697    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:51.482697    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:51.486369    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:10:51.486369    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:51.486369    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:51.486369    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:51.486369    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:51.486597    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:51.486597    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:51 GMT
	I0318 13:10:51.486597    2404 round_trippers.go:580]     Audit-Id: ea3ecfc4-3bcb-4fbd-95c1-a1d05d5335e9
	I0318 13:10:51.486754    2404 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-456tm","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"1a018c55-846b-4dc2-992c-dc8fd82a6c67","resourceVersion":"1770","creationTimestamp":"2024-03-18T12:47:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"24e9a67e-3fda-47e7-8dcb-794b1920d0f1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:47:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"24e9a67e-3fda-47e7-8dcb-794b1920d0f1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 13:10:51.487607    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:51.487607    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:51.487607    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:51.487607    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:51.490393    2404 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 13:10:51.490656    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:51.490656    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:51.490656    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:51.490656    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:51 GMT
	I0318 13:10:51.490656    2404 round_trippers.go:580]     Audit-Id: e02c38b6-f913-4469-8add-2f8b9c4541a4
	I0318 13:10:51.490656    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:51.490656    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:51.490929    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1877","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 13:10:51.984433    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-456tm
	I0318 13:10:51.984649    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:51.984649    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:51.984705    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:51.994543    2404 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0318 13:10:51.994543    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:51.994543    2404 round_trippers.go:580]     Audit-Id: d257e6ef-6f57-4978-b250-2d5661824f1b
	I0318 13:10:51.994543    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:51.994543    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:51.994543    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:51.994543    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:51.994543    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:51 GMT
	I0318 13:10:51.994543    2404 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-456tm","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"1a018c55-846b-4dc2-992c-dc8fd82a6c67","resourceVersion":"1770","creationTimestamp":"2024-03-18T12:47:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"24e9a67e-3fda-47e7-8dcb-794b1920d0f1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:47:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"24e9a67e-3fda-47e7-8dcb-794b1920d0f1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 13:10:51.995566    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:51.995566    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:51.995566    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:51.995566    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:51.999714    2404 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:10:51.999714    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:51.999714    2404 round_trippers.go:580]     Audit-Id: 05bfa3dc-9542-47a3-84d2-1131755602be
	I0318 13:10:51.999714    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:52.000175    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:52.000175    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:52.000175    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:52.000175    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:52 GMT
	I0318 13:10:52.000441    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1877","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 13:10:52.484646    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-456tm
	I0318 13:10:52.484896    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:52.484896    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:52.484993    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:52.490330    2404 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 13:10:52.490330    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:52.490330    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:52.490330    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:52.490330    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:52.490330    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:52.490330    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:52 GMT
	I0318 13:10:52.490330    2404 round_trippers.go:580]     Audit-Id: 7f957243-eb7e-430a-b18d-f247c4a5acf2
	I0318 13:10:52.490964    2404 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-456tm","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"1a018c55-846b-4dc2-992c-dc8fd82a6c67","resourceVersion":"1770","creationTimestamp":"2024-03-18T12:47:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"24e9a67e-3fda-47e7-8dcb-794b1920d0f1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:47:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"24e9a67e-3fda-47e7-8dcb-794b1920d0f1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 13:10:52.491555    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:52.491703    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:52.491703    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:52.491703    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:52.495462    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:10:52.495579    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:52.495579    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:52 GMT
	I0318 13:10:52.495579    2404 round_trippers.go:580]     Audit-Id: af1634a0-d0a8-4dfd-9db1-e9456c653acf
	I0318 13:10:52.495661    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:52.495661    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:52.495661    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:52.495686    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:52.495686    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1877","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 13:10:52.496355    2404 pod_ready.go:102] pod "coredns-5dd5756b68-456tm" in "kube-system" namespace has status "Ready":"False"
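The cycles above show minikube's readiness wait loop: roughly every 500 ms it GETs the coredns pod and the node, then `pod_ready.go` inspects the pod's `Ready` condition (still `"False"` here, so polling continues). A minimal stdlib-only sketch of that condition check is below; the field names follow the Kubernetes Pod schema visible in the response bodies, but `podReady` itself is a hypothetical helper, not minikube's actual implementation.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// podReady reports whether a Pod manifest (as returned by the
// /api/v1/namespaces/{ns}/pods/{name} endpoint polled in the log)
// carries a "Ready" condition with status "True". Only the fields
// needed for the check are decoded.
func podReady(manifest []byte) (bool, error) {
	var pod struct {
		Status struct {
			Conditions []struct {
				Type   string `json:"type"`
				Status string `json:"status"`
			} `json:"conditions"`
		} `json:"status"`
	}
	if err := json.Unmarshal(manifest, &pod); err != nil {
		return false, err
	}
	for _, c := range pod.Status.Conditions {
		if c.Type == "Ready" {
			return c.Status == "True", nil
		}
	}
	// No Ready condition yet: treat the pod as not ready.
	return false, nil
}

func main() {
	// Abbreviated stand-in for the coredns pod body in the log,
	// which reports Ready=False while the rollout is in progress.
	body := []byte(`{"kind":"Pod","status":{"conditions":[{"type":"Ready","status":"False"}]}}`)
	ready, err := podReady(body)
	fmt.Println(ready, err)
}
```

In the real loop this check gates a ticker: while `podReady` is false the client re-fetches the pod, which is why the same GETs repeat with new Audit-Ids until the deadline or a `Ready=True` transition.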
	I0318 13:10:52.986051    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-456tm
	I0318 13:10:52.986051    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:52.986104    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:52.986104    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:52.989713    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:10:52.990347    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:52.990347    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:52.990347    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:52 GMT
	I0318 13:10:52.990347    2404 round_trippers.go:580]     Audit-Id: 89e19e1b-adf5-4196-801e-2148719a02e1
	I0318 13:10:52.990347    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:52.990347    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:52.990347    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:52.990579    2404 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-456tm","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"1a018c55-846b-4dc2-992c-dc8fd82a6c67","resourceVersion":"1770","creationTimestamp":"2024-03-18T12:47:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"24e9a67e-3fda-47e7-8dcb-794b1920d0f1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:47:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"24e9a67e-3fda-47e7-8dcb-794b1920d0f1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 13:10:52.990828    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:52.990828    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:52.990828    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:52.990828    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:52.994740    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:10:52.994862    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:52.994862    2404 round_trippers.go:580]     Audit-Id: 187062d9-a47b-4785-94b6-93e4be889a5a
	I0318 13:10:52.994862    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:52.994862    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:52.994862    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:52.994862    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:52.994862    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:52 GMT
	I0318 13:10:52.995002    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1877","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 13:10:53.478323    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-456tm
	I0318 13:10:53.478570    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:53.478570    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:53.478570    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:53.482495    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:10:53.482495    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:53.482495    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:53 GMT
	I0318 13:10:53.482495    2404 round_trippers.go:580]     Audit-Id: 997a7652-bc70-408d-8da8-480a0ffb3358
	I0318 13:10:53.482495    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:53.482495    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:53.482495    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:53.482495    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:53.486573    2404 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-456tm","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"1a018c55-846b-4dc2-992c-dc8fd82a6c67","resourceVersion":"1770","creationTimestamp":"2024-03-18T12:47:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"24e9a67e-3fda-47e7-8dcb-794b1920d0f1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:47:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"24e9a67e-3fda-47e7-8dcb-794b1920d0f1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 13:10:53.487645    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:53.487645    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:53.487645    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:53.487645    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:53.494256    2404 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0318 13:10:53.494651    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:53.494651    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:53.494651    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:53.494651    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:53.494651    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:53 GMT
	I0318 13:10:53.494651    2404 round_trippers.go:580]     Audit-Id: b442c57d-7fd3-4000-892b-a0c43383a3ec
	I0318 13:10:53.494718    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:53.494997    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1877","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 13:10:53.988118    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-456tm
	I0318 13:10:53.988118    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:53.988118    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:53.988118    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:53.991757    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:10:53.991757    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:53.991757    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:53.991757    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:53 GMT
	I0318 13:10:53.991757    2404 round_trippers.go:580]     Audit-Id: 698af6ae-8cb7-41e0-88e5-39406633bffd
	I0318 13:10:53.991757    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:53.991757    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:53.991757    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:53.992858    2404 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-456tm","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"1a018c55-846b-4dc2-992c-dc8fd82a6c67","resourceVersion":"1770","creationTimestamp":"2024-03-18T12:47:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"24e9a67e-3fda-47e7-8dcb-794b1920d0f1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:47:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"24e9a67e-3fda-47e7-8dcb-794b1920d0f1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 13:10:53.993523    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:53.993523    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:53.993523    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:53.993523    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:54.000911    2404 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0318 13:10:54.000911    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:54.000911    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:54.000911    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:54.000911    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:54.000911    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:54 GMT
	I0318 13:10:54.000911    2404 round_trippers.go:580]     Audit-Id: b2f577a2-e667-4358-9037-072a26bcaf12
	I0318 13:10:54.000911    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:54.000911    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1877","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 13:10:54.475579    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-456tm
	I0318 13:10:54.475795    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:54.475795    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:54.475795    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:54.479835    2404 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:10:54.479835    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:54.480169    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:54.480169    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:54.480169    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:54.480169    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:54 GMT
	I0318 13:10:54.480169    2404 round_trippers.go:580]     Audit-Id: 63be72fc-f63d-4f4a-9c12-904bc32a0615
	I0318 13:10:54.480169    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:54.480393    2404 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-456tm","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"1a018c55-846b-4dc2-992c-dc8fd82a6c67","resourceVersion":"1913","creationTimestamp":"2024-03-18T12:47:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"24e9a67e-3fda-47e7-8dcb-794b1920d0f1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:47:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"24e9a67e-3fda-47e7-8dcb-794b1920d0f1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6723 chars]
	I0318 13:10:54.480688    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:54.480688    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:54.480688    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:54.480688    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:54.485927    2404 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 13:10:54.486104    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:54.486104    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:54.486104    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:54.486104    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:54.486104    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:54.486104    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:54 GMT
	I0318 13:10:54.486104    2404 round_trippers.go:580]     Audit-Id: 8de6fc85-d5a2-4c00-83e3-42e9438d5461
	I0318 13:10:54.486306    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1877","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 13:10:54.976168    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-456tm
	I0318 13:10:54.976168    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:54.976168    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:54.976168    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:54.980794    2404 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:10:54.980794    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:54.980794    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:54.980794    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:54.981481    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:54.981481    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:54 GMT
	I0318 13:10:54.981481    2404 round_trippers.go:580]     Audit-Id: 38f77a3c-9187-4702-97c9-02a4ab42e6cd
	I0318 13:10:54.981481    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:54.981844    2404 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-456tm","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"1a018c55-846b-4dc2-992c-dc8fd82a6c67","resourceVersion":"1913","creationTimestamp":"2024-03-18T12:47:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"24e9a67e-3fda-47e7-8dcb-794b1920d0f1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:47:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"24e9a67e-3fda-47e7-8dcb-794b1920d0f1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6723 chars]
	I0318 13:10:54.982707    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:54.982707    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:54.982707    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:54.982832    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:54.989214    2404 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0318 13:10:54.989214    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:54.989214    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:54.989214    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:54 GMT
	I0318 13:10:54.989362    2404 round_trippers.go:580]     Audit-Id: 923f939a-ad16-4dbd-959c-3ed563ff76b0
	I0318 13:10:54.989362    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:54.989362    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:54.989362    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:54.989497    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1877","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 13:10:54.990389    2404 pod_ready.go:102] pod "coredns-5dd5756b68-456tm" in "kube-system" namespace has status "Ready":"False"
	I0318 13:10:55.475525    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-456tm
	I0318 13:10:55.475637    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:55.475637    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:55.475637    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:55.479151    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:10:55.479243    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:55.479243    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:55.479243    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:55.479243    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:55 GMT
	I0318 13:10:55.479243    2404 round_trippers.go:580]     Audit-Id: f0271434-7a47-46b8-8b34-ca55bca9d828
	I0318 13:10:55.479243    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:55.479243    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:55.479456    2404 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-456tm","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"1a018c55-846b-4dc2-992c-dc8fd82a6c67","resourceVersion":"1918","creationTimestamp":"2024-03-18T12:47:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"24e9a67e-3fda-47e7-8dcb-794b1920d0f1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:47:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"24e9a67e-3fda-47e7-8dcb-794b1920d0f1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6494 chars]
	I0318 13:10:55.480196    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:55.480196    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:55.480196    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:55.480196    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:55.483516    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:10:55.483516    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:55.483516    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:55.483873    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:55.483873    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:55 GMT
	I0318 13:10:55.483873    2404 round_trippers.go:580]     Audit-Id: 27cbd7a5-2661-4105-a797-0cbce9578172
	I0318 13:10:55.483873    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:55.483873    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:55.484322    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1877","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 13:10:55.484631    2404 pod_ready.go:92] pod "coredns-5dd5756b68-456tm" in "kube-system" namespace has status "Ready":"True"
	I0318 13:10:55.484631    2404 pod_ready.go:81] duration metric: took 31.5125078s for pod "coredns-5dd5756b68-456tm" in "kube-system" namespace to be "Ready" ...
	I0318 13:10:55.484631    2404 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-894400" in "kube-system" namespace to be "Ready" ...
	I0318 13:10:55.484631    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-894400
	I0318 13:10:55.484631    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:55.484631    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:55.484631    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:55.488596    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:10:55.488596    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:55.488596    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:55.488596    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:55.488596    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:55 GMT
	I0318 13:10:55.488701    2404 round_trippers.go:580]     Audit-Id: 035fe736-4b3b-4d1f-9f00-557ffff7b876
	I0318 13:10:55.488701    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:55.488701    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:55.488907    2404 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-894400","namespace":"kube-system","uid":"d4c040b9-a604-4a0d-80ee-7436541af60c","resourceVersion":"1841","creationTimestamp":"2024-03-18T13:09:49Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.30.130.156:2379","kubernetes.io/config.hash":"743a549b698f93b8586a236f83c90556","kubernetes.io/config.mirror":"743a549b698f93b8586a236f83c90556","kubernetes.io/config.seen":"2024-03-18T13:09:42.924670260Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T13:09:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise
-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 5873 chars]
	I0318 13:10:55.488907    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:55.488907    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:55.488907    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:55.489450    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:55.492734    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:10:55.492944    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:55.492944    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:55.492998    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:55.492998    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:55.493023    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:55.493023    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:55 GMT
	I0318 13:10:55.493023    2404 round_trippers.go:580]     Audit-Id: b3ef88c7-cb0e-47b0-aad6-1f5ab1482be0
	I0318 13:10:55.493203    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1877","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 13:10:55.493203    2404 pod_ready.go:92] pod "etcd-multinode-894400" in "kube-system" namespace has status "Ready":"True"
	I0318 13:10:55.493203    2404 pod_ready.go:81] duration metric: took 8.572ms for pod "etcd-multinode-894400" in "kube-system" namespace to be "Ready" ...
	I0318 13:10:55.493203    2404 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-894400" in "kube-system" namespace to be "Ready" ...
	I0318 13:10:55.493203    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-894400
	I0318 13:10:55.493203    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:55.493203    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:55.493203    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:55.496151    2404 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 13:10:55.497236    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:55.497236    2404 round_trippers.go:580]     Audit-Id: 9665dc58-22f8-40ff-bc3d-9b3f5ec364c3
	I0318 13:10:55.497236    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:55.497236    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:55.497275    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:55.497275    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:55.497275    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:55 GMT
	I0318 13:10:55.497482    2404 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-894400","namespace":"kube-system","uid":"46152b8e-0bda-427e-a1ad-c79506b56763","resourceVersion":"1812","creationTimestamp":"2024-03-18T13:09:49Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.30.130.156:8443","kubernetes.io/config.hash":"6096c2227c4230453f65f86ebdcd0d95","kubernetes.io/config.mirror":"6096c2227c4230453f65f86ebdcd0d95","kubernetes.io/config.seen":"2024-03-18T13:09:42.869643374Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T13:09:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.k
ubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernete [truncated 7409 chars]
	I0318 13:10:55.497856    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:55.497856    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:55.497856    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:55.497856    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:55.500418    2404 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 13:10:55.500418    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:55.500700    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:55.500700    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:55.500700    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:55 GMT
	I0318 13:10:55.500700    2404 round_trippers.go:580]     Audit-Id: 5a8d3715-1101-4bd7-87f4-ac52c03c1af4
	I0318 13:10:55.500700    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:55.500700    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:55.500943    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1877","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 13:10:55.501505    2404 pod_ready.go:92] pod "kube-apiserver-multinode-894400" in "kube-system" namespace has status "Ready":"True"
	I0318 13:10:55.501505    2404 pod_ready.go:81] duration metric: took 8.3024ms for pod "kube-apiserver-multinode-894400" in "kube-system" namespace to be "Ready" ...
	I0318 13:10:55.501505    2404 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-894400" in "kube-system" namespace to be "Ready" ...
	I0318 13:10:55.501505    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-894400
	I0318 13:10:55.501505    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:55.501505    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:55.501505    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:55.506127    2404 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:10:55.506173    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:55.506173    2404 round_trippers.go:580]     Audit-Id: 54dff443-4397-4b3b-acaa-a35b854ff957
	I0318 13:10:55.506173    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:55.506173    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:55.506217    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:55.506217    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:55.506217    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:55 GMT
	I0318 13:10:55.506446    2404 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-894400","namespace":"kube-system","uid":"4ad5fc15-53ba-4ebb-9a63-b8572cd9c834","resourceVersion":"1813","creationTimestamp":"2024-03-18T12:47:26Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"d340aced56ba169ecac1e3ac58ad57fe","kubernetes.io/config.mirror":"d340aced56ba169ecac1e3ac58ad57fe","kubernetes.io/config.seen":"2024-03-18T12:47:20.228444892Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:47:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7179 chars]
	I0318 13:10:55.507248    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:55.507307    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:55.507307    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:55.507349    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:55.510797    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:10:55.510797    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:55.510797    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:55.510797    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:55 GMT
	I0318 13:10:55.510797    2404 round_trippers.go:580]     Audit-Id: 7ed0993a-1b7d-48dd-a0a8-d499a98730dc
	I0318 13:10:55.510797    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:55.510797    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:55.510797    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:55.511478    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1877","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 13:10:55.512114    2404 pod_ready.go:92] pod "kube-controller-manager-multinode-894400" in "kube-system" namespace has status "Ready":"True"
	I0318 13:10:55.512177    2404 pod_ready.go:81] duration metric: took 10.6717ms for pod "kube-controller-manager-multinode-894400" in "kube-system" namespace to be "Ready" ...
	I0318 13:10:55.512210    2404 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-745w9" in "kube-system" namespace to be "Ready" ...
	I0318 13:10:55.512238    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/namespaces/kube-system/pods/kube-proxy-745w9
	I0318 13:10:55.512238    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:55.512238    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:55.512238    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:55.515601    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:10:55.515601    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:55.515601    2404 round_trippers.go:580]     Audit-Id: 33982c52-830a-48f8-b259-ec4be231cafb
	I0318 13:10:55.515601    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:55.515601    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:55.515601    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:55.515601    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:55.515601    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:55 GMT
	I0318 13:10:55.515601    2404 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-745w9","generateName":"kube-proxy-","namespace":"kube-system","uid":"d385fe06-f516-440d-b9ed-37c2d4a81050","resourceVersion":"1698","creationTimestamp":"2024-03-18T12:55:05Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"da714af0-88aa-4fc4-b73d-4de837f06114","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:55:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"da714af0-88aa-4fc4-b73d-4de837f06114\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5771 chars]
	I0318 13:10:55.516501    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400-m03
	I0318 13:10:55.516501    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:55.516501    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:55.516501    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:55.519262    2404 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 13:10:55.519876    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:55.519876    2404 round_trippers.go:580]     Audit-Id: 8eb906a0-0bfd-48c7-b6fd-39f5fb7c7362
	I0318 13:10:55.519876    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:55.519876    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:55.519876    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:55.519876    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:55.519876    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:55 GMT
	I0318 13:10:55.520009    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m03","uid":"1f8e594e-d4cc-4247-8064-01ac67ea2b15","resourceVersion":"1855","creationTimestamp":"2024-03-18T13:05:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T13_05_26_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T13:05:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4400 chars]
	I0318 13:10:55.520009    2404 pod_ready.go:97] node "multinode-894400-m03" hosting pod "kube-proxy-745w9" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-894400-m03" has status "Ready":"Unknown"
	I0318 13:10:55.520009    2404 pod_ready.go:81] duration metric: took 7.7716ms for pod "kube-proxy-745w9" in "kube-system" namespace to be "Ready" ...
	E0318 13:10:55.520009    2404 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-894400-m03" hosting pod "kube-proxy-745w9" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-894400-m03" has status "Ready":"Unknown"
	I0318 13:10:55.520009    2404 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-8bdmn" in "kube-system" namespace to be "Ready" ...
	I0318 13:10:55.677453    2404 request.go:629] Waited for 157.2836ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.130.156:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8bdmn
	I0318 13:10:55.677705    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8bdmn
	I0318 13:10:55.677737    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:55.677737    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:55.677737    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:55.680460    2404 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 13:10:55.680460    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:55.680460    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:55.680460    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:55.680460    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:55 GMT
	I0318 13:10:55.680460    2404 round_trippers.go:580]     Audit-Id: 9814cc38-74d3-4255-8324-fe159f9842aa
	I0318 13:10:55.680460    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:55.680460    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:55.681408    2404 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-8bdmn","generateName":"kube-proxy-","namespace":"kube-system","uid":"5c266b8a-9665-4365-93c6-2b5f1699d3ef","resourceVersion":"1899","creationTimestamp":"2024-03-18T12:50:34Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"da714af0-88aa-4fc4-b73d-4de837f06114","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:50:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"da714af0-88aa-4fc4-b73d-4de837f06114\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5767 chars]
	I0318 13:10:55.881230    2404 request.go:629] Waited for 198.9733ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.130.156:8443/api/v1/nodes/multinode-894400-m02
	I0318 13:10:55.881442    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400-m02
	I0318 13:10:55.881442    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:55.881442    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:55.881442    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:55.884293    2404 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 13:10:55.884293    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:55.885170    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:55.885170    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:55 GMT
	I0318 13:10:55.885170    2404 round_trippers.go:580]     Audit-Id: ea12ee62-e05f-4e03-a0ce-94fe1273c42f
	I0318 13:10:55.885170    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:55.885170    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:55.885170    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:55.885367    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"714c6de4-85c9-4721-aea6-ad8f2be4e14c","resourceVersion":"1905","creationTimestamp":"2024-03-18T12:50:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T12_50_35_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:50:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4582 chars]
	I0318 13:10:55.886102    2404 pod_ready.go:97] node "multinode-894400-m02" hosting pod "kube-proxy-8bdmn" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-894400-m02" has status "Ready":"Unknown"
	I0318 13:10:55.886186    2404 pod_ready.go:81] duration metric: took 366.1743ms for pod "kube-proxy-8bdmn" in "kube-system" namespace to be "Ready" ...
	E0318 13:10:55.886186    2404 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-894400-m02" hosting pod "kube-proxy-8bdmn" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-894400-m02" has status "Ready":"Unknown"
	I0318 13:10:55.886186    2404 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-mc5tv" in "kube-system" namespace to be "Ready" ...
	I0318 13:10:56.083612    2404 request.go:629] Waited for 197.3426ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.130.156:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mc5tv
	I0318 13:10:56.083973    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mc5tv
	I0318 13:10:56.083973    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:56.083973    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:56.083973    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:56.087825    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:10:56.087825    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:56.087825    2404 round_trippers.go:580]     Audit-Id: 36f65f48-eb19-4188-88d0-c583c4085517
	I0318 13:10:56.087825    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:56.087825    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:56.088213    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:56.088213    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:56.088213    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:56 GMT
	I0318 13:10:56.088387    2404 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-mc5tv","generateName":"kube-proxy-","namespace":"kube-system","uid":"0afe25f8-cbd6-412b-8698-7b547d1d49ca","resourceVersion":"1799","creationTimestamp":"2024-03-18T12:47:41Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"da714af0-88aa-4fc4-b73d-4de837f06114","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:47:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"da714af0-88aa-4fc4-b73d-4de837f06114\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5743 chars]
	I0318 13:10:56.276477    2404 request.go:629] Waited for 187.7155ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:56.276732    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:56.276732    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:56.276732    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:56.276732    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:56.280198    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:10:56.280198    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:56.280198    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:56.280198    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:56.280198    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:56.280198    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:56 GMT
	I0318 13:10:56.280198    2404 round_trippers.go:580]     Audit-Id: 8f1f28af-ac20-43b5-9adc-8dacaa1b5a8e
	I0318 13:10:56.280198    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:56.280812    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1877","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 13:10:56.281520    2404 pod_ready.go:92] pod "kube-proxy-mc5tv" in "kube-system" namespace has status "Ready":"True"
	I0318 13:10:56.281520    2404 pod_ready.go:81] duration metric: took 395.2489ms for pod "kube-proxy-mc5tv" in "kube-system" namespace to be "Ready" ...
	I0318 13:10:56.281520    2404 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-894400" in "kube-system" namespace to be "Ready" ...
	I0318 13:10:56.479429    2404 request.go:629] Waited for 197.5811ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.130.156:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-894400
	I0318 13:10:56.479761    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-894400
	I0318 13:10:56.479761    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:56.479761    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:56.479761    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:56.484093    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:10:56.484093    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:56.484093    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:56 GMT
	I0318 13:10:56.484093    2404 round_trippers.go:580]     Audit-Id: e3ec9787-0852-4fcd-b441-fb6669f87e1a
	I0318 13:10:56.484093    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:56.484231    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:56.484231    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:56.484231    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:56.484271    2404 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-894400","namespace":"kube-system","uid":"f47703ce-5a82-466e-ac8e-ef6b8cc07e6c","resourceVersion":"1822","creationTimestamp":"2024-03-18T12:47:28Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"1c745e9b917877b1ff3c90ed02e9a79a","kubernetes.io/config.mirror":"1c745e9b917877b1ff3c90ed02e9a79a","kubernetes.io/config.seen":"2024-03-18T12:47:28.428225123Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:47:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 4909 chars]
	I0318 13:10:56.680621    2404 request.go:629] Waited for 195.4882ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:56.680621    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:56.680621    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:56.680621    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:56.680621    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:56.684043    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:10:56.684043    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:56.684043    2404 round_trippers.go:580]     Audit-Id: 1f703bdc-e807-4c16-8593-c2616586cddc
	I0318 13:10:56.684043    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:56.684043    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:56.684043    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:56.684747    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:56.684747    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:56 GMT
	I0318 13:10:56.684963    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1877","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 13:10:56.685557    2404 pod_ready.go:92] pod "kube-scheduler-multinode-894400" in "kube-system" namespace has status "Ready":"True"
	I0318 13:10:56.685557    2404 pod_ready.go:81] duration metric: took 404.0343ms for pod "kube-scheduler-multinode-894400" in "kube-system" namespace to be "Ready" ...
	I0318 13:10:56.685557    2404 pod_ready.go:38] duration metric: took 32.7242792s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 13:10:56.685692    2404 api_server.go:52] waiting for apiserver process to appear ...
	I0318 13:10:56.694984    2404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 13:10:56.719135    2404 command_runner.go:130] > fc4430c7fa20
	I0318 13:10:56.720281    2404 logs.go:276] 1 containers: [fc4430c7fa20]
	I0318 13:10:56.730238    2404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 13:10:56.757930    2404 command_runner.go:130] > 5f0887d1e691
	I0318 13:10:56.758866    2404 logs.go:276] 1 containers: [5f0887d1e691]
	I0318 13:10:56.766876    2404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 13:10:56.791210    2404 command_runner.go:130] > 3c3bc988c74c
	I0318 13:10:56.791210    2404 command_runner.go:130] > 693a64f7472f
	I0318 13:10:56.791284    2404 logs.go:276] 2 containers: [3c3bc988c74c 693a64f7472f]
	I0318 13:10:56.800400    2404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 13:10:56.824934    2404 command_runner.go:130] > 66ee8be9fada
	I0318 13:10:56.824934    2404 command_runner.go:130] > e4d42739ce0e
	I0318 13:10:56.824934    2404 logs.go:276] 2 containers: [66ee8be9fada e4d42739ce0e]
	I0318 13:10:56.837855    2404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 13:10:56.858407    2404 command_runner.go:130] > 163ccabc3882
	I0318 13:10:56.858407    2404 command_runner.go:130] > 9335855aab63
	I0318 13:10:56.858407    2404 logs.go:276] 2 containers: [163ccabc3882 9335855aab63]
	I0318 13:10:56.866409    2404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 13:10:56.888433    2404 command_runner.go:130] > 4ad6784a187d
	I0318 13:10:56.888433    2404 command_runner.go:130] > 7aa5cf4ec378
	I0318 13:10:56.888433    2404 logs.go:276] 2 containers: [4ad6784a187d 7aa5cf4ec378]
	I0318 13:10:56.896413    2404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 13:10:56.919614    2404 command_runner.go:130] > c8e5ec25e910
	I0318 13:10:56.919614    2404 command_runner.go:130] > c4d7018ad23a
	I0318 13:10:56.919784    2404 logs.go:276] 2 containers: [c8e5ec25e910 c4d7018ad23a]
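The container-discovery step above runs `docker ps -a --filter=name=... --format={{.ID}}` per component and collects the resulting IDs ("2 containers: [3c3bc988c74c 693a64f7472f]"). A minimal sketch of turning that raw stdout into an ID slice; `parseContainerIDs` is an illustrative helper, not the source's actual function:

```go
package main

import (
	"fmt"
	"strings"
)

// parseContainerIDs turns the stdout of
// `docker ps -a --filter=name=... --format={{.ID}}` (one ID per line,
// as in the command_runner "> <id>" lines above) into a slice,
// skipping blanks such as the trailing newline.
func parseContainerIDs(out string) []string {
	var ids []string
	for _, line := range strings.Split(out, "\n") {
		if s := strings.TrimSpace(line); s != "" {
			ids = append(ids, s)
		}
	}
	return ids
}

func main() {
	out := "3c3bc988c74c\n693a64f7472f\n"
	fmt.Println(parseContainerIDs(out)) // [3c3bc988c74c 693a64f7472f]
}
```

Filtering on `name=k8s_<component>` works because dockershim/cri-dockerd names Kubernetes-managed containers with a `k8s_` prefix followed by the container and pod names.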
	I0318 13:10:56.919784    2404 logs.go:123] Gathering logs for etcd [5f0887d1e691] ...
	I0318 13:10:56.919848    2404 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f0887d1e691"
	I0318 13:10:56.946988    2404 command_runner.go:130] ! {"level":"warn","ts":"2024-03-18T13:09:44.778754Z","caller":"embed/config.go:673","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I0318 13:10:56.947270    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:44.779618Z","caller":"etcdmain/etcd.go:73","msg":"Running: ","args":["etcd","--advertise-client-urls=https://172.30.130.156:2379","--cert-file=/var/lib/minikube/certs/etcd/server.crt","--client-cert-auth=true","--data-dir=/var/lib/minikube/etcd","--experimental-initial-corrupt-check=true","--experimental-watch-progress-notify-interval=5s","--initial-advertise-peer-urls=https://172.30.130.156:2380","--initial-cluster=multinode-894400=https://172.30.130.156:2380","--key-file=/var/lib/minikube/certs/etcd/server.key","--listen-client-urls=https://127.0.0.1:2379,https://172.30.130.156:2379","--listen-metrics-urls=http://127.0.0.1:2381","--listen-peer-urls=https://172.30.130.156:2380","--name=multinode-894400","--peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt","--peer-client-cert-auth=true","--peer-key-file=/var/lib/minikube/certs/etcd/peer.key","--peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","-
-proxy-refresh-interval=70000","--snapshot-count=10000","--trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt"]}
	I0318 13:10:56.947270    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:44.780287Z","caller":"etcdmain/etcd.go:116","msg":"server has been already initialized","data-dir":"/var/lib/minikube/etcd","dir-type":"member"}
	I0318 13:10:56.947378    2404 command_runner.go:130] ! {"level":"warn","ts":"2024-03-18T13:09:44.780316Z","caller":"embed/config.go:673","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I0318 13:10:56.947378    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:44.780326Z","caller":"embed/etcd.go:127","msg":"configuring peer listeners","listen-peer-urls":["https://172.30.130.156:2380"]}
	I0318 13:10:56.947460    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:44.780518Z","caller":"embed/etcd.go:495","msg":"starting with peer TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I0318 13:10:56.947460    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:44.782775Z","caller":"embed/etcd.go:135","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://172.30.130.156:2379"]}
	I0318 13:10:56.947563    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:44.785511Z","caller":"embed/etcd.go:309","msg":"starting an etcd server","etcd-version":"3.5.9","git-sha":"bdbbde998","go-version":"go1.19.9","go-os":"linux","go-arch":"amd64","max-cpu-set":2,"max-cpu-available":2,"member-initialized":true,"name":"multinode-894400","data-dir":"/var/lib/minikube/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/minikube/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://172.30.130.156:2380"],"listen-peer-urls":["https://172.30.130.156:2380"],"advertise-client-urls":["https://172.30.130.156:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.30.130.156:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"cors":["*"],"host-whitelist":["*"],"init
ial-cluster":"","initial-cluster-state":"new","initial-cluster-token":"","quota-backend-bytes":2147483648,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"initial-corrupt-check":true,"corrupt-check-time-interval":"0s","compact-check-time-enabled":false,"compact-check-time-interval":"1m0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","downgrade-check-interval":"5s"}
	I0318 13:10:56.947563    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:44.809621Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"22.951578ms"}
	I0318 13:10:56.947563    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:44.849189Z","caller":"etcdserver/server.go:530","msg":"No snapshot found. Recovering WAL from scratch!"}
	I0318 13:10:56.947563    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:44.872854Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"2db881e830cc2153","local-member-id":"c2557cd98fa8d31a","commit-index":1981}
	I0318 13:10:56.947563    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:44.87358Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c2557cd98fa8d31a switched to configuration voters=()"}
	I0318 13:10:56.947563    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:44.873736Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c2557cd98fa8d31a became follower at term 2"}
	I0318 13:10:56.947563    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:44.873929Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft c2557cd98fa8d31a [peers: [], term: 2, commit: 1981, applied: 0, lastindex: 1981, lastterm: 2]"}
	I0318 13:10:56.947563    2404 command_runner.go:130] ! {"level":"warn","ts":"2024-03-18T13:09:44.887865Z","caller":"auth/store.go:1238","msg":"simple token is not cryptographically signed"}
	I0318 13:10:56.947563    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:44.892732Z","caller":"mvcc/kvstore.go:323","msg":"restored last compact revision","meta-bucket-name":"meta","meta-bucket-name-key":"finishedCompactRev","restored-compact-revision":1376}
	I0318 13:10:56.947563    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:44.89955Z","caller":"mvcc/kvstore.go:393","msg":"kvstore restored","current-rev":1715}
	I0318 13:10:56.947563    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:44.914592Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	I0318 13:10:56.947563    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:44.926835Z","caller":"etcdserver/corrupt.go:95","msg":"starting initial corruption check","local-member-id":"c2557cd98fa8d31a","timeout":"7s"}
	I0318 13:10:56.947563    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:44.928545Z","caller":"etcdserver/corrupt.go:165","msg":"initial corruption checking passed; no corruption","local-member-id":"c2557cd98fa8d31a"}
	I0318 13:10:56.947563    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:44.930225Z","caller":"etcdserver/server.go:854","msg":"starting etcd server","local-member-id":"c2557cd98fa8d31a","local-server-version":"3.5.9","cluster-version":"to_be_decided"}
	I0318 13:10:56.947563    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:44.930859Z","caller":"etcdserver/server.go:754","msg":"starting initial election tick advance","election-ticks":10}
	I0318 13:10:56.947563    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:44.931762Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c2557cd98fa8d31a switched to configuration voters=(14003235890238378778)"}
	I0318 13:10:56.947563    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:44.932128Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"2db881e830cc2153","local-member-id":"c2557cd98fa8d31a","added-peer-id":"c2557cd98fa8d31a","added-peer-peer-urls":["https://172.30.129.141:2380"]}
	I0318 13:10:56.947563    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:44.933388Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"2db881e830cc2153","local-member-id":"c2557cd98fa8d31a","cluster-version":"3.5"}
	I0318 13:10:56.947563    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:44.933717Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	I0318 13:10:56.948095    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:44.946226Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	I0318 13:10:56.948149    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:44.947818Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	I0318 13:10:56.948149    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:44.948803Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	I0318 13:10:56.948287    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:44.954567Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I0318 13:10:56.948419    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:44.954988Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"c2557cd98fa8d31a","initial-advertise-peer-urls":["https://172.30.130.156:2380"],"listen-peer-urls":["https://172.30.130.156:2380"],"advertise-client-urls":["https://172.30.130.156:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.30.130.156:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	I0318 13:10:56.948515    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:44.955173Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	I0318 13:10:56.948515    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:44.954599Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"172.30.130.156:2380"}
	I0318 13:10:56.948577    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:44.956126Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"172.30.130.156:2380"}
	I0318 13:10:56.948577    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:46.775466Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c2557cd98fa8d31a is starting a new election at term 2"}
	I0318 13:10:56.948632    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:46.775581Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c2557cd98fa8d31a became pre-candidate at term 2"}
	I0318 13:10:56.948658    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:46.775704Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c2557cd98fa8d31a received MsgPreVoteResp from c2557cd98fa8d31a at term 2"}
	I0318 13:10:56.948688    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:46.775731Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c2557cd98fa8d31a became candidate at term 3"}
	I0318 13:10:56.948688    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:46.77574Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c2557cd98fa8d31a received MsgVoteResp from c2557cd98fa8d31a at term 3"}
	I0318 13:10:56.948688    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:46.775752Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c2557cd98fa8d31a became leader at term 3"}
	I0318 13:10:56.948688    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:46.775764Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: c2557cd98fa8d31a elected leader c2557cd98fa8d31a at term 3"}
	I0318 13:10:56.948688    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:46.782683Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"c2557cd98fa8d31a","local-member-attributes":"{Name:multinode-894400 ClientURLs:[https://172.30.130.156:2379]}","request-path":"/0/members/c2557cd98fa8d31a/attributes","cluster-id":"2db881e830cc2153","publish-timeout":"7s"}
	I0318 13:10:56.948688    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:46.78269Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	I0318 13:10:56.948688    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:46.782706Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	I0318 13:10:56.948688    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:46.783976Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.30.130.156:2379"}
	I0318 13:10:56.948688    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:46.783993Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	I0318 13:10:56.948688    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:46.788664Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	I0318 13:10:56.948688    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:46.788817Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	I0318 13:10:56.955141    2404 logs.go:123] Gathering logs for kube-proxy [163ccabc3882] ...
	I0318 13:10:56.955141    2404 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 163ccabc3882"
	I0318 13:10:56.977436    2404 command_runner.go:130] ! I0318 13:09:50.786718       1 server_others.go:69] "Using iptables proxy"
	I0318 13:10:56.977584    2404 command_runner.go:130] ! I0318 13:09:50.833991       1 node.go:141] Successfully retrieved node IP: 172.30.130.156
	I0318 13:10:56.977584    2404 command_runner.go:130] ! I0318 13:09:50.913665       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0318 13:10:56.977584    2404 command_runner.go:130] ! I0318 13:09:50.913704       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0318 13:10:56.977584    2404 command_runner.go:130] ! I0318 13:09:50.924640       1 server_others.go:152] "Using iptables Proxier"
	I0318 13:10:56.977650    2404 command_runner.go:130] ! I0318 13:09:50.925588       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0318 13:10:56.977650    2404 command_runner.go:130] ! I0318 13:09:50.926722       1 server.go:846] "Version info" version="v1.28.4"
	I0318 13:10:56.977650    2404 command_runner.go:130] ! I0318 13:09:50.926981       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 13:10:56.977650    2404 command_runner.go:130] ! I0318 13:09:50.938764       1 config.go:188] "Starting service config controller"
	I0318 13:10:56.977650    2404 command_runner.go:130] ! I0318 13:09:50.949206       1 config.go:97] "Starting endpoint slice config controller"
	I0318 13:10:56.977650    2404 command_runner.go:130] ! I0318 13:09:50.949220       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0318 13:10:56.977650    2404 command_runner.go:130] ! I0318 13:09:50.953299       1 config.go:315] "Starting node config controller"
	I0318 13:10:56.977650    2404 command_runner.go:130] ! I0318 13:09:50.979020       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0318 13:10:56.977650    2404 command_runner.go:130] ! I0318 13:09:50.990249       1 shared_informer.go:318] Caches are synced for node config
	I0318 13:10:56.977650    2404 command_runner.go:130] ! I0318 13:09:50.958488       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0318 13:10:56.977650    2404 command_runner.go:130] ! I0318 13:09:50.996356       1 shared_informer.go:318] Caches are synced for service config
	I0318 13:10:56.977650    2404 command_runner.go:130] ! I0318 13:09:51.051947       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0318 13:10:56.980209    2404 logs.go:123] Gathering logs for kindnet [c4d7018ad23a] ...
	I0318 13:10:56.980209    2404 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4d7018ad23a"
	I0318 13:10:57.018151    2404 command_runner.go:130] ! I0318 12:56:20.031595       1 main.go:227] handling current node
	I0318 13:10:57.018151    2404 command_runner.go:130] ! I0318 12:56:20.031610       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:10:57.018151    2404 command_runner.go:130] ! I0318 12:56:20.031618       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:10:57.018151    2404 command_runner.go:130] ! I0318 12:56:20.031800       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:10:57.018151    2404 command_runner.go:130] ! I0318 12:56:20.031837       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:10:57.018151    2404 command_runner.go:130] ! I0318 12:56:30.038705       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:10:57.018151    2404 command_runner.go:130] ! I0318 12:56:30.038812       1 main.go:227] handling current node
	I0318 13:10:57.018151    2404 command_runner.go:130] ! I0318 12:56:30.038826       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:10:57.018151    2404 command_runner.go:130] ! I0318 12:56:30.038833       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:10:57.018151    2404 command_runner.go:130] ! I0318 12:56:30.039027       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:10:57.018151    2404 command_runner.go:130] ! I0318 12:56:30.039347       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:10:57.018151    2404 command_runner.go:130] ! I0318 12:56:40.051950       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:10:57.018151    2404 command_runner.go:130] ! I0318 12:56:40.052053       1 main.go:227] handling current node
	I0318 13:10:57.018151    2404 command_runner.go:130] ! I0318 12:56:40.052086       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:10:57.018151    2404 command_runner.go:130] ! I0318 12:56:40.052204       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:10:57.018151    2404 command_runner.go:130] ! I0318 12:56:40.052568       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:10:57.018151    2404 command_runner.go:130] ! I0318 12:56:40.052681       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:10:57.018151    2404 command_runner.go:130] ! I0318 12:56:50.074059       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:10:57.018151    2404 command_runner.go:130] ! I0318 12:56:50.074164       1 main.go:227] handling current node
	I0318 13:10:57.018151    2404 command_runner.go:130] ! I0318 12:56:50.074183       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:10:57.018151    2404 command_runner.go:130] ! I0318 12:56:50.074192       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:10:57.018151    2404 command_runner.go:130] ! I0318 12:56:50.075009       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:10:57.018151    2404 command_runner.go:130] ! I0318 12:56:50.075306       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:10:57.018151    2404 command_runner.go:130] ! I0318 12:57:00.089286       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:10:57.018151    2404 command_runner.go:130] ! I0318 12:57:00.089382       1 main.go:227] handling current node
	I0318 13:10:57.018151    2404 command_runner.go:130] ! I0318 12:57:00.089397       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:10:57.018151    2404 command_runner.go:130] ! I0318 12:57:00.089405       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:10:57.018151    2404 command_runner.go:130] ! I0318 12:57:00.089918       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:10:57.018151    2404 command_runner.go:130] ! I0318 12:57:00.089934       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:10:57.018151    2404 command_runner.go:130] ! I0318 12:57:10.103457       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:10:57.018151    2404 command_runner.go:130] ! I0318 12:57:10.103575       1 main.go:227] handling current node
	I0318 13:10:57.018151    2404 command_runner.go:130] ! I0318 12:57:10.103607       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:10:57.018151    2404 command_runner.go:130] ! I0318 12:57:10.103704       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:10:57.018151    2404 command_runner.go:130] ! I0318 12:57:10.104106       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:10:57.018151    2404 command_runner.go:130] ! I0318 12:57:10.104144       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:10:57.018151    2404 command_runner.go:130] ! I0318 12:57:20.111225       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:10:57.018151    2404 command_runner.go:130] ! I0318 12:57:20.111346       1 main.go:227] handling current node
	I0318 13:10:57.018151    2404 command_runner.go:130] ! I0318 12:57:20.111360       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:10:57.018151    2404 command_runner.go:130] ! I0318 12:57:20.111367       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:10:57.018151    2404 command_runner.go:130] ! I0318 12:57:20.111695       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:10:57.018151    2404 command_runner.go:130] ! I0318 12:57:20.111775       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:10:57.018151    2404 command_runner.go:130] ! I0318 12:57:30.124283       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:10:57.018151    2404 command_runner.go:130] ! I0318 12:57:30.124477       1 main.go:227] handling current node
	I0318 13:10:57.018151    2404 command_runner.go:130] ! I0318 12:57:30.124495       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:10:57.018151    2404 command_runner.go:130] ! I0318 12:57:30.124505       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:10:57.018151    2404 command_runner.go:130] ! I0318 12:57:30.125279       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:10:57.019230    2404 command_runner.go:130] ! I0318 12:57:30.125393       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:10:57.019230    2404 command_runner.go:130] ! I0318 12:57:40.137523       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:10:57.019230    2404 command_runner.go:130] ! I0318 12:57:40.137766       1 main.go:227] handling current node
	I0318 13:10:57.019230    2404 command_runner.go:130] ! I0318 12:57:40.137807       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:10:57.019230    2404 command_runner.go:130] ! I0318 12:57:40.137833       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:10:57.019230    2404 command_runner.go:130] ! I0318 12:57:40.137998       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:10:57.019230    2404 command_runner.go:130] ! I0318 12:57:40.138087       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:10:57.019230    2404 command_runner.go:130] ! I0318 12:57:50.149548       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:10:57.019230    2404 command_runner.go:130] ! I0318 12:57:50.149697       1 main.go:227] handling current node
	I0318 13:10:57.019230    2404 command_runner.go:130] ! I0318 12:57:50.149712       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:10:57.019230    2404 command_runner.go:130] ! I0318 12:57:50.149720       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:10:57.019230    2404 command_runner.go:130] ! I0318 12:57:50.150251       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:10:57.019230    2404 command_runner.go:130] ! I0318 12:57:50.150344       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:10:57.019230    2404 command_runner.go:130] ! I0318 12:58:00.159094       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:10:57.019230    2404 command_runner.go:130] ! I0318 12:58:00.159284       1 main.go:227] handling current node
	I0318 13:10:57.019230    2404 command_runner.go:130] ! I0318 12:58:00.159340       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:10:57.019230    2404 command_runner.go:130] ! I0318 12:58:00.159700       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:10:57.019230    2404 command_runner.go:130] ! I0318 12:58:00.160303       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:10:57.019230    2404 command_runner.go:130] ! I0318 12:58:00.160346       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:10:57.019230    2404 command_runner.go:130] ! I0318 12:58:10.177603       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:10:57.019230    2404 command_runner.go:130] ! I0318 12:58:10.177780       1 main.go:227] handling current node
	I0318 13:10:57.019230    2404 command_runner.go:130] ! I0318 12:58:10.178122       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:10:57.019230    2404 command_runner.go:130] ! I0318 12:58:10.178166       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:10:57.019230    2404 command_runner.go:130] ! I0318 12:58:10.178455       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:10:57.019230    2404 command_runner.go:130] ! I0318 12:58:10.178497       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:10:57.019230    2404 command_runner.go:130] ! I0318 12:58:20.196110       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:10:57.019230    2404 command_runner.go:130] ! I0318 12:58:20.196144       1 main.go:227] handling current node
	I0318 13:10:57.019230    2404 command_runner.go:130] ! I0318 12:58:20.196236       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:10:57.019230    2404 command_runner.go:130] ! I0318 12:58:20.196542       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:10:57.019230    2404 command_runner.go:130] ! I0318 12:58:20.196774       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:10:57.019230    2404 command_runner.go:130] ! I0318 12:58:20.196867       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:10:57.019230    2404 command_runner.go:130] ! I0318 12:58:30.204485       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:10:57.019230    2404 command_runner.go:130] ! I0318 12:58:30.204515       1 main.go:227] handling current node
	I0318 13:10:57.019230    2404 command_runner.go:130] ! I0318 12:58:30.204528       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:10:57.019230    2404 command_runner.go:130] ! I0318 12:58:30.204556       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:10:57.019230    2404 command_runner.go:130] ! I0318 12:58:30.204856       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:10:57.019230    2404 command_runner.go:130] ! I0318 12:58:30.205022       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:10:57.019230    2404 command_runner.go:130] ! I0318 12:58:40.221076       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:10:57.019230    2404 command_runner.go:130] ! I0318 12:58:40.221184       1 main.go:227] handling current node
	I0318 13:10:57.019230    2404 command_runner.go:130] ! I0318 12:58:40.221201       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:10:57.019230    2404 command_runner.go:130] ! I0318 12:58:40.221210       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:10:57.019230    2404 command_runner.go:130] ! I0318 12:58:40.221741       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:10:57.019230    2404 command_runner.go:130] ! I0318 12:58:40.221769       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:10:57.019230    2404 command_runner.go:130] ! I0318 12:58:50.229210       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:10:57.019230    2404 command_runner.go:130] ! I0318 12:58:50.229302       1 main.go:227] handling current node
	I0318 13:10:57.019230    2404 command_runner.go:130] ! I0318 12:58:50.229317       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:10:57.019230    2404 command_runner.go:130] ! I0318 12:58:50.229324       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:10:57.019230    2404 command_runner.go:130] ! I0318 12:58:50.229703       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:10:57.019230    2404 command_runner.go:130] ! I0318 12:58:50.229807       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:10:57.019230    2404 command_runner.go:130] ! I0318 12:59:00.244905       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:10:57.019230    2404 command_runner.go:130] ! I0318 12:59:00.244992       1 main.go:227] handling current node
	I0318 13:10:57.019230    2404 command_runner.go:130] ! I0318 12:59:00.245007       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:10:57.019230    2404 command_runner.go:130] ! I0318 12:59:00.245033       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:10:57.019230    2404 command_runner.go:130] ! I0318 12:59:00.245480       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:10:57.019230    2404 command_runner.go:130] ! I0318 12:59:00.245600       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:10:57.019230    2404 command_runner.go:130] ! I0318 12:59:10.253460       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:10:57.019230    2404 command_runner.go:130] ! I0318 12:59:10.253563       1 main.go:227] handling current node
	I0318 13:10:57.019230    2404 command_runner.go:130] ! I0318 12:59:10.253579       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:10:57.019230    2404 command_runner.go:130] ! I0318 12:59:10.253605       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:10:57.019230    2404 command_runner.go:130] ! I0318 12:59:10.254199       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:10:57.020163    2404 command_runner.go:130] ! I0318 12:59:10.254310       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:10:57.020163    2404 command_runner.go:130] ! I0318 12:59:20.270774       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:10:57.020163    2404 command_runner.go:130] ! I0318 12:59:20.270870       1 main.go:227] handling current node
	I0318 13:10:57.020163    2404 command_runner.go:130] ! I0318 12:59:20.270886       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:10:57.020163    2404 command_runner.go:130] ! I0318 12:59:20.270894       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:10:57.020163    2404 command_runner.go:130] ! I0318 12:59:20.271275       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:10:57.020163    2404 command_runner.go:130] ! I0318 12:59:20.271367       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:10:57.020163    2404 command_runner.go:130] ! I0318 12:59:30.281784       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:10:57.020163    2404 command_runner.go:130] ! I0318 12:59:30.281809       1 main.go:227] handling current node
	I0318 13:10:57.020163    2404 command_runner.go:130] ! I0318 12:59:30.281819       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:10:57.020163    2404 command_runner.go:130] ! I0318 12:59:30.281824       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:10:57.020163    2404 command_runner.go:130] ! I0318 12:59:30.282361       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:10:57.020163    2404 command_runner.go:130] ! I0318 12:59:30.282392       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:10:57.020163    2404 command_runner.go:130] ! I0318 12:59:40.291176       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:10:57.020163    2404 command_runner.go:130] ! I0318 12:59:40.291304       1 main.go:227] handling current node
	I0318 13:10:57.020163    2404 command_runner.go:130] ! I0318 12:59:40.291321       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:10:57.020163    2404 command_runner.go:130] ! I0318 12:59:40.291328       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:10:57.020163    2404 command_runner.go:130] ! I0318 12:59:40.291827       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:10:57.020163    2404 command_runner.go:130] ! I0318 12:59:40.291857       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:10:57.020163    2404 command_runner.go:130] ! I0318 12:59:50.303374       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:10:57.020163    2404 command_runner.go:130] ! I0318 12:59:50.303454       1 main.go:227] handling current node
	I0318 13:10:57.020163    2404 command_runner.go:130] ! I0318 12:59:50.303468       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:10:57.020163    2404 command_runner.go:130] ! I0318 12:59:50.303476       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:10:57.020163    2404 command_runner.go:130] ! I0318 12:59:50.303974       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:10:57.020163    2404 command_runner.go:130] ! I0318 12:59:50.304002       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:10:57.020163    2404 command_runner.go:130] ! I0318 13:00:00.311317       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:10:57.020163    2404 command_runner.go:130] ! I0318 13:00:00.311423       1 main.go:227] handling current node
	I0318 13:10:57.020163    2404 command_runner.go:130] ! I0318 13:00:00.311441       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:10:57.020163    2404 command_runner.go:130] ! I0318 13:00:00.311449       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:10:57.020163    2404 command_runner.go:130] ! I0318 13:00:00.312039       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:10:57.020163    2404 command_runner.go:130] ! I0318 13:00:00.312135       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:10:57.020163    2404 command_runner.go:130] ! I0318 13:00:10.324823       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:10:57.020163    2404 command_runner.go:130] ! I0318 13:00:10.324902       1 main.go:227] handling current node
	I0318 13:10:57.020163    2404 command_runner.go:130] ! I0318 13:00:10.324915       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:10:57.020163    2404 command_runner.go:130] ! I0318 13:00:10.324926       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:10:57.020163    2404 command_runner.go:130] ! I0318 13:00:10.325084       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:10:57.020163    2404 command_runner.go:130] ! I0318 13:00:10.325108       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:10:57.020163    2404 command_runner.go:130] ! I0318 13:00:20.338195       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:10:57.020163    2404 command_runner.go:130] ! I0318 13:00:20.338297       1 main.go:227] handling current node
	I0318 13:10:57.020163    2404 command_runner.go:130] ! I0318 13:00:20.338312       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:10:57.020163    2404 command_runner.go:130] ! I0318 13:00:20.338320       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:10:57.020163    2404 command_runner.go:130] ! I0318 13:00:20.338525       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:10:57.020163    2404 command_runner.go:130] ! I0318 13:00:20.338601       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:10:57.020163    2404 command_runner.go:130] ! I0318 13:00:30.345095       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:10:57.020163    2404 command_runner.go:130] ! I0318 13:00:30.345184       1 main.go:227] handling current node
	I0318 13:10:57.020163    2404 command_runner.go:130] ! I0318 13:00:30.345198       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:10:57.020163    2404 command_runner.go:130] ! I0318 13:00:30.345205       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:10:57.020163    2404 command_runner.go:130] ! I0318 13:00:30.346074       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:10:57.020163    2404 command_runner.go:130] ! I0318 13:00:30.346194       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:10:57.020163    2404 command_runner.go:130] ! I0318 13:00:40.357007       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:10:57.020163    2404 command_runner.go:130] ! I0318 13:00:40.357386       1 main.go:227] handling current node
	I0318 13:10:57.020163    2404 command_runner.go:130] ! I0318 13:00:40.357485       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:10:57.020163    2404 command_runner.go:130] ! I0318 13:00:40.357513       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:10:57.020163    2404 command_runner.go:130] ! I0318 13:00:40.357737       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:10:57.020163    2404 command_runner.go:130] ! I0318 13:00:40.357766       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:10:57.020163    2404 command_runner.go:130] ! I0318 13:00:50.372182       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:10:57.020163    2404 command_runner.go:130] ! I0318 13:00:50.372221       1 main.go:227] handling current node
	I0318 13:10:57.020163    2404 command_runner.go:130] ! I0318 13:00:50.372235       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:10:57.020163    2404 command_runner.go:130] ! I0318 13:00:50.372242       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:10:57.020163    2404 command_runner.go:130] ! I0318 13:00:50.372608       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:10:57.020163    2404 command_runner.go:130] ! I0318 13:00:50.372772       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:10:57.020163    2404 command_runner.go:130] ! I0318 13:01:00.386990       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:10:57.020163    2404 command_runner.go:130] ! I0318 13:01:00.387036       1 main.go:227] handling current node
	I0318 13:10:57.020163    2404 command_runner.go:130] ! I0318 13:01:00.387050       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:10:57.020163    2404 command_runner.go:130] ! I0318 13:01:00.387058       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:10:57.020163    2404 command_runner.go:130] ! I0318 13:01:00.387182       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:10:57.020163    2404 command_runner.go:130] ! I0318 13:01:00.387191       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:10:57.020163    2404 command_runner.go:130] ! I0318 13:01:10.396889       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:10:57.021159    2404 command_runner.go:130] ! I0318 13:01:10.396930       1 main.go:227] handling current node
	I0318 13:10:57.021159    2404 command_runner.go:130] ! I0318 13:01:10.396942       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:10:57.021159    2404 command_runner.go:130] ! I0318 13:01:10.396948       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:10:57.021159    2404 command_runner.go:130] ! I0318 13:01:10.397250       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:10:57.021159    2404 command_runner.go:130] ! I0318 13:01:10.397343       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:10:57.021159    2404 command_runner.go:130] ! I0318 13:01:20.413272       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:10:57.021159    2404 command_runner.go:130] ! I0318 13:01:20.413371       1 main.go:227] handling current node
	I0318 13:10:57.021159    2404 command_runner.go:130] ! I0318 13:01:20.413386       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:10:57.021159    2404 command_runner.go:130] ! I0318 13:01:20.413395       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:10:57.021159    2404 command_runner.go:130] ! I0318 13:01:20.413968       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:10:57.021159    2404 command_runner.go:130] ! I0318 13:01:20.413999       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:10:57.021159    2404 command_runner.go:130] ! I0318 13:01:30.429160       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:10:57.021159    2404 command_runner.go:130] ! I0318 13:01:30.429478       1 main.go:227] handling current node
	I0318 13:10:57.021159    2404 command_runner.go:130] ! I0318 13:01:30.429549       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:10:57.021159    2404 command_runner.go:130] ! I0318 13:01:30.429678       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:10:57.021159    2404 command_runner.go:130] ! I0318 13:01:30.429960       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:10:57.021159    2404 command_runner.go:130] ! I0318 13:01:30.430034       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:10:57.021159    2404 command_runner.go:130] ! I0318 13:01:40.436733       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:10:57.021159    2404 command_runner.go:130] ! I0318 13:01:40.436839       1 main.go:227] handling current node
	I0318 13:10:57.021159    2404 command_runner.go:130] ! I0318 13:01:40.436901       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:10:57.021159    2404 command_runner.go:130] ! I0318 13:01:40.436930       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:10:57.021159    2404 command_runner.go:130] ! I0318 13:01:40.437399       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:10:57.021159    2404 command_runner.go:130] ! I0318 13:01:40.437431       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:10:57.021159    2404 command_runner.go:130] ! I0318 13:01:50.451622       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:10:57.021159    2404 command_runner.go:130] ! I0318 13:01:50.451802       1 main.go:227] handling current node
	I0318 13:10:57.021159    2404 command_runner.go:130] ! I0318 13:01:50.451849       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:10:57.021159    2404 command_runner.go:130] ! I0318 13:01:50.451860       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:10:57.021159    2404 command_runner.go:130] ! I0318 13:01:50.452021       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:10:57.021159    2404 command_runner.go:130] ! I0318 13:01:50.452171       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:10:57.021159    2404 command_runner.go:130] ! I0318 13:02:00.460452       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:10:57.021159    2404 command_runner.go:130] ! I0318 13:02:00.460548       1 main.go:227] handling current node
	I0318 13:10:57.021159    2404 command_runner.go:130] ! I0318 13:02:00.460563       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:10:57.021159    2404 command_runner.go:130] ! I0318 13:02:00.460571       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:10:57.021159    2404 command_runner.go:130] ! I0318 13:02:00.461181       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:10:57.021159    2404 command_runner.go:130] ! I0318 13:02:00.461333       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:10:57.021159    2404 command_runner.go:130] ! I0318 13:02:10.474274       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:10:57.021159    2404 command_runner.go:130] ! I0318 13:02:10.474396       1 main.go:227] handling current node
	I0318 13:10:57.021159    2404 command_runner.go:130] ! I0318 13:02:10.474427       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:10:57.021159    2404 command_runner.go:130] ! I0318 13:02:10.474436       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:10:57.021159    2404 command_runner.go:130] ! I0318 13:02:10.475019       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:10:57.021159    2404 command_runner.go:130] ! I0318 13:02:10.475159       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:10:57.021159    2404 command_runner.go:130] ! I0318 13:02:20.489442       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:10:57.021159    2404 command_runner.go:130] ! I0318 13:02:20.489616       1 main.go:227] handling current node
	I0318 13:10:57.021159    2404 command_runner.go:130] ! I0318 13:02:20.489699       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:10:57.021159    2404 command_runner.go:130] ! I0318 13:02:20.489752       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:10:57.021159    2404 command_runner.go:130] ! I0318 13:02:20.490046       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:10:57.021159    2404 command_runner.go:130] ! I0318 13:02:20.490082       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:10:57.021159    2404 command_runner.go:130] ! I0318 13:02:30.497474       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:10:57.021159    2404 command_runner.go:130] ! I0318 13:02:30.497574       1 main.go:227] handling current node
	I0318 13:10:57.021159    2404 command_runner.go:130] ! I0318 13:02:30.497589       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:10:57.021159    2404 command_runner.go:130] ! I0318 13:02:30.497597       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:10:57.021159    2404 command_runner.go:130] ! I0318 13:02:30.498279       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:10:57.021159    2404 command_runner.go:130] ! I0318 13:02:30.498361       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:10:57.021159    2404 command_runner.go:130] ! I0318 13:02:40.512026       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:10:57.021159    2404 command_runner.go:130] ! I0318 13:02:40.512345       1 main.go:227] handling current node
	I0318 13:10:57.021159    2404 command_runner.go:130] ! I0318 13:02:40.512385       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:10:57.021159    2404 command_runner.go:130] ! I0318 13:02:40.512477       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:10:57.021159    2404 command_runner.go:130] ! I0318 13:02:40.512786       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:10:57.021159    2404 command_runner.go:130] ! I0318 13:02:40.512873       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:10:57.021159    2404 command_runner.go:130] ! I0318 13:02:50.520110       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:10:57.021159    2404 command_runner.go:130] ! I0318 13:02:50.520239       1 main.go:227] handling current node
	I0318 13:10:57.021159    2404 command_runner.go:130] ! I0318 13:02:50.520254       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:10:57.021159    2404 command_runner.go:130] ! I0318 13:02:50.520263       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:10:57.021159    2404 command_runner.go:130] ! I0318 13:02:50.520784       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:10:57.021159    2404 command_runner.go:130] ! I0318 13:02:50.520861       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:10:57.021159    2404 command_runner.go:130] ! I0318 13:03:00.531866       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:10:57.021159    2404 command_runner.go:130] ! I0318 13:03:00.531958       1 main.go:227] handling current node
	I0318 13:10:57.021159    2404 command_runner.go:130] ! I0318 13:03:00.531972       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:10:57.022148    2404 command_runner.go:130] ! I0318 13:03:00.531979       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:10:57.022148    2404 command_runner.go:130] ! I0318 13:03:00.532211       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:10:57.022148    2404 command_runner.go:130] ! I0318 13:03:00.532293       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:10:57.022148    2404 command_runner.go:130] ! I0318 13:03:10.543869       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:10:57.022148    2404 command_runner.go:130] ! I0318 13:03:10.543913       1 main.go:227] handling current node
	I0318 13:10:57.022148    2404 command_runner.go:130] ! I0318 13:03:10.543926       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:10:57.022148    2404 command_runner.go:130] ! I0318 13:03:10.543933       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:10:57.022148    2404 command_runner.go:130] ! I0318 13:03:10.544294       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:10:57.022148    2404 command_runner.go:130] ! I0318 13:03:10.544430       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:10:57.022148    2404 command_runner.go:130] ! I0318 13:03:20.558742       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:10:57.022148    2404 command_runner.go:130] ! I0318 13:03:20.558782       1 main.go:227] handling current node
	I0318 13:10:57.022148    2404 command_runner.go:130] ! I0318 13:03:20.558795       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:10:57.022148    2404 command_runner.go:130] ! I0318 13:03:20.558802       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:10:57.022148    2404 command_runner.go:130] ! I0318 13:03:20.558992       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:10:57.022148    2404 command_runner.go:130] ! I0318 13:03:20.559009       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:10:57.022148    2404 command_runner.go:130] ! I0318 13:03:30.568771       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:10:57.022148    2404 command_runner.go:130] ! I0318 13:03:30.568872       1 main.go:227] handling current node
	I0318 13:10:57.022148    2404 command_runner.go:130] ! I0318 13:03:30.568905       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:10:57.022148    2404 command_runner.go:130] ! I0318 13:03:30.568996       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:10:57.022148    2404 command_runner.go:130] ! I0318 13:03:30.569367       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:10:57.022148    2404 command_runner.go:130] ! I0318 13:03:30.569450       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:10:57.022148    2404 command_runner.go:130] ! I0318 13:03:40.587554       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:10:57.022148    2404 command_runner.go:130] ! I0318 13:03:40.587674       1 main.go:227] handling current node
	I0318 13:10:57.022148    2404 command_runner.go:130] ! I0318 13:03:40.588337       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:10:57.022148    2404 command_runner.go:130] ! I0318 13:03:40.588356       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:10:57.022148    2404 command_runner.go:130] ! I0318 13:03:40.588758       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:10:57.022148    2404 command_runner.go:130] ! I0318 13:03:40.588836       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:10:57.022148    2404 command_runner.go:130] ! I0318 13:03:50.596331       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:10:57.022148    2404 command_runner.go:130] ! I0318 13:03:50.596438       1 main.go:227] handling current node
	I0318 13:10:57.022148    2404 command_runner.go:130] ! I0318 13:03:50.596453       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:10:57.022148    2404 command_runner.go:130] ! I0318 13:03:50.596462       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:10:57.022148    2404 command_runner.go:130] ! I0318 13:03:50.596942       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:10:57.022148    2404 command_runner.go:130] ! I0318 13:03:50.597079       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:10:57.022148    2404 command_runner.go:130] ! I0318 13:04:00.611242       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:10:57.022148    2404 command_runner.go:130] ! I0318 13:04:00.611383       1 main.go:227] handling current node
	I0318 13:10:57.022148    2404 command_runner.go:130] ! I0318 13:04:00.611397       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:10:57.022148    2404 command_runner.go:130] ! I0318 13:04:00.611405       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:10:57.022148    2404 command_runner.go:130] ! I0318 13:04:00.611541       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:10:57.022148    2404 command_runner.go:130] ! I0318 13:04:00.611572       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:10:57.022148    2404 command_runner.go:130] ! I0318 13:04:10.624814       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:10:57.022148    2404 command_runner.go:130] ! I0318 13:04:10.624904       1 main.go:227] handling current node
	I0318 13:10:57.022148    2404 command_runner.go:130] ! I0318 13:04:10.624920       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:10:57.022148    2404 command_runner.go:130] ! I0318 13:04:10.624927       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:10:57.022148    2404 command_runner.go:130] ! I0318 13:04:10.625504       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:10:57.022148    2404 command_runner.go:130] ! I0318 13:04:10.625547       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:10:57.022148    2404 command_runner.go:130] ! I0318 13:04:20.640319       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:10:57.022148    2404 command_runner.go:130] ! I0318 13:04:20.640364       1 main.go:227] handling current node
	I0318 13:10:57.022148    2404 command_runner.go:130] ! I0318 13:04:20.640379       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:10:57.022148    2404 command_runner.go:130] ! I0318 13:04:20.640386       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:10:57.022148    2404 command_runner.go:130] ! I0318 13:04:20.640865       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:10:57.022148    2404 command_runner.go:130] ! I0318 13:04:20.640901       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:10:57.022148    2404 command_runner.go:130] ! I0318 13:04:30.648021       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:10:57.022148    2404 command_runner.go:130] ! I0318 13:04:30.648134       1 main.go:227] handling current node
	I0318 13:10:57.022148    2404 command_runner.go:130] ! I0318 13:04:30.648148       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:10:57.022148    2404 command_runner.go:130] ! I0318 13:04:30.648156       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:10:57.022148    2404 command_runner.go:130] ! I0318 13:04:30.648313       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:10:57.022148    2404 command_runner.go:130] ! I0318 13:04:30.648344       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:10:57.022148    2404 command_runner.go:130] ! I0318 13:04:40.663577       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:10:57.022148    2404 command_runner.go:130] ! I0318 13:04:40.663749       1 main.go:227] handling current node
	I0318 13:10:57.022148    2404 command_runner.go:130] ! I0318 13:04:40.663765       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:10:57.022148    2404 command_runner.go:130] ! I0318 13:04:40.663774       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:10:57.022148    2404 command_runner.go:130] ! I0318 13:04:40.663896       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:10:57.022148    2404 command_runner.go:130] ! I0318 13:04:40.663929       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:10:57.022148    2404 command_runner.go:130] ! I0318 13:04:50.669717       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:10:57.023146    2404 command_runner.go:130] ! I0318 13:04:50.669791       1 main.go:227] handling current node
	I0318 13:10:57.023146    2404 command_runner.go:130] ! I0318 13:04:50.669805       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:10:57.023146    2404 command_runner.go:130] ! I0318 13:04:50.669812       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:10:57.023146    2404 command_runner.go:130] ! I0318 13:04:50.670128       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:10:57.023146    2404 command_runner.go:130] ! I0318 13:04:50.670230       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:10:57.023146    2404 command_runner.go:130] ! I0318 13:05:00.686596       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:10:57.023146    2404 command_runner.go:130] ! I0318 13:05:00.686809       1 main.go:227] handling current node
	I0318 13:10:57.023146    2404 command_runner.go:130] ! I0318 13:05:00.686942       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:10:57.023146    2404 command_runner.go:130] ! I0318 13:05:00.687116       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:10:57.023146    2404 command_runner.go:130] ! I0318 13:05:00.687370       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:10:57.023146    2404 command_runner.go:130] ! I0318 13:05:00.687441       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:10:57.023146    2404 command_runner.go:130] ! I0318 13:05:10.704297       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:10:57.023146    2404 command_runner.go:130] ! I0318 13:05:10.704404       1 main.go:227] handling current node
	I0318 13:10:57.023146    2404 command_runner.go:130] ! I0318 13:05:10.704426       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:10:57.023146    2404 command_runner.go:130] ! I0318 13:05:10.704555       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:10:57.023146    2404 command_runner.go:130] ! I0318 13:05:10.704810       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:10:57.023146    2404 command_runner.go:130] ! I0318 13:05:10.704878       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:10:57.023146    2404 command_runner.go:130] ! I0318 13:05:20.722958       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:10:57.023146    2404 command_runner.go:130] ! I0318 13:05:20.723127       1 main.go:227] handling current node
	I0318 13:10:57.023146    2404 command_runner.go:130] ! I0318 13:05:20.723145       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:10:57.023146    2404 command_runner.go:130] ! I0318 13:05:20.723159       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:10:57.023146    2404 command_runner.go:130] ! I0318 13:05:30.731764       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:10:57.023146    2404 command_runner.go:130] ! I0318 13:05:30.731841       1 main.go:227] handling current node
	I0318 13:10:57.023146    2404 command_runner.go:130] ! I0318 13:05:30.731854       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:10:57.023146    2404 command_runner.go:130] ! I0318 13:05:30.731861       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:10:57.023146    2404 command_runner.go:130] ! I0318 13:05:30.732029       1 main.go:223] Handling node with IPs: map[172.30.137.140:{}]
	I0318 13:10:57.023146    2404 command_runner.go:130] ! I0318 13:05:30.732163       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.3.0/24] 
	I0318 13:10:57.023146    2404 command_runner.go:130] ! I0318 13:05:30.732544       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 172.30.137.140 Flags: [] Table: 0} 
	I0318 13:10:57.023146    2404 command_runner.go:130] ! I0318 13:05:40.739849       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:10:57.023146    2404 command_runner.go:130] ! I0318 13:05:40.739939       1 main.go:227] handling current node
	I0318 13:10:57.023146    2404 command_runner.go:130] ! I0318 13:05:40.739953       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:10:57.023146    2404 command_runner.go:130] ! I0318 13:05:40.739960       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:10:57.023146    2404 command_runner.go:130] ! I0318 13:05:40.740081       1 main.go:223] Handling node with IPs: map[172.30.137.140:{}]
	I0318 13:10:57.023146    2404 command_runner.go:130] ! I0318 13:05:40.740151       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.3.0/24] 
	I0318 13:10:57.023146    2404 command_runner.go:130] ! I0318 13:05:50.748036       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:10:57.023146    2404 command_runner.go:130] ! I0318 13:05:50.748465       1 main.go:227] handling current node
	I0318 13:10:57.023146    2404 command_runner.go:130] ! I0318 13:05:50.748942       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:10:57.023146    2404 command_runner.go:130] ! I0318 13:05:50.749055       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:10:57.023146    2404 command_runner.go:130] ! I0318 13:05:50.749287       1 main.go:223] Handling node with IPs: map[172.30.137.140:{}]
	I0318 13:10:57.023146    2404 command_runner.go:130] ! I0318 13:05:50.749413       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.3.0/24] 
	I0318 13:10:57.023146    2404 command_runner.go:130] ! I0318 13:06:00.757350       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:10:57.023146    2404 command_runner.go:130] ! I0318 13:06:00.757434       1 main.go:227] handling current node
	I0318 13:10:57.023146    2404 command_runner.go:130] ! I0318 13:06:00.757452       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:10:57.023146    2404 command_runner.go:130] ! I0318 13:06:00.757460       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:10:57.023146    2404 command_runner.go:130] ! I0318 13:06:00.757853       1 main.go:223] Handling node with IPs: map[172.30.137.140:{}]
	I0318 13:10:57.023146    2404 command_runner.go:130] ! I0318 13:06:00.758194       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.3.0/24] 
	I0318 13:10:57.023146    2404 command_runner.go:130] ! I0318 13:06:10.766768       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:10:57.023146    2404 command_runner.go:130] ! I0318 13:06:10.766886       1 main.go:227] handling current node
	I0318 13:10:57.023146    2404 command_runner.go:130] ! I0318 13:06:10.766901       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:10:57.023146    2404 command_runner.go:130] ! I0318 13:06:10.766910       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:10:57.023146    2404 command_runner.go:130] ! I0318 13:06:10.767143       1 main.go:223] Handling node with IPs: map[172.30.137.140:{}]
	I0318 13:10:57.023146    2404 command_runner.go:130] ! I0318 13:06:10.767175       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.3.0/24] 
	I0318 13:10:57.023146    2404 command_runner.go:130] ! I0318 13:06:20.773530       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:10:57.023146    2404 command_runner.go:130] ! I0318 13:06:20.773656       1 main.go:227] handling current node
	I0318 13:10:57.023146    2404 command_runner.go:130] ! I0318 13:06:20.773729       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:10:57.023146    2404 command_runner.go:130] ! I0318 13:06:20.773741       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:10:57.023146    2404 command_runner.go:130] ! I0318 13:06:20.774155       1 main.go:223] Handling node with IPs: map[172.30.137.140:{}]
	I0318 13:10:57.023146    2404 command_runner.go:130] ! I0318 13:06:20.774478       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.3.0/24] 
	I0318 13:10:57.023146    2404 command_runner.go:130] ! I0318 13:06:30.792219       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:10:57.023146    2404 command_runner.go:130] ! I0318 13:06:30.792349       1 main.go:227] handling current node
	I0318 13:10:57.023146    2404 command_runner.go:130] ! I0318 13:06:30.792364       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:10:57.023146    2404 command_runner.go:130] ! I0318 13:06:30.792373       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:10:57.023146    2404 command_runner.go:130] ! I0318 13:06:30.792864       1 main.go:223] Handling node with IPs: map[172.30.137.140:{}]
	I0318 13:10:57.023146    2404 command_runner.go:130] ! I0318 13:06:30.792901       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.3.0/24] 
	I0318 13:10:57.023146    2404 command_runner.go:130] ! I0318 13:06:40.809219       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:10:57.023146    2404 command_runner.go:130] ! I0318 13:06:40.809451       1 main.go:227] handling current node
	I0318 13:10:57.023146    2404 command_runner.go:130] ! I0318 13:06:40.809484       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:10:57.023146    2404 command_runner.go:130] ! I0318 13:06:40.809508       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:10:57.024143    2404 command_runner.go:130] ! I0318 13:06:40.809841       1 main.go:223] Handling node with IPs: map[172.30.137.140:{}]
	I0318 13:10:57.024143    2404 command_runner.go:130] ! I0318 13:06:40.810075       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.3.0/24] 
	I0318 13:10:57.024143    2404 command_runner.go:130] ! I0318 13:06:50.822556       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:10:57.024143    2404 command_runner.go:130] ! I0318 13:06:50.822612       1 main.go:227] handling current node
	I0318 13:10:57.024143    2404 command_runner.go:130] ! I0318 13:06:50.822667       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:10:57.024143    2404 command_runner.go:130] ! I0318 13:06:50.822680       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:10:57.024143    2404 command_runner.go:130] ! I0318 13:06:50.822925       1 main.go:223] Handling node with IPs: map[172.30.137.140:{}]
	I0318 13:10:57.024143    2404 command_runner.go:130] ! I0318 13:06:50.823171       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.3.0/24] 
	I0318 13:10:57.024143    2404 command_runner.go:130] ! I0318 13:07:00.837923       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:10:57.024143    2404 command_runner.go:130] ! I0318 13:07:00.838008       1 main.go:227] handling current node
	I0318 13:10:57.024143    2404 command_runner.go:130] ! I0318 13:07:00.838022       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:10:57.024143    2404 command_runner.go:130] ! I0318 13:07:00.838030       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:10:57.024143    2404 command_runner.go:130] ! I0318 13:07:00.838429       1 main.go:223] Handling node with IPs: map[172.30.137.140:{}]
	I0318 13:10:57.024143    2404 command_runner.go:130] ! I0318 13:07:00.838666       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.3.0/24] 
	I0318 13:10:57.024143    2404 command_runner.go:130] ! I0318 13:07:10.854207       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:10:57.024143    2404 command_runner.go:130] ! I0318 13:07:10.854411       1 main.go:227] handling current node
	I0318 13:10:57.024143    2404 command_runner.go:130] ! I0318 13:07:10.854444       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:10:57.024143    2404 command_runner.go:130] ! I0318 13:07:10.854469       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:10:57.024143    2404 command_runner.go:130] ! I0318 13:07:10.854879       1 main.go:223] Handling node with IPs: map[172.30.137.140:{}]
	I0318 13:10:57.024143    2404 command_runner.go:130] ! I0318 13:07:10.855094       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.3.0/24] 
	I0318 13:10:57.024143    2404 command_runner.go:130] ! I0318 13:07:20.861534       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:10:57.024143    2404 command_runner.go:130] ! I0318 13:07:20.861671       1 main.go:227] handling current node
	I0318 13:10:57.024143    2404 command_runner.go:130] ! I0318 13:07:20.861685       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:10:57.024143    2404 command_runner.go:130] ! I0318 13:07:20.861692       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:10:57.024143    2404 command_runner.go:130] ! I0318 13:07:20.861818       1 main.go:223] Handling node with IPs: map[172.30.137.140:{}]
	I0318 13:10:57.024143    2404 command_runner.go:130] ! I0318 13:07:20.861845       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.3.0/24] 
	I0318 13:10:57.043569    2404 logs.go:123] Gathering logs for Docker ...
	I0318 13:10:57.043569    2404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 13:10:57.073951    2404 command_runner.go:130] > Mar 18 13:08:19 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0318 13:10:57.074760    2404 command_runner.go:130] > Mar 18 13:08:19 minikube cri-dockerd[222]: time="2024-03-18T13:08:19Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0318 13:10:57.074875    2404 command_runner.go:130] > Mar 18 13:08:19 minikube cri-dockerd[222]: time="2024-03-18T13:08:19Z" level=info msg="Start docker client with request timeout 0s"
	I0318 13:10:57.074875    2404 command_runner.go:130] > Mar 18 13:08:19 minikube cri-dockerd[222]: time="2024-03-18T13:08:19Z" level=fatal msg="failed to get docker version: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0318 13:10:57.074978    2404 command_runner.go:130] > Mar 18 13:08:19 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0318 13:10:57.074978    2404 command_runner.go:130] > Mar 18 13:08:19 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0318 13:10:57.075019    2404 command_runner.go:130] > Mar 18 13:08:19 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0318 13:10:57.075019    2404 command_runner.go:130] > Mar 18 13:08:22 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 1.
	I0318 13:10:57.075105    2404 command_runner.go:130] > Mar 18 13:08:22 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0318 13:10:57.075105    2404 command_runner.go:130] > Mar 18 13:08:22 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0318 13:10:57.075105    2404 command_runner.go:130] > Mar 18 13:08:22 minikube cri-dockerd[412]: time="2024-03-18T13:08:22Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0318 13:10:57.075184    2404 command_runner.go:130] > Mar 18 13:08:22 minikube cri-dockerd[412]: time="2024-03-18T13:08:22Z" level=info msg="Start docker client with request timeout 0s"
	I0318 13:10:57.075229    2404 command_runner.go:130] > Mar 18 13:08:22 minikube cri-dockerd[412]: time="2024-03-18T13:08:22Z" level=fatal msg="failed to get docker version: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0318 13:10:57.075229    2404 command_runner.go:130] > Mar 18 13:08:22 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0318 13:10:57.075285    2404 command_runner.go:130] > Mar 18 13:08:22 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0318 13:10:57.075331    2404 command_runner.go:130] > Mar 18 13:08:22 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0318 13:10:57.075350    2404 command_runner.go:130] > Mar 18 13:08:24 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 2.
	I0318 13:10:57.075428    2404 command_runner.go:130] > Mar 18 13:08:24 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0318 13:10:57.075428    2404 command_runner.go:130] > Mar 18 13:08:24 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0318 13:10:57.075457    2404 command_runner.go:130] > Mar 18 13:08:24 minikube cri-dockerd[434]: time="2024-03-18T13:08:24Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0318 13:10:57.075492    2404 command_runner.go:130] > Mar 18 13:08:24 minikube cri-dockerd[434]: time="2024-03-18T13:08:24Z" level=info msg="Start docker client with request timeout 0s"
	I0318 13:10:57.075525    2404 command_runner.go:130] > Mar 18 13:08:24 minikube cri-dockerd[434]: time="2024-03-18T13:08:24Z" level=fatal msg="failed to get docker version: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0318 13:10:57.075542    2404 command_runner.go:130] > Mar 18 13:08:24 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0318 13:10:57.075542    2404 command_runner.go:130] > Mar 18 13:08:24 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0318 13:10:57.075542    2404 command_runner.go:130] > Mar 18 13:08:24 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0318 13:10:57.075542    2404 command_runner.go:130] > Mar 18 13:08:26 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 3.
	I0318 13:10:57.075601    2404 command_runner.go:130] > Mar 18 13:08:26 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0318 13:10:57.075624    2404 command_runner.go:130] > Mar 18 13:08:26 minikube systemd[1]: cri-docker.service: Start request repeated too quickly.
	I0318 13:10:57.075624    2404 command_runner.go:130] > Mar 18 13:08:26 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0318 13:10:57.075624    2404 command_runner.go:130] > Mar 18 13:08:26 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0318 13:10:57.075624    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 systemd[1]: Starting Docker Application Container Engine...
	I0318 13:10:57.075684    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[662]: time="2024-03-18T13:09:08.926008208Z" level=info msg="Starting up"
	I0318 13:10:57.075717    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[662]: time="2024-03-18T13:09:08.927042019Z" level=info msg="containerd not running, starting managed containerd"
	I0318 13:10:57.075742    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[662]: time="2024-03-18T13:09:08.928263831Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=668
	I0318 13:10:57.075816    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.958180831Z" level=info msg="starting containerd" revision=dcf2847247e18caba8dce86522029642f60fe96b version=v1.7.14
	I0318 13:10:57.075816    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.981644866Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0318 13:10:57.075874    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.981729667Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0318 13:10:57.075907    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.981890169Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0318 13:10:57.075907    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.982007470Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0318 13:10:57.075947    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.982683977Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0318 13:10:57.075981    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.982866878Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0318 13:10:57.075981    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.983040880Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0318 13:10:57.076052    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.983180882Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0318 13:10:57.076108    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.983201082Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0318 13:10:57.076108    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.983210682Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0318 13:10:57.076141    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.983772288Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0318 13:10:57.076182    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.984603896Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0318 13:10:57.076217    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.987157222Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0318 13:10:57.076272    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.987245222Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0318 13:10:57.076272    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.987380024Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0318 13:10:57.076327    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.987459025Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0318 13:10:57.076389    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.988076231Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0318 13:10:57.076446    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.988215332Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0318 13:10:57.076446    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.988231932Z" level=info msg="metadata content store policy set" policy=shared
	I0318 13:10:57.076505    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.994386894Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0318 13:10:57.076505    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.994536096Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0318 13:10:57.076505    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.994574296Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0318 13:10:57.076560    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.994587696Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0318 13:10:57.076560    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.994605296Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0318 13:10:57.076560    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.994669597Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0318 13:10:57.076618    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.995239203Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0318 13:10:57.076691    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.995378304Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0318 13:10:57.076723    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.995441205Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0318 13:10:57.076723    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.995564406Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0318 13:10:57.076755    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.995751508Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0318 13:10:57.076755    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.995819808Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0318 13:10:57.076755    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.995841009Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0318 13:10:57.076755    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.995857509Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0318 13:10:57.076755    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.995870509Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0318 13:10:57.076755    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.995903509Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0318 13:10:57.076755    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.995925809Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0318 13:10:57.076755    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.995942710Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0318 13:10:57.076755    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.995963610Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0318 13:10:57.076755    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.995980410Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0318 13:10:57.076755    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.996091811Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0318 13:10:57.076755    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.996121511Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0318 13:10:57.076755    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.996134612Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0318 13:10:57.076755    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.996151212Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0318 13:10:57.076755    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.996165012Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0318 13:10:57.076755    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.996179412Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0318 13:10:57.076755    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.996194912Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0318 13:10:57.076755    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.996291913Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0318 13:10:57.076755    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.996404914Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0318 13:10:57.076755    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.996427114Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0318 13:10:57.076755    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.996445915Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0318 13:10:57.076755    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.996468515Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0318 13:10:57.076755    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.996497915Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0318 13:10:57.076755    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.996538416Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0318 13:10:57.076755    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.996560016Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0318 13:10:57.076755    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.997036721Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0318 13:10:57.076755    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.997287923Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	I0318 13:10:57.077280    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.997398924Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0318 13:10:57.077280    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.997518125Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	I0318 13:10:57.077318    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.998045931Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0318 13:10:57.077351    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.998612736Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0318 13:10:57.077351    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.998643637Z" level=info msg="NRI interface is disabled by configuration."
	I0318 13:10:57.077351    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.999395544Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0318 13:10:57.077351    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.999606346Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0318 13:10:57.077351    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.999683147Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0318 13:10:57.077351    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.999765648Z" level=info msg="containerd successfully booted in 0.044672s"
	I0318 13:10:57.077351    2404 command_runner.go:130] > Mar 18 13:09:09 multinode-894400 dockerd[662]: time="2024-03-18T13:09:09.982989696Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0318 13:10:57.077351    2404 command_runner.go:130] > Mar 18 13:09:10 multinode-894400 dockerd[662]: time="2024-03-18T13:09:10.138351976Z" level=info msg="Loading containers: start."
	I0318 13:10:57.077351    2404 command_runner.go:130] > Mar 18 13:09:10 multinode-894400 dockerd[662]: time="2024-03-18T13:09:10.545129368Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0318 13:10:57.077351    2404 command_runner.go:130] > Mar 18 13:09:10 multinode-894400 dockerd[662]: time="2024-03-18T13:09:10.626119356Z" level=info msg="Loading containers: done."
	I0318 13:10:57.077351    2404 command_runner.go:130] > Mar 18 13:09:10 multinode-894400 dockerd[662]: time="2024-03-18T13:09:10.653541890Z" level=info msg="Docker daemon" commit=061aa95 containerd-snapshotter=false storage-driver=overlay2 version=25.0.4
	I0318 13:10:57.077351    2404 command_runner.go:130] > Mar 18 13:09:10 multinode-894400 dockerd[662]: time="2024-03-18T13:09:10.654242899Z" level=info msg="Daemon has completed initialization"
	I0318 13:10:57.077351    2404 command_runner.go:130] > Mar 18 13:09:10 multinode-894400 dockerd[662]: time="2024-03-18T13:09:10.702026381Z" level=info msg="API listen on /var/run/docker.sock"
	I0318 13:10:57.077351    2404 command_runner.go:130] > Mar 18 13:09:10 multinode-894400 systemd[1]: Started Docker Application Container Engine.
	I0318 13:10:57.077351    2404 command_runner.go:130] > Mar 18 13:09:10 multinode-894400 dockerd[662]: time="2024-03-18T13:09:10.704980317Z" level=info msg="API listen on [::]:2376"
	I0318 13:10:57.077351    2404 command_runner.go:130] > Mar 18 13:09:35 multinode-894400 systemd[1]: Stopping Docker Application Container Engine...
	I0318 13:10:57.077351    2404 command_runner.go:130] > Mar 18 13:09:35 multinode-894400 dockerd[662]: time="2024-03-18T13:09:35.118112316Z" level=info msg="Processing signal 'terminated'"
	I0318 13:10:57.077351    2404 command_runner.go:130] > Mar 18 13:09:35 multinode-894400 dockerd[662]: time="2024-03-18T13:09:35.120561724Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0318 13:10:57.077351    2404 command_runner.go:130] > Mar 18 13:09:35 multinode-894400 dockerd[662]: time="2024-03-18T13:09:35.120708425Z" level=info msg="Daemon shutdown complete"
	I0318 13:10:57.077351    2404 command_runner.go:130] > Mar 18 13:09:35 multinode-894400 dockerd[662]: time="2024-03-18T13:09:35.120817525Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0318 13:10:57.077351    2404 command_runner.go:130] > Mar 18 13:09:35 multinode-894400 dockerd[662]: time="2024-03-18T13:09:35.120965826Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0318 13:10:57.077351    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 systemd[1]: docker.service: Deactivated successfully.
	I0318 13:10:57.077351    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 systemd[1]: Stopped Docker Application Container Engine.
	I0318 13:10:57.077351    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 systemd[1]: Starting Docker Application Container Engine...
	I0318 13:10:57.077351    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1052]: time="2024-03-18T13:09:36.188961030Z" level=info msg="Starting up"
	I0318 13:10:57.077351    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1052]: time="2024-03-18T13:09:36.190214934Z" level=info msg="containerd not running, starting managed containerd"
	I0318 13:10:57.077351    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1052]: time="2024-03-18T13:09:36.191301438Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1058
	I0318 13:10:57.077351    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.220111635Z" level=info msg="starting containerd" revision=dcf2847247e18caba8dce86522029642f60fe96b version=v1.7.14
	I0318 13:10:57.077351    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.244480717Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0318 13:10:57.077351    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.244510717Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0318 13:10:57.077868    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.244539917Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0318 13:10:57.077868    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.244552117Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0318 13:10:57.077939    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.244588817Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0318 13:10:57.077939    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.244601217Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0318 13:10:57.077939    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.244707818Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0318 13:10:57.077939    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.244791318Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0318 13:10:57.078055    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.244809418Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0318 13:10:57.078055    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.244818018Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0318 13:10:57.078100    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.244838218Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0318 13:10:57.078117    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.244975219Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0318 13:10:57.078117    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.248195830Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0318 13:10:57.078172    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.248302930Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0318 13:10:57.078216    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.248446530Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0318 13:10:57.078216    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.248548631Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0318 13:10:57.078216    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.248576331Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0318 13:10:57.078216    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.248593831Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0318 13:10:57.078216    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.248604331Z" level=info msg="metadata content store policy set" policy=shared
	I0318 13:10:57.078216    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.249888435Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0318 13:10:57.078216    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.249971436Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0318 13:10:57.078216    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.250624738Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0318 13:10:57.078216    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.250745538Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0318 13:10:57.078216    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.250859739Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0318 13:10:57.078216    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.251093339Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0318 13:10:57.078216    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.252590644Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0318 13:10:57.078216    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.252685145Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0318 13:10:57.078216    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.252703545Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0318 13:10:57.078216    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.252722945Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0318 13:10:57.078216    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.252736845Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0318 13:10:57.078216    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.252749745Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0318 13:10:57.078216    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.252793045Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0318 13:10:57.078216    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.252998846Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0318 13:10:57.078216    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.253020946Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0318 13:10:57.078216    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.253065546Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0318 13:10:57.078216    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.253080846Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0318 13:10:57.078216    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.253090746Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0318 13:10:57.078216    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.253177146Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0318 13:10:57.078738    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.253201547Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0318 13:10:57.078773    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.253215147Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0318 13:10:57.078773    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.253229847Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0318 13:10:57.078773    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.253243047Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0318 13:10:57.078864    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.253257847Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0318 13:10:57.078864    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.253270347Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0318 13:10:57.078864    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.253284147Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0318 13:10:57.078864    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.253297547Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0318 13:10:57.078864    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.253313047Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0318 13:10:57.078864    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.253331047Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0318 13:10:57.078864    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.253344647Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0318 13:10:57.078864    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.253357947Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0318 13:10:57.078864    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.253374747Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0318 13:10:57.078864    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.253395147Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0318 13:10:57.078864    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.253407847Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0318 13:10:57.078864    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.253420947Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0318 13:10:57.078864    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.253503448Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0318 13:10:57.078864    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.253519848Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	I0318 13:10:57.078864    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.253532848Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0318 13:10:57.078864    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.253542748Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	I0318 13:10:57.078864    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.253613548Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0318 13:10:57.078864    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.253652648Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0318 13:10:57.078864    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.253668048Z" level=info msg="NRI interface is disabled by configuration."
	I0318 13:10:57.078864    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.254026949Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0318 13:10:57.078864    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.254474051Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0318 13:10:57.078864    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.254684152Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0318 13:10:57.078864    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.254775452Z" level=info msg="containerd successfully booted in 0.035926s"
	I0318 13:10:57.078864    2404 command_runner.go:130] > Mar 18 13:09:37 multinode-894400 dockerd[1052]: time="2024-03-18T13:09:37.234846559Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0318 13:10:57.078864    2404 command_runner.go:130] > Mar 18 13:09:37 multinode-894400 dockerd[1052]: time="2024-03-18T13:09:37.265734263Z" level=info msg="Loading containers: start."
	I0318 13:10:57.079389    2404 command_runner.go:130] > Mar 18 13:09:37 multinode-894400 dockerd[1052]: time="2024-03-18T13:09:37.543045299Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0318 13:10:57.079389    2404 command_runner.go:130] > Mar 18 13:09:37 multinode-894400 dockerd[1052]: time="2024-03-18T13:09:37.620368360Z" level=info msg="Loading containers: done."
	I0318 13:10:57.079424    2404 command_runner.go:130] > Mar 18 13:09:37 multinode-894400 dockerd[1052]: time="2024-03-18T13:09:37.642056833Z" level=info msg="Docker daemon" commit=061aa95 containerd-snapshotter=false storage-driver=overlay2 version=25.0.4
	I0318 13:10:57.079424    2404 command_runner.go:130] > Mar 18 13:09:37 multinode-894400 dockerd[1052]: time="2024-03-18T13:09:37.642227734Z" level=info msg="Daemon has completed initialization"
	I0318 13:10:57.079424    2404 command_runner.go:130] > Mar 18 13:09:37 multinode-894400 dockerd[1052]: time="2024-03-18T13:09:37.686175082Z" level=info msg="API listen on /var/run/docker.sock"
	I0318 13:10:57.079424    2404 command_runner.go:130] > Mar 18 13:09:37 multinode-894400 systemd[1]: Started Docker Application Container Engine.
	I0318 13:10:57.079424    2404 command_runner.go:130] > Mar 18 13:09:37 multinode-894400 dockerd[1052]: time="2024-03-18T13:09:37.687135485Z" level=info msg="API listen on [::]:2376"
	I0318 13:10:57.079542    2404 command_runner.go:130] > Mar 18 13:09:38 multinode-894400 systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0318 13:10:57.079542    2404 command_runner.go:130] > Mar 18 13:09:38 multinode-894400 cri-dockerd[1278]: time="2024-03-18T13:09:38Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0318 13:10:57.079542    2404 command_runner.go:130] > Mar 18 13:09:38 multinode-894400 cri-dockerd[1278]: time="2024-03-18T13:09:38Z" level=info msg="Start docker client with request timeout 0s"
	I0318 13:10:57.079600    2404 command_runner.go:130] > Mar 18 13:09:38 multinode-894400 cri-dockerd[1278]: time="2024-03-18T13:09:38Z" level=info msg="Hairpin mode is set to hairpin-veth"
	I0318 13:10:57.079622    2404 command_runner.go:130] > Mar 18 13:09:38 multinode-894400 cri-dockerd[1278]: time="2024-03-18T13:09:38Z" level=info msg="Loaded network plugin cni"
	I0318 13:10:57.079647    2404 command_runner.go:130] > Mar 18 13:09:38 multinode-894400 cri-dockerd[1278]: time="2024-03-18T13:09:38Z" level=info msg="Docker cri networking managed by network plugin cni"
	I0318 13:10:57.079647    2404 command_runner.go:130] > Mar 18 13:09:38 multinode-894400 cri-dockerd[1278]: time="2024-03-18T13:09:38Z" level=info msg="Docker Info: &{ID:5695bce5-a75b-48a7-87b1-d9b6b787473a Containers:18 ContainersRunning:0 ContainersPaused:0 ContainersStopped:18 Images:10 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:[] Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:[] Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6tables:true Debug:false NFd:26 OomKillDisable:false NGoroutines:52 SystemTime:2024-03-18T13:09:38.671342607Z LoggingDriver:json-file CgroupDriver:cgroupfs CgroupVersion:2 NEventsListener:0 KernelVersion:5.10.207 OperatingSystem:Buildroot 2023.02.9 OSVersion:2023.02.9 OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:0xc00034fe30 NCPU:2 MemTotal:2216210432 GenericResources:[] DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:multinode-894400 Labels:[provider=hyperv] ExperimentalBuild:false ServerVersion:25.0.4 ClusterStore: ClusterAdvertise: Runtimes:map[io.containerd.runc.v2:{Path:runc Args:[] Shim:<nil>} runc:{Path:runc Args:[] Shim:<nil>}] DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:[] Nodes:0 Managers:0 Cluster:<nil> Warnings:[]} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dcf2847247e18caba8dce86522029642f60fe96b Expected:dcf2847247e18caba8dce86522029642f60fe96b} RuncCommit:{ID:51d5e94601ceffbbd85688df1c928ecccbfa4685 Expected:51d5e94601ceffbbd85688df1c928ecccbfa4685} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=builtin name=cgroupns] ProductLicense:Community Engine DefaultAddressPools:[] Warnings:[]}"
	I0318 13:10:57.079647    2404 command_runner.go:130] > Mar 18 13:09:38 multinode-894400 cri-dockerd[1278]: time="2024-03-18T13:09:38Z" level=info msg="Setting cgroupDriver cgroupfs"
	I0318 13:10:57.079647    2404 command_runner.go:130] > Mar 18 13:09:38 multinode-894400 cri-dockerd[1278]: time="2024-03-18T13:09:38Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	I0318 13:10:57.079647    2404 command_runner.go:130] > Mar 18 13:09:38 multinode-894400 cri-dockerd[1278]: time="2024-03-18T13:09:38Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	I0318 13:10:57.079647    2404 command_runner.go:130] > Mar 18 13:09:38 multinode-894400 cri-dockerd[1278]: time="2024-03-18T13:09:38Z" level=info msg="Start cri-dockerd grpc backend"
	I0318 13:10:57.079647    2404 command_runner.go:130] > Mar 18 13:09:38 multinode-894400 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	I0318 13:10:57.079647    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 cri-dockerd[1278]: time="2024-03-18T13:09:43Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-5dd5756b68-456tm_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"d001e299e996b74f090c73f4e98605ef0d323a96826d23424e724ca8e7fe466a\""
	I0318 13:10:57.079647    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 cri-dockerd[1278]: time="2024-03-18T13:09:43Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"busybox-5b5d89c9d6-c2997_default\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"a23c1189be7c39f22c8a59ba41b198519894c9783ecad8dcda79fb8a76f05254\""
	I0318 13:10:57.079647    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:43.791205184Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 13:10:57.079647    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:43.791356085Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 13:10:57.079647    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:43.791396985Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:10:57.079647    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:43.791577685Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:10:57.079647    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:43.838312843Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 13:10:57.080188    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:43.838494344Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 13:10:57.080243    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:43.838510044Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:10:57.080243    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:43.838727044Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:10:57.080243    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:43.951016023Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 13:10:57.080359    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:43.951141424Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 13:10:57.080359    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:43.951152624Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:10:57.080412    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:43.951369125Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:10:57.080412    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 cri-dockerd[1278]: time="2024-03-18T13:09:43Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/066206d4c52cb784fe7c2001b5e196c6e3521560c412808e8d9ddf742aa008e4/resolv.conf as [nameserver 172.30.128.1]"
	I0318 13:10:57.080412    2404 command_runner.go:130] > Mar 18 13:09:44 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:44.020194457Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 13:10:57.080412    2404 command_runner.go:130] > Mar 18 13:09:44 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:44.020684858Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 13:10:57.080412    2404 command_runner.go:130] > Mar 18 13:09:44 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:44.023241167Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:10:57.080412    2404 command_runner.go:130] > Mar 18 13:09:44 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:44.023675469Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:10:57.080412    2404 command_runner.go:130] > Mar 18 13:09:44 multinode-894400 cri-dockerd[1278]: time="2024-03-18T13:09:44Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/bc7236a19957e321c1961c944824f2b4624bd7a289ab4ecefe33a08d4af88e2b/resolv.conf as [nameserver 172.30.128.1]"
	I0318 13:10:57.080412    2404 command_runner.go:130] > Mar 18 13:09:44 multinode-894400 cri-dockerd[1278]: time="2024-03-18T13:09:44Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/6fb3325d3c1005ffbbbfe7b136924ed5ff0c71db51f79a50f7179c108c238d47/resolv.conf as [nameserver 172.30.128.1]"
	I0318 13:10:57.080412    2404 command_runner.go:130] > Mar 18 13:09:44 multinode-894400 cri-dockerd[1278]: time="2024-03-18T13:09:44Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/354f3c44a34fcbd9ca7826066f2b4c40b3a6551aa632389815c5d24e85e0472b/resolv.conf as [nameserver 172.30.128.1]"
	I0318 13:10:57.080412    2404 command_runner.go:130] > Mar 18 13:09:44 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:44.396374926Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 13:10:57.080412    2404 command_runner.go:130] > Mar 18 13:09:44 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:44.396436126Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 13:10:57.080412    2404 command_runner.go:130] > Mar 18 13:09:44 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:44.396447326Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:10:57.080412    2404 command_runner.go:130] > Mar 18 13:09:44 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:44.396626927Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:10:57.080412    2404 command_runner.go:130] > Mar 18 13:09:44 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:44.467642467Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 13:10:57.080412    2404 command_runner.go:130] > Mar 18 13:09:44 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:44.467879868Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 13:10:57.080412    2404 command_runner.go:130] > Mar 18 13:09:44 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:44.468180469Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:10:57.080412    2404 command_runner.go:130] > Mar 18 13:09:44 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:44.468559970Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:10:57.080412    2404 command_runner.go:130] > Mar 18 13:09:44 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:44.476573097Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 13:10:57.080412    2404 command_runner.go:130] > Mar 18 13:09:44 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:44.476618697Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 13:10:57.080412    2404 command_runner.go:130] > Mar 18 13:09:44 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:44.476631197Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:10:57.080412    2404 command_runner.go:130] > Mar 18 13:09:44 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:44.476702797Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:10:57.080412    2404 command_runner.go:130] > Mar 18 13:09:44 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:44.482324416Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 13:10:57.080944    2404 command_runner.go:130] > Mar 18 13:09:44 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:44.482501517Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 13:10:57.080944    2404 command_runner.go:130] > Mar 18 13:09:44 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:44.482648417Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:10:57.081019    2404 command_runner.go:130] > Mar 18 13:09:44 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:44.482918618Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:10:57.081019    2404 command_runner.go:130] > Mar 18 13:09:48 multinode-894400 cri-dockerd[1278]: time="2024-03-18T13:09:48Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	I0318 13:10:57.081118    2404 command_runner.go:130] > Mar 18 13:09:49 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:49.545677603Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 13:10:57.081118    2404 command_runner.go:130] > Mar 18 13:09:49 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:49.548609313Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 13:10:57.081158    2404 command_runner.go:130] > Mar 18 13:09:49 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:49.548646013Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:10:57.081193    2404 command_runner.go:130] > Mar 18 13:09:49 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:49.549168715Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:10:57.081259    2404 command_runner.go:130] > Mar 18 13:09:49 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:49.592129660Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 13:10:57.081259    2404 command_runner.go:130] > Mar 18 13:09:49 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:49.592185160Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 13:10:57.081259    2404 command_runner.go:130] > Mar 18 13:09:49 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:49.592195760Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:10:57.081386    2404 command_runner.go:130] > Mar 18 13:09:49 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:49.592280460Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:10:57.081421    2404 command_runner.go:130] > Mar 18 13:09:49 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:49.615117337Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 13:10:57.081448    2404 command_runner.go:130] > Mar 18 13:09:49 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:49.615393238Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 13:10:57.081471    2404 command_runner.go:130] > Mar 18 13:09:49 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:49.615610139Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:10:57.081471    2404 command_runner.go:130] > Mar 18 13:09:49 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:49.621669759Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:10:57.081537    2404 command_runner.go:130] > Mar 18 13:09:49 multinode-894400 cri-dockerd[1278]: time="2024-03-18T13:09:49Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a9f21749669fe37a2d5bee9e7f45bf84eca6ae55870de8ac18bc774db7f81643/resolv.conf as [nameserver 172.30.128.1]"
	I0318 13:10:57.081563    2404 command_runner.go:130] > Mar 18 13:09:49 multinode-894400 cri-dockerd[1278]: time="2024-03-18T13:09:49Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/41035eff3b7db97e56ba25be141bf644acea4ddcb9d92ba3b421e6fa855a09af/resolv.conf as [nameserver 172.30.128.1]"
	I0318 13:10:57.081563    2404 command_runner.go:130] > Mar 18 13:09:49 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:49.995795822Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 13:10:57.081658    2404 command_runner.go:130] > Mar 18 13:09:49 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:49.995895422Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 13:10:57.081701    2404 command_runner.go:130] > Mar 18 13:09:49 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:49.995916522Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:10:57.081701    2404 command_runner.go:130] > Mar 18 13:09:49 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:49.996021523Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:10:57.081787    2404 command_runner.go:130] > Mar 18 13:09:50 multinode-894400 cri-dockerd[1278]: time="2024-03-18T13:09:50Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/86d74dec812cfd4842de7473d45ff320bfb2283d09d2d05eee29e45cbde9f5e9/resolv.conf as [nameserver 172.30.128.1]"
	I0318 13:10:57.081813    2404 command_runner.go:130] > Mar 18 13:09:50 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:50.171141514Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 13:10:57.081813    2404 command_runner.go:130] > Mar 18 13:09:50 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:50.171335814Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 13:10:57.081813    2404 command_runner.go:130] > Mar 18 13:09:50 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:50.171461415Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:10:57.081916    2404 command_runner.go:130] > Mar 18 13:09:50 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:50.171764216Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:10:57.081916    2404 command_runner.go:130] > Mar 18 13:09:50 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:50.391481057Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 13:10:57.081916    2404 command_runner.go:130] > Mar 18 13:09:50 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:50.391826158Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 13:10:57.082048    2404 command_runner.go:130] > Mar 18 13:09:50 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:50.391990059Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:10:57.082048    2404 command_runner.go:130] > Mar 18 13:09:50 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:50.393600364Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:10:57.082048    2404 command_runner.go:130] > Mar 18 13:10:20 multinode-894400 dockerd[1052]: time="2024-03-18T13:10:20.550892922Z" level=info msg="ignoring event" container=46c0cf90d385f0a29de6e795bc03f2d1837ea1819ef2389992a26aaf77e63264 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0318 13:10:57.082120    2404 command_runner.go:130] > Mar 18 13:10:20 multinode-894400 dockerd[1058]: time="2024-03-18T13:10:20.551487227Z" level=info msg="shim disconnected" id=46c0cf90d385f0a29de6e795bc03f2d1837ea1819ef2389992a26aaf77e63264 namespace=moby
	I0318 13:10:57.082197    2404 command_runner.go:130] > Mar 18 13:10:20 multinode-894400 dockerd[1058]: time="2024-03-18T13:10:20.551627628Z" level=warning msg="cleaning up after shim disconnected" id=46c0cf90d385f0a29de6e795bc03f2d1837ea1819ef2389992a26aaf77e63264 namespace=moby
	I0318 13:10:57.082252    2404 command_runner.go:130] > Mar 18 13:10:20 multinode-894400 dockerd[1058]: time="2024-03-18T13:10:20.551639828Z" level=info msg="cleaning up dead shim" namespace=moby
	I0318 13:10:57.082252    2404 command_runner.go:130] > Mar 18 13:10:35 multinode-894400 dockerd[1058]: time="2024-03-18T13:10:35.200900512Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 13:10:57.082252    2404 command_runner.go:130] > Mar 18 13:10:35 multinode-894400 dockerd[1058]: time="2024-03-18T13:10:35.202882722Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 13:10:57.082315    2404 command_runner.go:130] > Mar 18 13:10:35 multinode-894400 dockerd[1058]: time="2024-03-18T13:10:35.203198024Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:10:57.082377    2404 command_runner.go:130] > Mar 18 13:10:35 multinode-894400 dockerd[1058]: time="2024-03-18T13:10:35.203763327Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:10:57.082377    2404 command_runner.go:130] > Mar 18 13:10:53 multinode-894400 dockerd[1058]: time="2024-03-18T13:10:53.250783392Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 13:10:57.082377    2404 command_runner.go:130] > Mar 18 13:10:53 multinode-894400 dockerd[1058]: time="2024-03-18T13:10:53.252016097Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 13:10:57.082377    2404 command_runner.go:130] > Mar 18 13:10:53 multinode-894400 dockerd[1058]: time="2024-03-18T13:10:53.252234698Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:10:57.082377    2404 command_runner.go:130] > Mar 18 13:10:53 multinode-894400 dockerd[1058]: time="2024-03-18T13:10:53.252566299Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:10:57.082377    2404 command_runner.go:130] > Mar 18 13:10:53 multinode-894400 dockerd[1058]: time="2024-03-18T13:10:53.259013124Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 13:10:57.082377    2404 command_runner.go:130] > Mar 18 13:10:53 multinode-894400 dockerd[1058]: time="2024-03-18T13:10:53.259187125Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 13:10:57.082377    2404 command_runner.go:130] > Mar 18 13:10:53 multinode-894400 dockerd[1058]: time="2024-03-18T13:10:53.259204725Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:10:57.082377    2404 command_runner.go:130] > Mar 18 13:10:53 multinode-894400 dockerd[1058]: time="2024-03-18T13:10:53.259319625Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:10:57.082377    2404 command_runner.go:130] > Mar 18 13:10:53 multinode-894400 cri-dockerd[1278]: time="2024-03-18T13:10:53Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/97583cc14f115cf8a4e90889b5f2beda90a81f97fd592e5e5acff8d35e305a59/resolv.conf as [nameserver 172.30.128.1]"
	I0318 13:10:57.082377    2404 command_runner.go:130] > Mar 18 13:10:53 multinode-894400 cri-dockerd[1278]: time="2024-03-18T13:10:53Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/e20878b8092c291820adeb66f1b491dcef85c0699c57800cced7d3530d2a07fb/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	I0318 13:10:57.082377    2404 command_runner.go:130] > Mar 18 13:10:53 multinode-894400 dockerd[1058]: time="2024-03-18T13:10:53.818847676Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 13:10:57.082377    2404 command_runner.go:130] > Mar 18 13:10:53 multinode-894400 dockerd[1058]: time="2024-03-18T13:10:53.818997976Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 13:10:57.082377    2404 command_runner.go:130] > Mar 18 13:10:53 multinode-894400 dockerd[1058]: time="2024-03-18T13:10:53.819021476Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:10:57.082377    2404 command_runner.go:130] > Mar 18 13:10:53 multinode-894400 dockerd[1058]: time="2024-03-18T13:10:53.819463578Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:10:57.082377    2404 command_runner.go:130] > Mar 18 13:10:53 multinode-894400 dockerd[1058]: time="2024-03-18T13:10:53.825706506Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 13:10:57.082377    2404 command_runner.go:130] > Mar 18 13:10:53 multinode-894400 dockerd[1058]: time="2024-03-18T13:10:53.825766006Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 13:10:57.082377    2404 command_runner.go:130] > Mar 18 13:10:53 multinode-894400 dockerd[1058]: time="2024-03-18T13:10:53.825780706Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:10:57.082377    2404 command_runner.go:130] > Mar 18 13:10:53 multinode-894400 dockerd[1058]: time="2024-03-18T13:10:53.825864707Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:10:57.082377    2404 command_runner.go:130] > Mar 18 13:10:56 multinode-894400 dockerd[1052]: 2024/03/18 13:10:56 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0318 13:10:57.082377    2404 command_runner.go:130] > Mar 18 13:10:56 multinode-894400 dockerd[1052]: 2024/03/18 13:10:56 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0318 13:10:57.082943    2404 command_runner.go:130] > Mar 18 13:10:57 multinode-894400 dockerd[1052]: 2024/03/18 13:10:57 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0318 13:10:57.112261    2404 logs.go:123] Gathering logs for kube-apiserver [fc4430c7fa20] ...
	I0318 13:10:57.112261    2404 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc4430c7fa20"
	I0318 13:10:57.138280    2404 command_runner.go:130] ! I0318 13:09:45.117348       1 options.go:220] external host was not specified, using 172.30.130.156
	I0318 13:10:57.138280    2404 command_runner.go:130] ! I0318 13:09:45.120803       1 server.go:148] Version: v1.28.4
	I0318 13:10:57.139198    2404 command_runner.go:130] ! I0318 13:09:45.120988       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 13:10:57.139198    2404 command_runner.go:130] ! I0318 13:09:45.770080       1 shared_informer.go:311] Waiting for caches to sync for node_authorizer
	I0318 13:10:57.139198    2404 command_runner.go:130] ! I0318 13:09:45.795010       1 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0318 13:10:57.139334    2404 command_runner.go:130] ! I0318 13:09:45.795318       1 plugins.go:161] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0318 13:10:57.139334    2404 command_runner.go:130] ! I0318 13:09:45.795878       1 instance.go:298] Using reconciler: lease
	I0318 13:10:57.139334    2404 command_runner.go:130] ! I0318 13:09:46.836486       1 handler.go:232] Adding GroupVersion apiextensions.k8s.io v1 to ResourceManager
	I0318 13:10:57.139395    2404 command_runner.go:130] ! W0318 13:09:46.836605       1 genericapiserver.go:744] Skipping API apiextensions.k8s.io/v1beta1 because it has no resources.
	I0318 13:10:57.139395    2404 command_runner.go:130] ! I0318 13:09:47.074638       1 handler.go:232] Adding GroupVersion  v1 to ResourceManager
	I0318 13:10:57.139395    2404 command_runner.go:130] ! I0318 13:09:47.074978       1 instance.go:709] API group "internal.apiserver.k8s.io" is not enabled, skipping.
	I0318 13:10:57.139480    2404 command_runner.go:130] ! I0318 13:09:47.452713       1 instance.go:709] API group "resource.k8s.io" is not enabled, skipping.
	I0318 13:10:57.139480    2404 command_runner.go:130] ! I0318 13:09:47.465860       1 handler.go:232] Adding GroupVersion authentication.k8s.io v1 to ResourceManager
	I0318 13:10:57.139480    2404 command_runner.go:130] ! W0318 13:09:47.465973       1 genericapiserver.go:744] Skipping API authentication.k8s.io/v1beta1 because it has no resources.
	I0318 13:10:57.139480    2404 command_runner.go:130] ! W0318 13:09:47.465981       1 genericapiserver.go:744] Skipping API authentication.k8s.io/v1alpha1 because it has no resources.
	I0318 13:10:57.139555    2404 command_runner.go:130] ! I0318 13:09:47.466706       1 handler.go:232] Adding GroupVersion authorization.k8s.io v1 to ResourceManager
	I0318 13:10:57.139580    2404 command_runner.go:130] ! W0318 13:09:47.466787       1 genericapiserver.go:744] Skipping API authorization.k8s.io/v1beta1 because it has no resources.
	I0318 13:10:57.139580    2404 command_runner.go:130] ! I0318 13:09:47.467862       1 handler.go:232] Adding GroupVersion autoscaling v2 to ResourceManager
	I0318 13:10:57.139580    2404 command_runner.go:130] ! I0318 13:09:47.468840       1 handler.go:232] Adding GroupVersion autoscaling v1 to ResourceManager
	I0318 13:10:57.139580    2404 command_runner.go:130] ! W0318 13:09:47.468926       1 genericapiserver.go:744] Skipping API autoscaling/v2beta1 because it has no resources.
	I0318 13:10:57.139580    2404 command_runner.go:130] ! W0318 13:09:47.468934       1 genericapiserver.go:744] Skipping API autoscaling/v2beta2 because it has no resources.
	I0318 13:10:57.139580    2404 command_runner.go:130] ! I0318 13:09:47.470928       1 handler.go:232] Adding GroupVersion batch v1 to ResourceManager
	I0318 13:10:57.139580    2404 command_runner.go:130] ! W0318 13:09:47.471074       1 genericapiserver.go:744] Skipping API batch/v1beta1 because it has no resources.
	I0318 13:10:57.139580    2404 command_runner.go:130] ! I0318 13:09:47.472121       1 handler.go:232] Adding GroupVersion certificates.k8s.io v1 to ResourceManager
	I0318 13:10:57.139580    2404 command_runner.go:130] ! W0318 13:09:47.472195       1 genericapiserver.go:744] Skipping API certificates.k8s.io/v1beta1 because it has no resources.
	I0318 13:10:57.139580    2404 command_runner.go:130] ! W0318 13:09:47.472202       1 genericapiserver.go:744] Skipping API certificates.k8s.io/v1alpha1 because it has no resources.
	I0318 13:10:57.139580    2404 command_runner.go:130] ! I0318 13:09:47.472773       1 handler.go:232] Adding GroupVersion coordination.k8s.io v1 to ResourceManager
	I0318 13:10:57.139580    2404 command_runner.go:130] ! W0318 13:09:47.472852       1 genericapiserver.go:744] Skipping API coordination.k8s.io/v1beta1 because it has no resources.
	I0318 13:10:57.139580    2404 command_runner.go:130] ! W0318 13:09:47.472898       1 genericapiserver.go:744] Skipping API discovery.k8s.io/v1beta1 because it has no resources.
	I0318 13:10:57.139580    2404 command_runner.go:130] ! I0318 13:09:47.473727       1 handler.go:232] Adding GroupVersion discovery.k8s.io v1 to ResourceManager
	I0318 13:10:57.139580    2404 command_runner.go:130] ! I0318 13:09:47.476475       1 handler.go:232] Adding GroupVersion networking.k8s.io v1 to ResourceManager
	I0318 13:10:57.139580    2404 command_runner.go:130] ! W0318 13:09:47.476612       1 genericapiserver.go:744] Skipping API networking.k8s.io/v1beta1 because it has no resources.
	I0318 13:10:57.139580    2404 command_runner.go:130] ! W0318 13:09:47.476620       1 genericapiserver.go:744] Skipping API networking.k8s.io/v1alpha1 because it has no resources.
	I0318 13:10:57.139580    2404 command_runner.go:130] ! I0318 13:09:47.477234       1 handler.go:232] Adding GroupVersion node.k8s.io v1 to ResourceManager
	I0318 13:10:57.139580    2404 command_runner.go:130] ! W0318 13:09:47.477314       1 genericapiserver.go:744] Skipping API node.k8s.io/v1beta1 because it has no resources.
	I0318 13:10:57.139580    2404 command_runner.go:130] ! W0318 13:09:47.477321       1 genericapiserver.go:744] Skipping API node.k8s.io/v1alpha1 because it has no resources.
	I0318 13:10:57.139580    2404 command_runner.go:130] ! I0318 13:09:47.478143       1 handler.go:232] Adding GroupVersion policy v1 to ResourceManager
	I0318 13:10:57.139580    2404 command_runner.go:130] ! W0318 13:09:47.478217       1 genericapiserver.go:744] Skipping API policy/v1beta1 because it has no resources.
	I0318 13:10:57.140111    2404 command_runner.go:130] ! I0318 13:09:47.480195       1 handler.go:232] Adding GroupVersion rbac.authorization.k8s.io v1 to ResourceManager
	I0318 13:10:57.140165    2404 command_runner.go:130] ! W0318 13:09:47.480271       1 genericapiserver.go:744] Skipping API rbac.authorization.k8s.io/v1beta1 because it has no resources.
	I0318 13:10:57.140165    2404 command_runner.go:130] ! W0318 13:09:47.480279       1 genericapiserver.go:744] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
	I0318 13:10:57.140165    2404 command_runner.go:130] ! I0318 13:09:47.480731       1 handler.go:232] Adding GroupVersion scheduling.k8s.io v1 to ResourceManager
	I0318 13:10:57.140165    2404 command_runner.go:130] ! W0318 13:09:47.480812       1 genericapiserver.go:744] Skipping API scheduling.k8s.io/v1beta1 because it has no resources.
	I0318 13:10:57.140165    2404 command_runner.go:130] ! W0318 13:09:47.480819       1 genericapiserver.go:744] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
	I0318 13:10:57.140165    2404 command_runner.go:130] ! I0318 13:09:47.493837       1 handler.go:232] Adding GroupVersion storage.k8s.io v1 to ResourceManager
	I0318 13:10:57.140284    2404 command_runner.go:130] ! W0318 13:09:47.494098       1 genericapiserver.go:744] Skipping API storage.k8s.io/v1beta1 because it has no resources.
	I0318 13:10:57.140284    2404 command_runner.go:130] ! W0318 13:09:47.494198       1 genericapiserver.go:744] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
	I0318 13:10:57.140284    2404 command_runner.go:130] ! I0318 13:09:47.499689       1 handler.go:232] Adding GroupVersion flowcontrol.apiserver.k8s.io v1beta3 to ResourceManager
	I0318 13:10:57.140341    2404 command_runner.go:130] ! I0318 13:09:47.506631       1 handler.go:232] Adding GroupVersion flowcontrol.apiserver.k8s.io v1beta2 to ResourceManager
	I0318 13:10:57.140359    2404 command_runner.go:130] ! W0318 13:09:47.506664       1 genericapiserver.go:744] Skipping API flowcontrol.apiserver.k8s.io/v1beta1 because it has no resources.
	I0318 13:10:57.140359    2404 command_runner.go:130] ! W0318 13:09:47.506671       1 genericapiserver.go:744] Skipping API flowcontrol.apiserver.k8s.io/v1alpha1 because it has no resources.
	I0318 13:10:57.140359    2404 command_runner.go:130] ! I0318 13:09:47.512288       1 handler.go:232] Adding GroupVersion apps v1 to ResourceManager
	I0318 13:10:57.140419    2404 command_runner.go:130] ! W0318 13:09:47.512371       1 genericapiserver.go:744] Skipping API apps/v1beta2 because it has no resources.
	I0318 13:10:57.140442    2404 command_runner.go:130] ! W0318 13:09:47.512378       1 genericapiserver.go:744] Skipping API apps/v1beta1 because it has no resources.
	I0318 13:10:57.140470    2404 command_runner.go:130] ! I0318 13:09:47.513443       1 handler.go:232] Adding GroupVersion admissionregistration.k8s.io v1 to ResourceManager
	I0318 13:10:57.140470    2404 command_runner.go:130] ! W0318 13:09:47.513547       1 genericapiserver.go:744] Skipping API admissionregistration.k8s.io/v1beta1 because it has no resources.
	I0318 13:10:57.140470    2404 command_runner.go:130] ! W0318 13:09:47.513557       1 genericapiserver.go:744] Skipping API admissionregistration.k8s.io/v1alpha1 because it has no resources.
	I0318 13:10:57.140470    2404 command_runner.go:130] ! I0318 13:09:47.514339       1 handler.go:232] Adding GroupVersion events.k8s.io v1 to ResourceManager
	I0318 13:10:57.140470    2404 command_runner.go:130] ! W0318 13:09:47.514435       1 genericapiserver.go:744] Skipping API events.k8s.io/v1beta1 because it has no resources.
	I0318 13:10:57.140470    2404 command_runner.go:130] ! I0318 13:09:47.536002       1 handler.go:232] Adding GroupVersion apiregistration.k8s.io v1 to ResourceManager
	I0318 13:10:57.140470    2404 command_runner.go:130] ! W0318 13:09:47.536061       1 genericapiserver.go:744] Skipping API apiregistration.k8s.io/v1beta1 because it has no resources.
	I0318 13:10:57.140470    2404 command_runner.go:130] ! I0318 13:09:48.221475       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0318 13:10:57.140470    2404 command_runner.go:130] ! I0318 13:09:48.221960       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0318 13:10:57.140470    2404 command_runner.go:130] ! I0318 13:09:48.222438       1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	I0318 13:10:57.140470    2404 command_runner.go:130] ! I0318 13:09:48.222942       1 secure_serving.go:213] Serving securely on [::]:8443
	I0318 13:10:57.140470    2404 command_runner.go:130] ! I0318 13:09:48.223022       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0318 13:10:57.140470    2404 command_runner.go:130] ! I0318 13:09:48.223440       1 controller.go:78] Starting OpenAPI AggregationController
	I0318 13:10:57.140470    2404 command_runner.go:130] ! I0318 13:09:48.224862       1 gc_controller.go:78] Starting apiserver lease garbage collector
	I0318 13:10:57.140470    2404 command_runner.go:130] ! I0318 13:09:48.225271       1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
	I0318 13:10:57.140470    2404 command_runner.go:130] ! I0318 13:09:48.225417       1 shared_informer.go:311] Waiting for caches to sync for cluster_authentication_trust_controller
	I0318 13:10:57.140470    2404 command_runner.go:130] ! I0318 13:09:48.225564       1 apf_controller.go:372] Starting API Priority and Fairness config controller
	I0318 13:10:57.140470    2404 command_runner.go:130] ! I0318 13:09:48.228940       1 gc_controller.go:78] Starting apiserver lease garbage collector
	I0318 13:10:57.140470    2404 command_runner.go:130] ! I0318 13:09:48.229462       1 controller.go:116] Starting legacy_token_tracking_controller
	I0318 13:10:57.140470    2404 command_runner.go:130] ! I0318 13:09:48.229644       1 shared_informer.go:311] Waiting for caches to sync for configmaps
	I0318 13:10:57.140470    2404 command_runner.go:130] ! I0318 13:09:48.230522       1 system_namespaces_controller.go:67] Starting system namespaces controller
	I0318 13:10:57.140470    2404 command_runner.go:130] ! I0318 13:09:48.230832       1 controller.go:80] Starting OpenAPI V3 AggregationController
	I0318 13:10:57.140470    2404 command_runner.go:130] ! I0318 13:09:48.231097       1 aggregator.go:164] waiting for initial CRD sync...
	I0318 13:10:57.141001    2404 command_runner.go:130] ! I0318 13:09:48.231395       1 customresource_discovery_controller.go:289] Starting DiscoveryController
	I0318 13:10:57.141001    2404 command_runner.go:130] ! I0318 13:09:48.231642       1 available_controller.go:423] Starting AvailableConditionController
	I0318 13:10:57.141057    2404 command_runner.go:130] ! I0318 13:09:48.231846       1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
	I0318 13:10:57.141057    2404 command_runner.go:130] ! I0318 13:09:48.232024       1 dynamic_serving_content.go:132] "Starting controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	I0318 13:10:57.141057    2404 command_runner.go:130] ! I0318 13:09:48.232223       1 apiservice_controller.go:97] Starting APIServiceRegistrationController
	I0318 13:10:57.141168    2404 command_runner.go:130] ! I0318 13:09:48.232638       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
	I0318 13:10:57.141168    2404 command_runner.go:130] ! I0318 13:09:48.233228       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0318 13:10:57.141168    2404 command_runner.go:130] ! I0318 13:09:48.233501       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0318 13:10:57.141254    2404 command_runner.go:130] ! I0318 13:09:48.242598       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0318 13:10:57.141288    2404 command_runner.go:130] ! I0318 13:09:48.242850       1 shared_informer.go:311] Waiting for caches to sync for crd-autoregister
	I0318 13:10:57.141288    2404 command_runner.go:130] ! I0318 13:09:48.243085       1 controller.go:134] Starting OpenAPI controller
	I0318 13:10:57.141288    2404 command_runner.go:130] ! I0318 13:09:48.243289       1 controller.go:85] Starting OpenAPI V3 controller
	I0318 13:10:57.141288    2404 command_runner.go:130] ! I0318 13:09:48.243558       1 naming_controller.go:291] Starting NamingConditionController
	I0318 13:10:57.141346    2404 command_runner.go:130] ! I0318 13:09:48.243852       1 establishing_controller.go:76] Starting EstablishingController
	I0318 13:10:57.141369    2404 command_runner.go:130] ! I0318 13:09:48.244899       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0318 13:10:57.141396    2404 command_runner.go:130] ! I0318 13:09:48.245178       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0318 13:10:57.141396    2404 command_runner.go:130] ! I0318 13:09:48.245796       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0318 13:10:57.141396    2404 command_runner.go:130] ! I0318 13:09:48.231958       1 handler_discovery.go:412] Starting ResourceDiscoveryManager
	I0318 13:10:57.141396    2404 command_runner.go:130] ! I0318 13:09:48.403749       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0318 13:10:57.141396    2404 command_runner.go:130] ! I0318 13:09:48.426183       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I0318 13:10:57.141396    2404 command_runner.go:130] ! I0318 13:09:48.426213       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I0318 13:10:57.141396    2404 command_runner.go:130] ! I0318 13:09:48.426382       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0318 13:10:57.141396    2404 command_runner.go:130] ! I0318 13:09:48.432175       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0318 13:10:57.141396    2404 command_runner.go:130] ! I0318 13:09:48.433073       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0318 13:10:57.141396    2404 command_runner.go:130] ! I0318 13:09:48.433297       1 shared_informer.go:318] Caches are synced for configmaps
	I0318 13:10:57.141396    2404 command_runner.go:130] ! I0318 13:09:48.444484       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0318 13:10:57.141396    2404 command_runner.go:130] ! I0318 13:09:48.444708       1 aggregator.go:166] initial CRD sync complete...
	I0318 13:10:57.141396    2404 command_runner.go:130] ! I0318 13:09:48.444961       1 autoregister_controller.go:141] Starting autoregister controller
	I0318 13:10:57.141396    2404 command_runner.go:130] ! I0318 13:09:48.445263       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0318 13:10:57.141396    2404 command_runner.go:130] ! I0318 13:09:48.446443       1 cache.go:39] Caches are synced for autoregister controller
	I0318 13:10:57.141396    2404 command_runner.go:130] ! I0318 13:09:48.471536       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0318 13:10:57.141396    2404 command_runner.go:130] ! I0318 13:09:49.257477       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0318 13:10:57.141396    2404 command_runner.go:130] ! W0318 13:09:49.806994       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [172.30.130.156]
	I0318 13:10:57.141396    2404 command_runner.go:130] ! I0318 13:09:49.809655       1 controller.go:624] quota admission added evaluator for: endpoints
	I0318 13:10:57.141396    2404 command_runner.go:130] ! I0318 13:09:49.821460       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0318 13:10:57.141396    2404 command_runner.go:130] ! I0318 13:09:51.622752       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0318 13:10:57.141396    2404 command_runner.go:130] ! I0318 13:09:51.799195       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0318 13:10:57.141396    2404 command_runner.go:130] ! I0318 13:09:51.812022       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0318 13:10:57.141396    2404 command_runner.go:130] ! I0318 13:09:51.930541       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0318 13:10:57.141396    2404 command_runner.go:130] ! I0318 13:09:51.942099       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0318 13:10:57.147976    2404 logs.go:123] Gathering logs for kube-scheduler [66ee8be9fada] ...
	I0318 13:10:57.148047    2404 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66ee8be9fada"
	I0318 13:10:57.171495    2404 command_runner.go:130] ! I0318 13:09:45.699415       1 serving.go:348] Generated self-signed cert in-memory
	I0318 13:10:57.171495    2404 command_runner.go:130] ! W0318 13:09:48.342100       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	I0318 13:10:57.171495    2404 command_runner.go:130] ! W0318 13:09:48.342243       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	I0318 13:10:57.171495    2404 command_runner.go:130] ! W0318 13:09:48.342324       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	I0318 13:10:57.171495    2404 command_runner.go:130] ! W0318 13:09:48.342374       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0318 13:10:57.171495    2404 command_runner.go:130] ! I0318 13:09:48.402495       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.4"
	I0318 13:10:57.171495    2404 command_runner.go:130] ! I0318 13:09:48.402540       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 13:10:57.171495    2404 command_runner.go:130] ! I0318 13:09:48.407228       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0318 13:10:57.171495    2404 command_runner.go:130] ! I0318 13:09:48.409117       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0318 13:10:57.171495    2404 command_runner.go:130] ! I0318 13:09:48.410197       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0318 13:10:57.171495    2404 command_runner.go:130] ! I0318 13:09:48.410738       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0318 13:10:57.171495    2404 command_runner.go:130] ! I0318 13:09:48.510577       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0318 13:10:57.173502    2404 logs.go:123] Gathering logs for kindnet [c8e5ec25e910] ...
	I0318 13:10:57.173502    2404 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8e5ec25e910"
	I0318 13:10:57.204310    2404 command_runner.go:130] ! I0318 13:09:50.858529       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0318 13:10:57.204671    2404 command_runner.go:130] ! I0318 13:09:50.859271       1 main.go:107] hostIP = 172.30.130.156
	I0318 13:10:57.204671    2404 command_runner.go:130] ! podIP = 172.30.130.156
	I0318 13:10:57.204671    2404 command_runner.go:130] ! I0318 13:09:50.860380       1 main.go:116] setting mtu 1500 for CNI 
	I0318 13:10:57.204671    2404 command_runner.go:130] ! I0318 13:09:50.930132       1 main.go:146] kindnetd IP family: "ipv4"
	I0318 13:10:57.204671    2404 command_runner.go:130] ! I0318 13:09:50.933463       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0318 13:10:57.204671    2404 command_runner.go:130] ! I0318 13:10:21.283853       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: i/o timeout
	I0318 13:10:57.204767    2404 command_runner.go:130] ! I0318 13:10:21.335833       1 main.go:223] Handling node with IPs: map[172.30.130.156:{}]
	I0318 13:10:57.204767    2404 command_runner.go:130] ! I0318 13:10:21.335942       1 main.go:227] handling current node
	I0318 13:10:57.204818    2404 command_runner.go:130] ! I0318 13:10:21.336264       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:10:57.204818    2404 command_runner.go:130] ! I0318 13:10:21.336361       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:10:57.204857    2404 command_runner.go:130] ! I0318 13:10:21.336527       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 172.30.140.66 Flags: [] Table: 0} 
	I0318 13:10:57.204857    2404 command_runner.go:130] ! I0318 13:10:21.336670       1 main.go:223] Handling node with IPs: map[172.30.137.140:{}]
	I0318 13:10:57.204857    2404 command_runner.go:130] ! I0318 13:10:21.336680       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.3.0/24] 
	I0318 13:10:57.204930    2404 command_runner.go:130] ! I0318 13:10:21.336727       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 172.30.137.140 Flags: [] Table: 0} 
	I0318 13:10:57.205005    2404 command_runner.go:130] ! I0318 13:10:31.343996       1 main.go:223] Handling node with IPs: map[172.30.130.156:{}]
	I0318 13:10:57.205005    2404 command_runner.go:130] ! I0318 13:10:31.344324       1 main.go:227] handling current node
	I0318 13:10:57.205005    2404 command_runner.go:130] ! I0318 13:10:31.344341       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:10:57.205050    2404 command_runner.go:130] ! I0318 13:10:31.344682       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:10:57.205050    2404 command_runner.go:130] ! I0318 13:10:31.345062       1 main.go:223] Handling node with IPs: map[172.30.137.140:{}]
	I0318 13:10:57.205050    2404 command_runner.go:130] ! I0318 13:10:31.345087       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.3.0/24] 
	I0318 13:10:57.205050    2404 command_runner.go:130] ! I0318 13:10:41.357494       1 main.go:223] Handling node with IPs: map[172.30.130.156:{}]
	I0318 13:10:57.205107    2404 command_runner.go:130] ! I0318 13:10:41.357586       1 main.go:227] handling current node
	I0318 13:10:57.205107    2404 command_runner.go:130] ! I0318 13:10:41.357599       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:10:57.205107    2404 command_runner.go:130] ! I0318 13:10:41.357606       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:10:57.205107    2404 command_runner.go:130] ! I0318 13:10:41.357708       1 main.go:223] Handling node with IPs: map[172.30.137.140:{}]
	I0318 13:10:57.205187    2404 command_runner.go:130] ! I0318 13:10:41.357932       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.3.0/24] 
	I0318 13:10:57.205187    2404 command_runner.go:130] ! I0318 13:10:51.367560       1 main.go:223] Handling node with IPs: map[172.30.130.156:{}]
	I0318 13:10:57.205187    2404 command_runner.go:130] ! I0318 13:10:51.367661       1 main.go:227] handling current node
	I0318 13:10:57.205187    2404 command_runner.go:130] ! I0318 13:10:51.367675       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:10:57.205243    2404 command_runner.go:130] ! I0318 13:10:51.367684       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:10:57.205243    2404 command_runner.go:130] ! I0318 13:10:51.367956       1 main.go:223] Handling node with IPs: map[172.30.137.140:{}]
	I0318 13:10:57.205243    2404 command_runner.go:130] ! I0318 13:10:51.368281       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.3.0/24] 
	I0318 13:10:57.207559    2404 logs.go:123] Gathering logs for container status ...
	I0318 13:10:57.207559    2404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:10:57.315032    2404 command_runner.go:130] > CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	I0318 13:10:57.315032    2404 command_runner.go:130] > c5d2074be239f       8c811b4aec35f                                                                                         4 seconds ago        Running             busybox                   1                   e20878b8092c2       busybox-5b5d89c9d6-c2997
	I0318 13:10:57.315032    2404 command_runner.go:130] > 3c3bc988c74cd       ead0a4a53df89                                                                                         4 seconds ago        Running             coredns                   1                   97583cc14f115       coredns-5dd5756b68-456tm
	I0318 13:10:57.315032    2404 command_runner.go:130] > eadcf41dad509       6e38f40d628db                                                                                         22 seconds ago       Running             storage-provisioner       2                   41035eff3b7db       storage-provisioner
	I0318 13:10:57.315032    2404 command_runner.go:130] > c8e5ec25e910e       4950bb10b3f87                                                                                         About a minute ago   Running             kindnet-cni               1                   86d74dec812cf       kindnet-hhsxh
	I0318 13:10:57.315032    2404 command_runner.go:130] > 46c0cf90d385f       6e38f40d628db                                                                                         About a minute ago   Exited              storage-provisioner       1                   41035eff3b7db       storage-provisioner
	I0318 13:10:57.315032    2404 command_runner.go:130] > 163ccabc3882a       83f6cc407eed8                                                                                         About a minute ago   Running             kube-proxy                1                   a9f21749669fe       kube-proxy-mc5tv
	I0318 13:10:57.315348    2404 command_runner.go:130] > 5f0887d1e6913       73deb9a3f7025                                                                                         About a minute ago   Running             etcd                      0                   354f3c44a34fc       etcd-multinode-894400
	I0318 13:10:57.315348    2404 command_runner.go:130] > 66ee8be9fada7       e3db313c6dbc0                                                                                         About a minute ago   Running             kube-scheduler            1                   6fb3325d3c100       kube-scheduler-multinode-894400
	I0318 13:10:57.315413    2404 command_runner.go:130] > fc4430c7fa204       7fe0e6f37db33                                                                                         About a minute ago   Running             kube-apiserver            0                   bc7236a19957e       kube-apiserver-multinode-894400
	I0318 13:10:57.315472    2404 command_runner.go:130] > 4ad6784a187d6       d058aa5ab969c                                                                                         About a minute ago   Running             kube-controller-manager   1                   066206d4c52cb       kube-controller-manager-multinode-894400
	I0318 13:10:57.315515    2404 command_runner.go:130] > dd031b5cb1e85       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   19 minutes ago       Exited              busybox                   0                   a23c1189be7c3       busybox-5b5d89c9d6-c2997
	I0318 13:10:57.315548    2404 command_runner.go:130] > 693a64f7472fd       ead0a4a53df89                                                                                         23 minutes ago       Exited              coredns                   0                   d001e299e996b       coredns-5dd5756b68-456tm
	I0318 13:10:57.315548    2404 command_runner.go:130] > c4d7018ad23a7       kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988              23 minutes ago       Exited              kindnet-cni               0                   a47b1fb60692c       kindnet-hhsxh
	I0318 13:10:57.315602    2404 command_runner.go:130] > 9335855aab63d       83f6cc407eed8                                                                                         23 minutes ago       Exited              kube-proxy                0                   60e9cd749c8f6       kube-proxy-mc5tv
	I0318 13:10:57.315602    2404 command_runner.go:130] > e4d42739ce0e9       e3db313c6dbc0                                                                                         23 minutes ago       Exited              kube-scheduler            0                   82710777e700c       kube-scheduler-multinode-894400
	I0318 13:10:57.315710    2404 command_runner.go:130] > 7aa5cf4ec378e       d058aa5ab969c                                                                                         23 minutes ago       Exited              kube-controller-manager   0                   5485f509825d9       kube-controller-manager-multinode-894400
	I0318 13:10:57.317870    2404 logs.go:123] Gathering logs for coredns [693a64f7472f] ...
	I0318 13:10:57.317915    2404 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 693a64f7472f"
	I0318 13:10:57.351410    2404 command_runner.go:130] > .:53
	I0318 13:10:57.351951    2404 command_runner.go:130] > [INFO] plugin/reload: Running configuration SHA512 = fa2b120b3007aba3ef70c07d73473d14986404bfc5683cceeb89e8950d5ace894afca7d43365324975f99d1b2da3d6473c722fe82795d39a4ee8b4b969b7aa71
	I0318 13:10:57.351951    2404 command_runner.go:130] > CoreDNS-1.10.1
	I0318 13:10:57.351951    2404 command_runner.go:130] > linux/amd64, go1.20, 055b2c3
	I0318 13:10:57.352039    2404 command_runner.go:130] > [INFO] 127.0.0.1:33426 - 38858 "HINFO IN 7345450223813584863.4065419873971828575. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.030234917s
	I0318 13:10:57.352097    2404 command_runner.go:130] > [INFO] 10.244.1.2:56777 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000311303s
	I0318 13:10:57.352097    2404 command_runner.go:130] > [INFO] 10.244.1.2:58024 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.098073876s
	I0318 13:10:57.352097    2404 command_runner.go:130] > [INFO] 10.244.1.2:57941 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd 60 0.154978742s
	I0318 13:10:57.352163    2404 command_runner.go:130] > [INFO] 10.244.1.2:42576 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 1.156414777s
	I0318 13:10:57.352163    2404 command_runner.go:130] > [INFO] 10.244.0.3:43391 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000152802s
	I0318 13:10:57.352199    2404 command_runner.go:130] > [INFO] 10.244.0.3:52523 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 140 0.000121101s
	I0318 13:10:57.352199    2404 command_runner.go:130] > [INFO] 10.244.0.3:36187 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd 60 0.000058401s
	I0318 13:10:57.352243    2404 command_runner.go:130] > [INFO] 10.244.0.3:33451 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,aa,rd,ra 140 0.000055s
	I0318 13:10:57.352243    2404 command_runner.go:130] > [INFO] 10.244.1.2:42180 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000097901s
	I0318 13:10:57.352337    2404 command_runner.go:130] > [INFO] 10.244.1.2:60616 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.142731308s
	I0318 13:10:57.352337    2404 command_runner.go:130] > [INFO] 10.244.1.2:45190 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000152502s
	I0318 13:10:57.352337    2404 command_runner.go:130] > [INFO] 10.244.1.2:55984 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000150102s
	I0318 13:10:57.352337    2404 command_runner.go:130] > [INFO] 10.244.1.2:47725 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.037970075s
	I0318 13:10:57.352411    2404 command_runner.go:130] > [INFO] 10.244.1.2:55620 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000104901s
	I0318 13:10:57.352411    2404 command_runner.go:130] > [INFO] 10.244.1.2:60349 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000189802s
	I0318 13:10:57.352480    2404 command_runner.go:130] > [INFO] 10.244.1.2:44081 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000089501s
	I0318 13:10:57.352480    2404 command_runner.go:130] > [INFO] 10.244.0.3:52580 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000182502s
	I0318 13:10:57.352480    2404 command_runner.go:130] > [INFO] 10.244.0.3:60982 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.0000727s
	I0318 13:10:57.352480    2404 command_runner.go:130] > [INFO] 10.244.0.3:53685 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000081s
	I0318 13:10:57.352480    2404 command_runner.go:130] > [INFO] 10.244.0.3:38117 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000127701s
	I0318 13:10:57.352559    2404 command_runner.go:130] > [INFO] 10.244.0.3:38455 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000117101s
	I0318 13:10:57.352559    2404 command_runner.go:130] > [INFO] 10.244.0.3:50629 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000121702s
	I0318 13:10:57.352559    2404 command_runner.go:130] > [INFO] 10.244.0.3:33301 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0000487s
	I0318 13:10:57.352559    2404 command_runner.go:130] > [INFO] 10.244.0.3:38091 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000138402s
	I0318 13:10:57.352559    2404 command_runner.go:130] > [INFO] 10.244.1.2:43364 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000192902s
	I0318 13:10:57.352559    2404 command_runner.go:130] > [INFO] 10.244.1.2:42609 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000060701s
	I0318 13:10:57.352559    2404 command_runner.go:130] > [INFO] 10.244.1.2:36443 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000051301s
	I0318 13:10:57.352660    2404 command_runner.go:130] > [INFO] 10.244.1.2:56414 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0000526s
	I0318 13:10:57.352660    2404 command_runner.go:130] > [INFO] 10.244.0.3:50774 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000137201s
	I0318 13:10:57.352701    2404 command_runner.go:130] > [INFO] 10.244.0.3:43237 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000196902s
	I0318 13:10:57.352728    2404 command_runner.go:130] > [INFO] 10.244.0.3:38831 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000059901s
	I0318 13:10:57.352770    2404 command_runner.go:130] > [INFO] 10.244.0.3:56163 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000122801s
	I0318 13:10:57.352770    2404 command_runner.go:130] > [INFO] 10.244.1.2:58305 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000209602s
	I0318 13:10:57.352817    2404 command_runner.go:130] > [INFO] 10.244.1.2:58291 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000151202s
	I0318 13:10:57.352839    2404 command_runner.go:130] > [INFO] 10.244.1.2:33227 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000184302s
	I0318 13:10:57.352871    2404 command_runner.go:130] > [INFO] 10.244.1.2:58179 - 5 "PTR IN 1.128.30.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000152102s
	I0318 13:10:57.352871    2404 command_runner.go:130] > [INFO] 10.244.0.3:46943 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000104101s
	I0318 13:10:57.352903    2404 command_runner.go:130] > [INFO] 10.244.0.3:58018 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000107001s
	I0318 13:10:57.352948    2404 command_runner.go:130] > [INFO] 10.244.0.3:35353 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000119601s
	I0318 13:10:57.352948    2404 command_runner.go:130] > [INFO] 10.244.0.3:58763 - 5 "PTR IN 1.128.30.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000075701s
	I0318 13:10:57.353040    2404 command_runner.go:130] > [INFO] SIGTERM: Shutting down servers then terminating
	I0318 13:10:57.353040    2404 command_runner.go:130] > [INFO] plugin/health: Going into lameduck mode for 5s
	I0318 13:10:57.355372    2404 logs.go:123] Gathering logs for coredns [3c3bc988c74c] ...
	I0318 13:10:57.355372    2404 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c3bc988c74c"
	I0318 13:10:57.384552    2404 command_runner.go:130] > .:53
	I0318 13:10:57.384552    2404 command_runner.go:130] > [INFO] plugin/reload: Running configuration SHA512 = fa2b120b3007aba3ef70c07d73473d14986404bfc5683cceeb89e8950d5ace894afca7d43365324975f99d1b2da3d6473c722fe82795d39a4ee8b4b969b7aa71
	I0318 13:10:57.384552    2404 command_runner.go:130] > CoreDNS-1.10.1
	I0318 13:10:57.384552    2404 command_runner.go:130] > linux/amd64, go1.20, 055b2c3
	I0318 13:10:57.384552    2404 command_runner.go:130] > [INFO] 127.0.0.1:47251 - 801 "HINFO IN 2968659138506762197.6766024496084331989. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.051583557s
	I0318 13:10:57.385564    2404 logs.go:123] Gathering logs for kube-scheduler [e4d42739ce0e] ...
	I0318 13:10:57.385564    2404 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4d42739ce0e"
	I0318 13:10:57.410550    2404 command_runner.go:130] ! I0318 12:47:23.427784       1 serving.go:348] Generated self-signed cert in-memory
	I0318 13:10:57.411296    2404 command_runner.go:130] ! W0318 12:47:24.381993       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	I0318 13:10:57.411425    2404 command_runner.go:130] ! W0318 12:47:24.382186       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	I0318 13:10:57.411461    2404 command_runner.go:130] ! W0318 12:47:24.382237       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	I0318 13:10:57.411521    2404 command_runner.go:130] ! W0318 12:47:24.382251       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0318 13:10:57.411773    2404 command_runner.go:130] ! I0318 12:47:24.461225       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.4"
	I0318 13:10:57.411773    2404 command_runner.go:130] ! I0318 12:47:24.461511       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 13:10:57.411773    2404 command_runner.go:130] ! I0318 12:47:24.465946       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0318 13:10:57.411773    2404 command_runner.go:130] ! I0318 12:47:24.466246       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0318 13:10:57.411773    2404 command_runner.go:130] ! I0318 12:47:24.466280       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0318 13:10:57.411773    2404 command_runner.go:130] ! I0318 12:47:24.473793       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0318 13:10:57.411773    2404 command_runner.go:130] ! W0318 12:47:24.487135       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0318 13:10:57.411773    2404 command_runner.go:130] ! E0318 12:47:24.487240       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0318 13:10:57.411773    2404 command_runner.go:130] ! W0318 12:47:24.519325       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0318 13:10:57.411773    2404 command_runner.go:130] ! E0318 12:47:24.519853       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0318 13:10:57.411773    2404 command_runner.go:130] ! W0318 12:47:24.520361       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0318 13:10:57.411773    2404 command_runner.go:130] ! E0318 12:47:24.520484       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0318 13:10:57.411773    2404 command_runner.go:130] ! W0318 12:47:24.520711       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0318 13:10:57.411773    2404 command_runner.go:130] ! E0318 12:47:24.522735       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0318 13:10:57.412311    2404 command_runner.go:130] ! W0318 12:47:24.523312       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0318 13:10:57.412311    2404 command_runner.go:130] ! E0318 12:47:24.523462       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0318 13:10:57.412311    2404 command_runner.go:130] ! W0318 12:47:24.523710       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0318 13:10:57.412570    2404 command_runner.go:130] ! E0318 12:47:24.523900       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0318 13:10:57.412570    2404 command_runner.go:130] ! W0318 12:47:24.524226       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0318 13:10:57.412652    2404 command_runner.go:130] ! E0318 12:47:24.524422       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0318 13:10:57.412685    2404 command_runner.go:130] ! W0318 12:47:24.524710       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0318 13:10:57.412685    2404 command_runner.go:130] ! E0318 12:47:24.525125       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0318 13:10:57.412731    2404 command_runner.go:130] ! W0318 12:47:24.525523       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0318 13:10:57.412769    2404 command_runner.go:130] ! E0318 12:47:24.525746       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0318 13:10:57.412833    2404 command_runner.go:130] ! W0318 12:47:24.526240       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0318 13:10:57.412866    2404 command_runner.go:130] ! E0318 12:47:24.526443       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0318 13:10:57.412866    2404 command_runner.go:130] ! W0318 12:47:24.526703       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0318 13:10:57.412926    2404 command_runner.go:130] ! E0318 12:47:24.526852       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0318 13:10:57.412926    2404 command_runner.go:130] ! W0318 12:47:24.527382       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0318 13:10:57.412926    2404 command_runner.go:130] ! E0318 12:47:24.527873       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0318 13:10:57.412926    2404 command_runner.go:130] ! W0318 12:47:24.528117       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0318 13:10:57.412926    2404 command_runner.go:130] ! E0318 12:47:24.528748       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0318 13:10:57.412926    2404 command_runner.go:130] ! W0318 12:47:24.529179       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0318 13:10:57.412926    2404 command_runner.go:130] ! E0318 12:47:24.529832       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0318 13:10:57.412926    2404 command_runner.go:130] ! W0318 12:47:24.530406       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0318 13:10:57.412926    2404 command_runner.go:130] ! E0318 12:47:24.532696       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0318 13:10:57.412926    2404 command_runner.go:130] ! W0318 12:47:25.371082       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0318 13:10:57.412926    2404 command_runner.go:130] ! E0318 12:47:25.371130       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0318 13:10:57.412926    2404 command_runner.go:130] ! W0318 12:47:25.374605       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0318 13:10:57.412926    2404 command_runner.go:130] ! E0318 12:47:25.374678       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0318 13:10:57.412926    2404 command_runner.go:130] ! W0318 12:47:25.400777       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0318 13:10:57.412926    2404 command_runner.go:130] ! E0318 12:47:25.400820       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0318 13:10:57.412926    2404 command_runner.go:130] ! W0318 12:47:25.434442       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0318 13:10:57.413458    2404 command_runner.go:130] ! E0318 12:47:25.434526       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0318 13:10:57.413458    2404 command_runner.go:130] ! W0318 12:47:25.456878       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0318 13:10:57.413458    2404 command_runner.go:130] ! E0318 12:47:25.457121       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0318 13:10:57.413678    2404 command_runner.go:130] ! W0318 12:47:25.744652       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0318 13:10:57.413678    2404 command_runner.go:130] ! E0318 12:47:25.744733       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0318 13:10:57.413753    2404 command_runner.go:130] ! W0318 12:47:25.777073       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0318 13:10:57.413836    2404 command_runner.go:130] ! E0318 12:47:25.777145       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0318 13:10:57.413892    2404 command_runner.go:130] ! W0318 12:47:25.850949       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0318 13:10:57.413952    2404 command_runner.go:130] ! E0318 12:47:25.850985       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0318 13:10:57.413952    2404 command_runner.go:130] ! W0318 12:47:25.876908       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0318 13:10:57.413952    2404 command_runner.go:130] ! E0318 12:47:25.877170       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0318 13:10:57.413952    2404 command_runner.go:130] ! W0318 12:47:25.892072       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0318 13:10:57.413952    2404 command_runner.go:130] ! E0318 12:47:25.892099       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0318 13:10:57.413952    2404 command_runner.go:130] ! W0318 12:47:25.988864       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0318 13:10:57.413952    2404 command_runner.go:130] ! E0318 12:47:25.988912       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0318 13:10:57.413952    2404 command_runner.go:130] ! W0318 12:47:26.044749       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0318 13:10:57.413952    2404 command_runner.go:130] ! E0318 12:47:26.044834       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0318 13:10:57.413952    2404 command_runner.go:130] ! W0318 12:47:26.067659       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0318 13:10:57.413952    2404 command_runner.go:130] ! E0318 12:47:26.068250       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0318 13:10:57.413952    2404 command_runner.go:130] ! I0318 12:47:28.178584       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0318 13:10:57.413952    2404 command_runner.go:130] ! I0318 13:07:24.107367       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0318 13:10:57.414493    2404 command_runner.go:130] ! I0318 13:07:24.107975       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0318 13:10:57.414493    2404 command_runner.go:130] ! E0318 13:07:24.108193       1 run.go:74] "command failed" err="finished without leader elect"
	I0318 13:10:57.424509    2404 logs.go:123] Gathering logs for kube-proxy [9335855aab63] ...
	I0318 13:10:57.424509    2404 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9335855aab63"
	I0318 13:10:57.451123    2404 command_runner.go:130] ! I0318 12:47:42.888603       1 server_others.go:69] "Using iptables proxy"
	I0318 13:10:57.451987    2404 command_runner.go:130] ! I0318 12:47:42.909658       1 node.go:141] Successfully retrieved node IP: 172.30.129.141
	I0318 13:10:57.451987    2404 command_runner.go:130] ! I0318 12:47:42.965774       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0318 13:10:57.451987    2404 command_runner.go:130] ! I0318 12:47:42.965824       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0318 13:10:57.452077    2404 command_runner.go:130] ! I0318 12:47:42.983172       1 server_others.go:152] "Using iptables Proxier"
	I0318 13:10:57.452149    2404 command_runner.go:130] ! I0318 12:47:42.983221       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0318 13:10:57.452149    2404 command_runner.go:130] ! I0318 12:47:42.983471       1 server.go:846] "Version info" version="v1.28.4"
	I0318 13:10:57.452149    2404 command_runner.go:130] ! I0318 12:47:42.983484       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 13:10:57.452149    2404 command_runner.go:130] ! I0318 12:47:42.987719       1 config.go:188] "Starting service config controller"
	I0318 13:10:57.452149    2404 command_runner.go:130] ! I0318 12:47:42.987733       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0318 13:10:57.452245    2404 command_runner.go:130] ! I0318 12:47:42.987775       1 config.go:97] "Starting endpoint slice config controller"
	I0318 13:10:57.452245    2404 command_runner.go:130] ! I0318 12:47:42.987781       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0318 13:10:57.452519    2404 command_runner.go:130] ! I0318 12:47:42.988298       1 config.go:315] "Starting node config controller"
	I0318 13:10:57.452519    2404 command_runner.go:130] ! I0318 12:47:42.988306       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0318 13:10:57.452519    2404 command_runner.go:130] ! I0318 12:47:43.088485       1 shared_informer.go:318] Caches are synced for service config
	I0318 13:10:57.452519    2404 command_runner.go:130] ! I0318 12:47:43.088594       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0318 13:10:57.452519    2404 command_runner.go:130] ! I0318 12:47:43.088517       1 shared_informer.go:318] Caches are synced for node config
	I0318 13:10:57.454115    2404 logs.go:123] Gathering logs for kube-controller-manager [4ad6784a187d] ...
	I0318 13:10:57.454115    2404 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ad6784a187d"
	I0318 13:10:57.480580    2404 command_runner.go:130] ! I0318 13:09:46.053304       1 serving.go:348] Generated self-signed cert in-memory
	I0318 13:10:57.480580    2404 command_runner.go:130] ! I0318 13:09:46.598188       1 controllermanager.go:189] "Starting" version="v1.28.4"
	I0318 13:10:57.480642    2404 command_runner.go:130] ! I0318 13:09:46.598275       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 13:10:57.480642    2404 command_runner.go:130] ! I0318 13:09:46.600550       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0318 13:10:57.480642    2404 command_runner.go:130] ! I0318 13:09:46.600856       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0318 13:10:57.480642    2404 command_runner.go:130] ! I0318 13:09:46.601228       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0318 13:10:57.480642    2404 command_runner.go:130] ! I0318 13:09:46.601416       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0318 13:10:57.480642    2404 command_runner.go:130] ! I0318 13:09:50.365580       1 shared_informer.go:311] Waiting for caches to sync for tokens
	I0318 13:10:57.480642    2404 command_runner.go:130] ! I0318 13:09:50.380467       1 controllermanager.go:642] "Started controller" controller="certificatesigningrequest-approving-controller"
	I0318 13:10:57.480642    2404 command_runner.go:130] ! I0318 13:09:50.380609       1 certificate_controller.go:115] "Starting certificate controller" name="csrapproving"
	I0318 13:10:57.480642    2404 command_runner.go:130] ! I0318 13:09:50.380622       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrapproving
	I0318 13:10:57.480642    2404 command_runner.go:130] ! I0318 13:09:50.396606       1 controllermanager.go:642] "Started controller" controller="certificatesigningrequest-cleaner-controller"
	I0318 13:10:57.480642    2404 command_runner.go:130] ! I0318 13:09:50.396766       1 cleaner.go:83] "Starting CSR cleaner controller"
	I0318 13:10:57.480642    2404 command_runner.go:130] ! I0318 13:09:50.466364       1 shared_informer.go:318] Caches are synced for tokens
	I0318 13:10:57.480642    2404 command_runner.go:130] ! I0318 13:10:00.425018       1 range_allocator.go:111] "No Secondary Service CIDR provided. Skipping filtering out secondary service addresses"
	I0318 13:10:57.480642    2404 command_runner.go:130] ! I0318 13:10:00.425185       1 controllermanager.go:642] "Started controller" controller="node-ipam-controller"
	I0318 13:10:57.480642    2404 command_runner.go:130] ! I0318 13:10:00.425608       1 node_ipam_controller.go:162] "Starting ipam controller"
	I0318 13:10:57.480642    2404 command_runner.go:130] ! I0318 13:10:00.425649       1 shared_informer.go:311] Waiting for caches to sync for node
	I0318 13:10:57.480642    2404 command_runner.go:130] ! I0318 13:10:00.429368       1 controllermanager.go:642] "Started controller" controller="root-ca-certificate-publisher-controller"
	I0318 13:10:57.480642    2404 command_runner.go:130] ! I0318 13:10:00.429570       1 publisher.go:102] "Starting root CA cert publisher controller"
	I0318 13:10:57.480642    2404 command_runner.go:130] ! I0318 13:10:00.429653       1 shared_informer.go:311] Waiting for caches to sync for crt configmap
	I0318 13:10:57.480642    2404 command_runner.go:130] ! I0318 13:10:00.432615       1 controllermanager.go:642] "Started controller" controller="endpointslice-mirroring-controller"
	I0318 13:10:57.480642    2404 command_runner.go:130] ! I0318 13:10:00.435149       1 endpointslicemirroring_controller.go:223] "Starting EndpointSliceMirroring controller"
	I0318 13:10:57.480642    2404 command_runner.go:130] ! I0318 13:10:00.435476       1 shared_informer.go:311] Waiting for caches to sync for endpoint_slice_mirroring
	I0318 13:10:57.480642    2404 command_runner.go:130] ! I0318 13:10:00.435957       1 controllermanager.go:642] "Started controller" controller="ttl-controller"
	I0318 13:10:57.480642    2404 command_runner.go:130] ! I0318 13:10:00.436324       1 ttl_controller.go:124] "Starting TTL controller"
	I0318 13:10:57.480642    2404 command_runner.go:130] ! I0318 13:10:00.436534       1 shared_informer.go:311] Waiting for caches to sync for TTL
	I0318 13:10:57.480642    2404 command_runner.go:130] ! E0318 13:10:00.440226       1 core.go:92] "Failed to start service controller" err="WARNING: no cloud provider provided, services of type LoadBalancer will fail"
	I0318 13:10:57.480642    2404 command_runner.go:130] ! I0318 13:10:00.440586       1 controllermanager.go:620] "Warning: skipping controller" controller="service-lb-controller"
	I0318 13:10:57.480642    2404 command_runner.go:130] ! E0318 13:10:00.443615       1 core.go:213] "Failed to start cloud node lifecycle controller" err="no cloud provider provided"
	I0318 13:10:57.480642    2404 command_runner.go:130] ! I0318 13:10:00.443912       1 controllermanager.go:620] "Warning: skipping controller" controller="cloud-node-lifecycle-controller"
	I0318 13:10:57.481205    2404 command_runner.go:130] ! I0318 13:10:00.446716       1 controllermanager.go:642] "Started controller" controller="ttl-after-finished-controller"
	I0318 13:10:57.481205    2404 command_runner.go:130] ! I0318 13:10:00.446764       1 ttlafterfinished_controller.go:109] "Starting TTL after finished controller"
	I0318 13:10:57.481205    2404 command_runner.go:130] ! I0318 13:10:00.447388       1 shared_informer.go:311] Waiting for caches to sync for TTL after finished
	I0318 13:10:57.481282    2404 command_runner.go:130] ! I0318 13:10:00.450136       1 controllermanager.go:642] "Started controller" controller="replicationcontroller-controller"
	I0318 13:10:57.481282    2404 command_runner.go:130] ! I0318 13:10:00.450514       1 replica_set.go:214] "Starting controller" name="replicationcontroller"
	I0318 13:10:57.481282    2404 command_runner.go:130] ! I0318 13:10:00.450816       1 shared_informer.go:311] Waiting for caches to sync for ReplicationController
	I0318 13:10:57.481353    2404 command_runner.go:130] ! I0318 13:10:00.482128       1 controllermanager.go:642] "Started controller" controller="namespace-controller"
	I0318 13:10:57.481420    2404 command_runner.go:130] ! I0318 13:10:00.482431       1 namespace_controller.go:197] "Starting namespace controller"
	I0318 13:10:57.481420    2404 command_runner.go:130] ! I0318 13:10:00.482564       1 shared_informer.go:311] Waiting for caches to sync for namespace
	I0318 13:10:57.481420    2404 command_runner.go:130] ! I0318 13:10:00.485138       1 controllermanager.go:642] "Started controller" controller="token-cleaner-controller"
	I0318 13:10:57.481452    2404 command_runner.go:130] ! I0318 13:10:00.485477       1 tokencleaner.go:112] "Starting token cleaner controller"
	I0318 13:10:57.481452    2404 command_runner.go:130] ! I0318 13:10:00.485637       1 shared_informer.go:311] Waiting for caches to sync for token_cleaner
	I0318 13:10:57.481452    2404 command_runner.go:130] ! I0318 13:10:00.485765       1 shared_informer.go:318] Caches are synced for token_cleaner
	I0318 13:10:57.481493    2404 command_runner.go:130] ! I0318 13:10:00.487736       1 controllermanager.go:642] "Started controller" controller="pod-garbage-collector-controller"
	I0318 13:10:57.481493    2404 command_runner.go:130] ! I0318 13:10:00.488836       1 gc_controller.go:101] "Starting GC controller"
	I0318 13:10:57.481525    2404 command_runner.go:130] ! I0318 13:10:00.489018       1 shared_informer.go:311] Waiting for caches to sync for GC
	I0318 13:10:57.481525    2404 command_runner.go:130] ! I0318 13:10:00.490586       1 controllermanager.go:642] "Started controller" controller="cronjob-controller"
	I0318 13:10:57.481525    2404 command_runner.go:130] ! I0318 13:10:00.491164       1 cronjob_controllerv2.go:139] "Starting cronjob controller v2"
	I0318 13:10:57.481525    2404 command_runner.go:130] ! I0318 13:10:00.491311       1 shared_informer.go:311] Waiting for caches to sync for cronjob
	I0318 13:10:57.481576    2404 command_runner.go:130] ! I0318 13:10:00.494562       1 controllermanager.go:642] "Started controller" controller="persistentvolume-binder-controller"
	I0318 13:10:57.481576    2404 command_runner.go:130] ! I0318 13:10:00.495002       1 pv_controller_base.go:319] "Starting persistent volume controller"
	I0318 13:10:57.481576    2404 command_runner.go:130] ! I0318 13:10:00.495133       1 shared_informer.go:311] Waiting for caches to sync for persistent volume
	I0318 13:10:57.481627    2404 command_runner.go:130] ! I0318 13:10:00.497694       1 controllermanager.go:642] "Started controller" controller="persistentvolume-expander-controller"
	I0318 13:10:57.481627    2404 command_runner.go:130] ! I0318 13:10:00.497986       1 expand_controller.go:328] "Starting expand controller"
	I0318 13:10:57.481627    2404 command_runner.go:130] ! I0318 13:10:00.498025       1 shared_informer.go:311] Waiting for caches to sync for expand
	I0318 13:10:57.481627    2404 command_runner.go:130] ! I0318 13:10:00.500933       1 controllermanager.go:642] "Started controller" controller="clusterrole-aggregation-controller"
	I0318 13:10:57.481682    2404 command_runner.go:130] ! I0318 13:10:00.502880       1 clusterroleaggregation_controller.go:189] "Starting ClusterRoleAggregator controller"
	I0318 13:10:57.481682    2404 command_runner.go:130] ! I0318 13:10:00.503102       1 shared_informer.go:311] Waiting for caches to sync for ClusterRoleAggregator
	I0318 13:10:57.481682    2404 command_runner.go:130] ! I0318 13:10:00.506760       1 controllermanager.go:642] "Started controller" controller="disruption-controller"
	I0318 13:10:57.481682    2404 command_runner.go:130] ! I0318 13:10:00.507227       1 disruption.go:433] "Sending events to api server."
	I0318 13:10:57.481737    2404 command_runner.go:130] ! I0318 13:10:00.507302       1 disruption.go:444] "Starting disruption controller"
	I0318 13:10:57.481737    2404 command_runner.go:130] ! I0318 13:10:00.507366       1 shared_informer.go:311] Waiting for caches to sync for disruption
	I0318 13:10:57.481737    2404 command_runner.go:130] ! I0318 13:10:00.509815       1 controllermanager.go:642] "Started controller" controller="deployment-controller"
	I0318 13:10:57.481737    2404 command_runner.go:130] ! I0318 13:10:00.510402       1 deployment_controller.go:168] "Starting controller" controller="deployment"
	I0318 13:10:57.481793    2404 command_runner.go:130] ! I0318 13:10:00.510478       1 shared_informer.go:311] Waiting for caches to sync for deployment
	I0318 13:10:57.481793    2404 command_runner.go:130] ! I0318 13:10:00.514582       1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-kubelet-serving"
	I0318 13:10:57.481793    2404 command_runner.go:130] ! I0318 13:10:00.514842       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I0318 13:10:57.481843    2404 command_runner.go:130] ! I0318 13:10:00.514832       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0318 13:10:57.481843    2404 command_runner.go:130] ! I0318 13:10:00.517859       1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-kubelet-client"
	I0318 13:10:57.481843    2404 command_runner.go:130] ! I0318 13:10:00.518134       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I0318 13:10:57.481898    2404 command_runner.go:130] ! I0318 13:10:00.518434       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0318 13:10:57.481898    2404 command_runner.go:130] ! I0318 13:10:00.519400       1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-kube-apiserver-client"
	I0318 13:10:57.481898    2404 command_runner.go:130] ! I0318 13:10:00.519576       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I0318 13:10:57.481947    2404 command_runner.go:130] ! I0318 13:10:00.519729       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0318 13:10:57.481997    2404 command_runner.go:130] ! I0318 13:10:00.519883       1 controllermanager.go:642] "Started controller" controller="certificatesigningrequest-signing-controller"
	I0318 13:10:57.481997    2404 command_runner.go:130] ! I0318 13:10:00.519902       1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-legacy-unknown"
	I0318 13:10:57.482047    2404 command_runner.go:130] ! I0318 13:10:00.520909       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0318 13:10:57.482047    2404 command_runner.go:130] ! I0318 13:10:00.519914       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0318 13:10:57.482047    2404 command_runner.go:130] ! I0318 13:10:00.524690       1 controllermanager.go:642] "Started controller" controller="persistentvolume-attach-detach-controller"
	I0318 13:10:57.482102    2404 command_runner.go:130] ! I0318 13:10:00.524967       1 attach_detach_controller.go:337] "Starting attach detach controller"
	I0318 13:10:57.482102    2404 command_runner.go:130] ! I0318 13:10:00.525267       1 shared_informer.go:311] Waiting for caches to sync for attach detach
	I0318 13:10:57.482102    2404 command_runner.go:130] ! I0318 13:10:00.528248       1 controllermanager.go:642] "Started controller" controller="serviceaccount-controller"
	I0318 13:10:57.482152    2404 command_runner.go:130] ! I0318 13:10:00.528509       1 serviceaccounts_controller.go:111] "Starting service account controller"
	I0318 13:10:57.482152    2404 command_runner.go:130] ! I0318 13:10:00.528721       1 shared_informer.go:311] Waiting for caches to sync for service account
	I0318 13:10:57.482152    2404 command_runner.go:130] ! I0318 13:10:00.532254       1 controllermanager.go:642] "Started controller" controller="daemonset-controller"
	I0318 13:10:57.482152    2404 command_runner.go:130] ! I0318 13:10:00.532687       1 daemon_controller.go:291] "Starting daemon sets controller"
	I0318 13:10:57.482206    2404 command_runner.go:130] ! I0318 13:10:00.532717       1 shared_informer.go:311] Waiting for caches to sync for daemon sets
	I0318 13:10:57.482206    2404 command_runner.go:130] ! I0318 13:10:00.544900       1 controllermanager.go:642] "Started controller" controller="horizontal-pod-autoscaler-controller"
	I0318 13:10:57.482206    2404 command_runner.go:130] ! I0318 13:10:00.545135       1 horizontal.go:200] "Starting HPA controller"
	I0318 13:10:57.482206    2404 command_runner.go:130] ! I0318 13:10:00.545195       1 shared_informer.go:311] Waiting for caches to sync for HPA
	I0318 13:10:57.482206    2404 command_runner.go:130] ! I0318 13:10:00.547641       1 controllermanager.go:642] "Started controller" controller="bootstrap-signer-controller"
	I0318 13:10:57.482280    2404 command_runner.go:130] ! I0318 13:10:00.548078       1 shared_informer.go:311] Waiting for caches to sync for bootstrap_signer
	I0318 13:10:57.482280    2404 command_runner.go:130] ! I0318 13:10:00.550784       1 node_lifecycle_controller.go:431] "Controller will reconcile labels"
	I0318 13:10:57.482280    2404 command_runner.go:130] ! I0318 13:10:00.551368       1 controllermanager.go:642] "Started controller" controller="node-lifecycle-controller"
	I0318 13:10:57.482335    2404 command_runner.go:130] ! I0318 13:10:00.551557       1 core.go:228] "Warning: configure-cloud-routes is set, but no cloud provider specified. Will not configure cloud provider routes."
	I0318 13:10:57.482335    2404 command_runner.go:130] ! I0318 13:10:00.551931       1 controllermanager.go:620] "Warning: skipping controller" controller="node-route-controller"
	I0318 13:10:57.482335    2404 command_runner.go:130] ! I0318 13:10:00.551452       1 node_lifecycle_controller.go:465] "Sending events to api server"
	I0318 13:10:57.482410    2404 command_runner.go:130] ! I0318 13:10:00.553190       1 node_lifecycle_controller.go:476] "Starting node controller"
	I0318 13:10:57.482410    2404 command_runner.go:130] ! I0318 13:10:00.553856       1 shared_informer.go:311] Waiting for caches to sync for taint
	I0318 13:10:57.482410    2404 command_runner.go:130] ! I0318 13:10:00.554970       1 controllermanager.go:642] "Started controller" controller="persistentvolumeclaim-protection-controller"
	I0318 13:10:57.482410    2404 command_runner.go:130] ! I0318 13:10:00.555558       1 pvc_protection_controller.go:102] "Starting PVC protection controller"
	I0318 13:10:57.482468    2404 command_runner.go:130] ! I0318 13:10:00.555718       1 shared_informer.go:311] Waiting for caches to sync for PVC protection
	I0318 13:10:57.482468    2404 command_runner.go:130] ! I0318 13:10:00.558545       1 controllermanager.go:642] "Started controller" controller="endpointslice-controller"
	I0318 13:10:57.482468    2404 command_runner.go:130] ! I0318 13:10:00.558805       1 endpointslice_controller.go:264] "Starting endpoint slice controller"
	I0318 13:10:57.482523    2404 command_runner.go:130] ! I0318 13:10:00.558956       1 shared_informer.go:311] Waiting for caches to sync for endpoint_slice
	I0318 13:10:57.482523    2404 command_runner.go:130] ! W0318 13:10:00.765746       1 shared_informer.go:593] resyncPeriod 13h51m37.636447347s is smaller than resyncCheckPeriod 22h45m34.293132222s and the informer has already started. Changing it to 22h45m34.293132222s
	I0318 13:10:57.482561    2404 command_runner.go:130] ! I0318 13:10:00.765905       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="serviceaccounts"
	I0318 13:10:57.482606    2404 command_runner.go:130] ! I0318 13:10:00.766015       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="cronjobs.batch"
	I0318 13:10:57.482606    2404 command_runner.go:130] ! I0318 13:10:00.766141       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="daemonsets.apps"
	I0318 13:10:57.482645    2404 command_runner.go:130] ! I0318 13:10:00.766231       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="jobs.batch"
	I0318 13:10:57.482645    2404 command_runner.go:130] ! I0318 13:10:00.767946       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="networkpolicies.networking.k8s.io"
	I0318 13:10:57.482689    2404 command_runner.go:130] ! I0318 13:10:00.768138       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="poddisruptionbudgets.policy"
	I0318 13:10:57.482689    2404 command_runner.go:130] ! I0318 13:10:00.768175       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="csistoragecapacities.storage.k8s.io"
	I0318 13:10:57.482689    2404 command_runner.go:130] ! I0318 13:10:00.768271       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="limitranges"
	I0318 13:10:57.482753    2404 command_runner.go:130] ! I0318 13:10:00.768411       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="controllerrevisions.apps"
	I0318 13:10:57.482753    2404 command_runner.go:130] ! I0318 13:10:00.768529       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="horizontalpodautoscalers.autoscaling"
	I0318 13:10:57.482811    2404 command_runner.go:130] ! I0318 13:10:00.768565       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="ingresses.networking.k8s.io"
	I0318 13:10:57.482834    2404 command_runner.go:130] ! I0318 13:10:00.768633       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="endpointslices.discovery.k8s.io"
	I0318 13:10:57.482860    2404 command_runner.go:130] ! W0318 13:10:00.768841       1 shared_informer.go:593] resyncPeriod 17h39m7.901162259s is smaller than resyncCheckPeriod 22h45m34.293132222s and the informer has already started. Changing it to 22h45m34.293132222s
	I0318 13:10:57.482860    2404 command_runner.go:130] ! I0318 13:10:00.769020       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="podtemplates"
	I0318 13:10:57.482860    2404 command_runner.go:130] ! I0318 13:10:00.769077       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="replicasets.apps"
	I0318 13:10:57.482860    2404 command_runner.go:130] ! I0318 13:10:00.769115       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="deployments.apps"
	I0318 13:10:57.482860    2404 command_runner.go:130] ! I0318 13:10:00.769206       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="rolebindings.rbac.authorization.k8s.io"
	I0318 13:10:57.482860    2404 command_runner.go:130] ! I0318 13:10:00.769280       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="leases.coordination.k8s.io"
	I0318 13:10:57.482860    2404 command_runner.go:130] ! I0318 13:10:00.769427       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="endpoints"
	I0318 13:10:57.482860    2404 command_runner.go:130] ! I0318 13:10:00.769509       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="statefulsets.apps"
	I0318 13:10:57.482860    2404 command_runner.go:130] ! I0318 13:10:00.769668       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="roles.rbac.authorization.k8s.io"
	I0318 13:10:57.482860    2404 command_runner.go:130] ! I0318 13:10:00.769816       1 resource_quota_controller.go:294] "Starting resource quota controller"
	I0318 13:10:57.482860    2404 command_runner.go:130] ! I0318 13:10:00.769832       1 shared_informer.go:311] Waiting for caches to sync for resource quota
	I0318 13:10:57.482860    2404 command_runner.go:130] ! I0318 13:10:00.769855       1 resource_quota_monitor.go:305] "QuotaMonitor running"
	I0318 13:10:57.482860    2404 command_runner.go:130] ! I0318 13:10:00.769714       1 controllermanager.go:642] "Started controller" controller="resourcequota-controller"
	I0318 13:10:57.482860    2404 command_runner.go:130] ! I0318 13:10:00.906184       1 controllermanager.go:642] "Started controller" controller="garbage-collector-controller"
	I0318 13:10:57.482860    2404 command_runner.go:130] ! I0318 13:10:00.906404       1 garbagecollector.go:155] "Starting controller" controller="garbagecollector"
	I0318 13:10:57.482860    2404 command_runner.go:130] ! I0318 13:10:00.906702       1 shared_informer.go:311] Waiting for caches to sync for garbage collector
	I0318 13:10:57.482860    2404 command_runner.go:130] ! I0318 13:10:00.906740       1 graph_builder.go:294] "Running" component="GraphBuilder"
	I0318 13:10:57.482860    2404 command_runner.go:130] ! I0318 13:10:00.956245       1 controllermanager.go:642] "Started controller" controller="job-controller"
	I0318 13:10:57.482860    2404 command_runner.go:130] ! I0318 13:10:00.956457       1 job_controller.go:226] "Starting job controller"
	I0318 13:10:57.482860    2404 command_runner.go:130] ! I0318 13:10:00.956765       1 shared_informer.go:311] Waiting for caches to sync for job
	I0318 13:10:57.482860    2404 command_runner.go:130] ! I0318 13:10:01.056144       1 controllermanager.go:642] "Started controller" controller="persistentvolume-protection-controller"
	I0318 13:10:57.482860    2404 command_runner.go:130] ! I0318 13:10:01.056251       1 pv_protection_controller.go:78] "Starting PV protection controller"
	I0318 13:10:57.482860    2404 command_runner.go:130] ! I0318 13:10:01.056576       1 shared_informer.go:311] Waiting for caches to sync for PV protection
	I0318 13:10:57.482860    2404 command_runner.go:130] ! I0318 13:10:01.156303       1 controllermanager.go:642] "Started controller" controller="ephemeral-volume-controller"
	I0318 13:10:57.482860    2404 command_runner.go:130] ! I0318 13:10:01.156762       1 controller.go:169] "Starting ephemeral volume controller"
	I0318 13:10:57.482860    2404 command_runner.go:130] ! I0318 13:10:01.156852       1 shared_informer.go:311] Waiting for caches to sync for ephemeral
	I0318 13:10:57.482860    2404 command_runner.go:130] ! I0318 13:10:01.205282       1 controllermanager.go:642] "Started controller" controller="endpoints-controller"
	I0318 13:10:57.482860    2404 command_runner.go:130] ! I0318 13:10:01.205353       1 endpoints_controller.go:174] "Starting endpoint controller"
	I0318 13:10:57.483398    2404 command_runner.go:130] ! I0318 13:10:01.205368       1 shared_informer.go:311] Waiting for caches to sync for endpoint
	I0318 13:10:57.483398    2404 command_runner.go:130] ! I0318 13:10:01.256513       1 controllermanager.go:642] "Started controller" controller="statefulset-controller"
	I0318 13:10:57.483398    2404 command_runner.go:130] ! I0318 13:10:01.256828       1 stateful_set.go:161] "Starting stateful set controller"
	I0318 13:10:57.483466    2404 command_runner.go:130] ! I0318 13:10:01.256867       1 shared_informer.go:311] Waiting for caches to sync for stateful set
	I0318 13:10:57.483466    2404 command_runner.go:130] ! I0318 13:10:01.306581       1 controllermanager.go:642] "Started controller" controller="replicaset-controller"
	I0318 13:10:57.483466    2404 command_runner.go:130] ! I0318 13:10:01.306969       1 replica_set.go:214] "Starting controller" name="replicaset"
	I0318 13:10:57.483511    2404 command_runner.go:130] ! I0318 13:10:01.307156       1 shared_informer.go:311] Waiting for caches to sync for ReplicaSet
	I0318 13:10:57.483511    2404 command_runner.go:130] ! I0318 13:10:01.317298       1 shared_informer.go:311] Waiting for caches to sync for resource quota
	I0318 13:10:57.483511    2404 command_runner.go:130] ! I0318 13:10:01.349149       1 shared_informer.go:318] Caches are synced for TTL after finished
	I0318 13:10:57.483550    2404 command_runner.go:130] ! I0318 13:10:01.369957       1 shared_informer.go:311] Waiting for caches to sync for garbage collector
	I0318 13:10:57.483550    2404 command_runner.go:130] ! I0318 13:10:01.371629       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-894400\" does not exist"
	I0318 13:10:57.483550    2404 command_runner.go:130] ! I0318 13:10:01.371840       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-894400-m02\" does not exist"
	I0318 13:10:57.483648    2404 command_runner.go:130] ! I0318 13:10:01.372556       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-894400-m02"
	I0318 13:10:57.483648    2404 command_runner.go:130] ! I0318 13:10:01.372879       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-894400-m03\" does not exist"
	I0318 13:10:57.483648    2404 command_runner.go:130] ! I0318 13:10:01.373004       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-894400-m02"
	I0318 13:10:57.483727    2404 command_runner.go:130] ! I0318 13:10:01.380690       1 shared_informer.go:318] Caches are synced for certificate-csrapproving
	I0318 13:10:57.483727    2404 command_runner.go:130] ! I0318 13:10:01.383858       1 shared_informer.go:318] Caches are synced for namespace
	I0318 13:10:57.483768    2404 command_runner.go:130] ! I0318 13:10:01.390400       1 shared_informer.go:318] Caches are synced for GC
	I0318 13:10:57.483768    2404 command_runner.go:130] ! I0318 13:10:01.391669       1 shared_informer.go:318] Caches are synced for cronjob
	I0318 13:10:57.483768    2404 command_runner.go:130] ! I0318 13:10:01.398208       1 shared_informer.go:318] Caches are synced for expand
	I0318 13:10:57.483768    2404 command_runner.go:130] ! I0318 13:10:01.403691       1 shared_informer.go:318] Caches are synced for ClusterRoleAggregator
	I0318 13:10:57.483829    2404 command_runner.go:130] ! I0318 13:10:01.406154       1 shared_informer.go:318] Caches are synced for endpoint
	I0318 13:10:57.483829    2404 command_runner.go:130] ! I0318 13:10:01.407387       1 shared_informer.go:318] Caches are synced for ReplicaSet
	I0318 13:10:57.483859    2404 command_runner.go:130] ! I0318 13:10:01.407463       1 shared_informer.go:318] Caches are synced for disruption
	I0318 13:10:57.483859    2404 command_runner.go:130] ! I0318 13:10:01.411470       1 shared_informer.go:318] Caches are synced for deployment
	I0318 13:10:57.483859    2404 command_runner.go:130] ! I0318 13:10:01.415591       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-serving
	I0318 13:10:57.483859    2404 command_runner.go:130] ! I0318 13:10:01.419985       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0318 13:10:57.483859    2404 command_runner.go:130] ! I0318 13:10:01.420028       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-client
	I0318 13:10:57.483859    2404 command_runner.go:130] ! I0318 13:10:01.422567       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-legacy-unknown
	I0318 13:10:57.483859    2404 command_runner.go:130] ! I0318 13:10:01.426386       1 shared_informer.go:318] Caches are synced for node
	I0318 13:10:57.483960    2404 command_runner.go:130] ! I0318 13:10:01.426502       1 range_allocator.go:174] "Sending events to api server"
	I0318 13:10:57.483981    2404 command_runner.go:130] ! I0318 13:10:01.426637       1 range_allocator.go:178] "Starting range CIDR allocator"
	I0318 13:10:57.483981    2404 command_runner.go:130] ! I0318 13:10:01.426705       1 shared_informer.go:311] Waiting for caches to sync for cidrallocator
	I0318 13:10:57.483981    2404 command_runner.go:130] ! I0318 13:10:01.426892       1 shared_informer.go:318] Caches are synced for cidrallocator
	I0318 13:10:57.483981    2404 command_runner.go:130] ! I0318 13:10:01.426546       1 shared_informer.go:318] Caches are synced for attach detach
	I0318 13:10:57.483981    2404 command_runner.go:130] ! I0318 13:10:01.429986       1 shared_informer.go:318] Caches are synced for service account
	I0318 13:10:57.484046    2404 command_runner.go:130] ! I0318 13:10:01.430014       1 shared_informer.go:318] Caches are synced for crt configmap
	I0318 13:10:57.484046    2404 command_runner.go:130] ! I0318 13:10:01.433506       1 shared_informer.go:318] Caches are synced for daemon sets
	I0318 13:10:57.484046    2404 command_runner.go:130] ! I0318 13:10:01.437710       1 shared_informer.go:318] Caches are synced for TTL
	I0318 13:10:57.484100    2404 command_runner.go:130] ! I0318 13:10:01.445429       1 shared_informer.go:318] Caches are synced for HPA
	I0318 13:10:57.484120    2404 command_runner.go:130] ! I0318 13:10:01.448863       1 shared_informer.go:318] Caches are synced for bootstrap_signer
	I0318 13:10:57.484120    2404 command_runner.go:130] ! I0318 13:10:01.451599       1 shared_informer.go:318] Caches are synced for ReplicationController
	I0318 13:10:57.484120    2404 command_runner.go:130] ! I0318 13:10:01.454157       1 shared_informer.go:318] Caches are synced for taint
	I0318 13:10:57.484120    2404 command_runner.go:130] ! I0318 13:10:01.454304       1 node_lifecycle_controller.go:1225] "Initializing eviction metric for zone" zone=""
	I0318 13:10:57.484120    2404 command_runner.go:130] ! I0318 13:10:01.454496       1 taint_manager.go:205] "Starting NoExecuteTaintManager"
	I0318 13:10:57.484120    2404 command_runner.go:130] ! I0318 13:10:01.454532       1 taint_manager.go:210] "Sending events to api server"
	I0318 13:10:57.484120    2404 command_runner.go:130] ! I0318 13:10:01.455374       1 event.go:307] "Event occurred" object="multinode-894400" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-894400 event: Registered Node multinode-894400 in Controller"
	I0318 13:10:57.484120    2404 command_runner.go:130] ! I0318 13:10:01.455390       1 event.go:307] "Event occurred" object="multinode-894400-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-894400-m02 event: Registered Node multinode-894400-m02 in Controller"
	I0318 13:10:57.484120    2404 command_runner.go:130] ! I0318 13:10:01.455400       1 event.go:307] "Event occurred" object="multinode-894400-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-894400-m03 event: Registered Node multinode-894400-m03 in Controller"
	I0318 13:10:57.484120    2404 command_runner.go:130] ! I0318 13:10:01.456700       1 shared_informer.go:318] Caches are synced for PV protection
	I0318 13:10:57.484120    2404 command_runner.go:130] ! I0318 13:10:01.456719       1 shared_informer.go:318] Caches are synced for PVC protection
	I0318 13:10:57.484120    2404 command_runner.go:130] ! I0318 13:10:01.457835       1 shared_informer.go:318] Caches are synced for stateful set
	I0318 13:10:57.484120    2404 command_runner.go:130] ! I0318 13:10:01.457861       1 shared_informer.go:318] Caches are synced for job
	I0318 13:10:57.484120    2404 command_runner.go:130] ! I0318 13:10:01.458132       1 shared_informer.go:318] Caches are synced for ephemeral
	I0318 13:10:57.484120    2404 command_runner.go:130] ! I0318 13:10:01.499926       1 shared_informer.go:318] Caches are synced for persistent volume
	I0318 13:10:57.484120    2404 command_runner.go:130] ! I0318 13:10:01.502022       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-894400"
	I0318 13:10:57.484120    2404 command_runner.go:130] ! I0318 13:10:01.502582       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-894400-m02"
	I0318 13:10:57.484120    2404 command_runner.go:130] ! I0318 13:10:01.502665       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-894400-m03"
	I0318 13:10:57.484120    2404 command_runner.go:130] ! I0318 13:10:01.505439       1 node_lifecycle_controller.go:1071] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I0318 13:10:57.484120    2404 command_runner.go:130] ! I0318 13:10:01.518153       1 shared_informer.go:318] Caches are synced for resource quota
	I0318 13:10:57.484120    2404 command_runner.go:130] ! I0318 13:10:01.524442       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="116.887006ms"
	I0318 13:10:57.484120    2404 command_runner.go:130] ! I0318 13:10:01.526447       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="62.302µs"
	I0318 13:10:57.484120    2404 command_runner.go:130] ! I0318 13:10:01.532190       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="124.57225ms"
	I0318 13:10:57.484120    2404 command_runner.go:130] ! I0318 13:10:01.532535       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="61.501µs"
	I0318 13:10:57.484120    2404 command_runner.go:130] ! I0318 13:10:01.536870       1 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring
	I0318 13:10:57.484120    2404 command_runner.go:130] ! I0318 13:10:01.559571       1 shared_informer.go:318] Caches are synced for endpoint_slice
	I0318 13:10:57.484120    2404 command_runner.go:130] ! I0318 13:10:01.576497       1 shared_informer.go:318] Caches are synced for resource quota
	I0318 13:10:57.484120    2404 command_runner.go:130] ! I0318 13:10:01.970420       1 shared_informer.go:318] Caches are synced for garbage collector
	I0318 13:10:57.484120    2404 command_runner.go:130] ! I0318 13:10:02.008120       1 shared_informer.go:318] Caches are synced for garbage collector
	I0318 13:10:57.484120    2404 command_runner.go:130] ! I0318 13:10:02.008146       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0318 13:10:57.484120    2404 command_runner.go:130] ! I0318 13:10:23.798396       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-894400-m02"
	I0318 13:10:57.484120    2404 command_runner.go:130] ! I0318 13:10:26.538088       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68-456tm" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod kube-system/coredns-5dd5756b68-456tm"
	I0318 13:10:57.484656    2404 command_runner.go:130] ! I0318 13:10:26.538124       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6-c2997" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-5b5d89c9d6-c2997"
	I0318 13:10:57.484722    2404 command_runner.go:130] ! I0318 13:10:26.538134       1 event.go:307] "Event occurred" object="kube-system/storage-provisioner" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod kube-system/storage-provisioner"
	I0318 13:10:57.484722    2404 command_runner.go:130] ! I0318 13:10:41.556645       1 event.go:307] "Event occurred" object="multinode-894400-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-894400-m02 status is now: NodeNotReady"
	I0318 13:10:57.484722    2404 command_runner.go:130] ! I0318 13:10:41.569274       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6-8btgf" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0318 13:10:57.484797    2404 command_runner.go:130] ! I0318 13:10:41.592766       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="22.447202ms"
	I0318 13:10:57.484797    2404 command_runner.go:130] ! I0318 13:10:41.593427       1 event.go:307] "Event occurred" object="kube-system/kindnet-k5lpg" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0318 13:10:57.484826    2404 command_runner.go:130] ! I0318 13:10:41.595199       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="39.101µs"
	I0318 13:10:57.484862    2404 command_runner.go:130] ! I0318 13:10:41.617007       1 event.go:307] "Event occurred" object="kube-system/kube-proxy-8bdmn" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0318 13:10:57.484862    2404 command_runner.go:130] ! I0318 13:10:54.102255       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="18.438427ms"
	I0318 13:10:57.484862    2404 command_runner.go:130] ! I0318 13:10:54.102713       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="266.302µs"
	I0318 13:10:57.484930    2404 command_runner.go:130] ! I0318 13:10:54.115993       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="210.701µs"
	I0318 13:10:57.484930    2404 command_runner.go:130] ! I0318 13:10:55.131550       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="19.807636ms"
	I0318 13:10:57.484930    2404 command_runner.go:130] ! I0318 13:10:55.131763       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="44.301µs"
	I0318 13:10:57.498419    2404 logs.go:123] Gathering logs for kubelet ...
	I0318 13:10:57.498419    2404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:10:57.517413    2404 command_runner.go:130] > Mar 18 13:09:39 multinode-894400 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0318 13:10:57.517413    2404 command_runner.go:130] > Mar 18 13:09:39 multinode-894400 kubelet[1399]: I0318 13:09:39.912330    1399 server.go:467] "Kubelet version" kubeletVersion="v1.28.4"
	I0318 13:10:57.517413    2404 command_runner.go:130] > Mar 18 13:09:39 multinode-894400 kubelet[1399]: I0318 13:09:39.913472    1399 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 13:10:57.517413    2404 command_runner.go:130] > Mar 18 13:09:39 multinode-894400 kubelet[1399]: I0318 13:09:39.914280    1399 server.go:895] "Client rotation is on, will bootstrap in background"
	I0318 13:10:57.517413    2404 command_runner.go:130] > Mar 18 13:09:39 multinode-894400 kubelet[1399]: E0318 13:09:39.914469    1399 run.go:74] "command failed" err="failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory"
	I0318 13:10:57.517413    2404 command_runner.go:130] > Mar 18 13:09:39 multinode-894400 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	I0318 13:10:57.517413    2404 command_runner.go:130] > Mar 18 13:09:39 multinode-894400 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	I0318 13:10:57.517413    2404 command_runner.go:130] > Mar 18 13:09:40 multinode-894400 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
	I0318 13:10:57.517413    2404 command_runner.go:130] > Mar 18 13:09:40 multinode-894400 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I0318 13:10:57.517413    2404 command_runner.go:130] > Mar 18 13:09:40 multinode-894400 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0318 13:10:57.517413    2404 command_runner.go:130] > Mar 18 13:09:40 multinode-894400 kubelet[1455]: I0318 13:09:40.661100    1455 server.go:467] "Kubelet version" kubeletVersion="v1.28.4"
	I0318 13:10:57.517413    2404 command_runner.go:130] > Mar 18 13:09:40 multinode-894400 kubelet[1455]: I0318 13:09:40.661586    1455 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 13:10:57.517413    2404 command_runner.go:130] > Mar 18 13:09:40 multinode-894400 kubelet[1455]: I0318 13:09:40.662255    1455 server.go:895] "Client rotation is on, will bootstrap in background"
	I0318 13:10:57.517413    2404 command_runner.go:130] > Mar 18 13:09:40 multinode-894400 kubelet[1455]: E0318 13:09:40.662383    1455 run.go:74] "command failed" err="failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory"
	I0318 13:10:57.517413    2404 command_runner.go:130] > Mar 18 13:09:40 multinode-894400 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	I0318 13:10:57.517413    2404 command_runner.go:130] > Mar 18 13:09:40 multinode-894400 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	I0318 13:10:57.517413    2404 command_runner.go:130] > Mar 18 13:09:40 multinode-894400 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I0318 13:10:57.517413    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0318 13:10:57.517413    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: I0318 13:09:42.774439    1532 server.go:467] "Kubelet version" kubeletVersion="v1.28.4"
	I0318 13:10:57.517413    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: I0318 13:09:42.775083    1532 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 13:10:57.517413    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: I0318 13:09:42.775946    1532 server.go:895] "Client rotation is on, will bootstrap in background"
	I0318 13:10:57.517413    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: I0318 13:09:42.785429    1532 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	I0318 13:10:57.517413    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: I0318 13:09:42.801370    1532 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0318 13:10:57.517413    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: I0318 13:09:42.849790    1532 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
	I0318 13:10:57.517413    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: I0318 13:09:42.851652    1532 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
	I0318 13:10:57.517413    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: I0318 13:09:42.851916    1532 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
	I0318 13:10:57.517413    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: I0318 13:09:42.851957    1532 topology_manager.go:138] "Creating topology manager with none policy"
	I0318 13:10:57.517413    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: I0318 13:09:42.851967    1532 container_manager_linux.go:301] "Creating device plugin manager"
	I0318 13:10:57.517413    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: I0318 13:09:42.853347    1532 state_mem.go:36] "Initialized new in-memory state store"
	I0318 13:10:57.517413    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: I0318 13:09:42.855331    1532 kubelet.go:393] "Attempting to sync node with API server"
	I0318 13:10:57.517413    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: I0318 13:09:42.855456    1532 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests"
	I0318 13:10:57.517413    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: I0318 13:09:42.856520    1532 kubelet.go:309] "Adding apiserver pod source"
	I0318 13:10:57.517413    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: I0318 13:09:42.856554    1532 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
	I0318 13:10:57.518426    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: W0318 13:09:42.859153    1532 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-894400&limit=500&resourceVersion=0": dial tcp 172.30.130.156:8443: connect: connection refused
	I0318 13:10:57.518426    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: E0318 13:09:42.859647    1532 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-894400&limit=500&resourceVersion=0": dial tcp 172.30.130.156:8443: connect: connection refused
	I0318 13:10:57.518426    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: W0318 13:09:42.860993    1532 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.30.130.156:8443: connect: connection refused
	I0318 13:10:57.518426    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: E0318 13:09:42.861168    1532 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.30.130.156:8443: connect: connection refused
	I0318 13:10:57.518426    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: I0318 13:09:42.872782    1532 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="docker" version="25.0.4" apiVersion="v1"
	I0318 13:10:57.518426    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: W0318 13:09:42.875640    1532 probe.go:268] Flexvolume plugin directory at /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
	I0318 13:10:57.518426    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: I0318 13:09:42.876823    1532 server.go:1232] "Started kubelet"
	I0318 13:10:57.518426    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: I0318 13:09:42.878282    1532 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
	I0318 13:10:57.518426    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: I0318 13:09:42.879215    1532 server.go:462] "Adding debug handlers to kubelet server"
	I0318 13:10:57.518426    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: I0318 13:09:42.882881    1532 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10
	I0318 13:10:57.518426    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: I0318 13:09:42.883660    1532 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
	I0318 13:10:57.518426    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: I0318 13:09:42.878365    1532 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
	I0318 13:10:57.518426    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: E0318 13:09:42.886734    1532 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"multinode-894400.17bddddee5b23bca", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"multinode-894400", UID:"multinode-894400", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"multinode-894400"}, FirstTimestamp:time.Date(2024, time.March, 18, 13, 9, 42, 876797898, time.Local), LastTimestamp:time.Date(2024, time.March, 18, 13, 9, 42, 876797898, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"multinode-894400"}': 'Post "https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events": dial tcp 172.30.130.156:8443: connect: connection refused'(may retry after sleeping)
	I0318 13:10:57.518426    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: I0318 13:09:42.886969    1532 volume_manager.go:291] "Starting Kubelet Volume Manager"
	I0318 13:10:57.518426    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: I0318 13:09:42.887086    1532 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
	I0318 13:10:57.518426    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: W0318 13:09:42.907405    1532 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.30.130.156:8443: connect: connection refused
	I0318 13:10:57.518426    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: E0318 13:09:42.907883    1532 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.30.130.156:8443: connect: connection refused
	I0318 13:10:57.518426    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: E0318 13:09:42.910785    1532 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-894400?timeout=10s\": dial tcp 172.30.130.156:8443: connect: connection refused" interval="200ms"
	I0318 13:10:57.518426    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: I0318 13:09:42.959085    1532 reconciler_new.go:29] "Reconciler: start to sync state"
	I0318 13:10:57.518426    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: I0318 13:09:42.981490    1532 cpu_manager.go:214] "Starting CPU manager" policy="none"
	I0318 13:10:57.518426    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: I0318 13:09:42.981531    1532 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
	I0318 13:10:57.518426    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: I0318 13:09:42.981561    1532 state_mem.go:36] "Initialized new in-memory state store"
	I0318 13:10:57.518426    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: I0318 13:09:42.982644    1532 state_mem.go:88] "Updated default CPUSet" cpuSet=""
	I0318 13:10:57.518426    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: I0318 13:09:42.982700    1532 state_mem.go:96] "Updated CPUSet assignments" assignments={}
	I0318 13:10:57.518426    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: I0318 13:09:42.982728    1532 policy_none.go:49] "None policy: Start"
	I0318 13:10:57.518426    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: I0318 13:09:42.989705    1532 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
	I0318 13:10:57.518426    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: I0318 13:09:43.002857    1532 memory_manager.go:169] "Starting memorymanager" policy="None"
	I0318 13:10:57.518426    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: I0318 13:09:43.003620    1532 state_mem.go:35] "Initializing new in-memory state store"
	I0318 13:10:57.518426    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: I0318 13:09:43.004623    1532 state_mem.go:75] "Updated machine memory state"
	I0318 13:10:57.518426    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: I0318 13:09:43.006120    1532 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
	I0318 13:10:57.518426    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: I0318 13:09:43.007397    1532 status_manager.go:217] "Starting to sync pod status with apiserver"
	I0318 13:10:57.518426    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: I0318 13:09:43.008604    1532 kubelet.go:2303] "Starting kubelet main sync loop"
	I0318 13:10:57.518426    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: E0318 13:09:43.008971    1532 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
	I0318 13:10:57.518426    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: I0318 13:09:43.016115    1532 kubelet_node_status.go:70] "Attempting to register node" node="multinode-894400"
	I0318 13:10:57.518426    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: E0318 13:09:43.018685    1532 iptables.go:575] "Could not set up iptables canary" err=<
	I0318 13:10:57.518426    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	I0318 13:10:57.518426    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	I0318 13:10:57.518426    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]:         Perhaps ip6tables or your kernel needs to be upgraded.
	I0318 13:10:57.518426    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	I0318 13:10:57.518426    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: I0318 13:09:43.021241    1532 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
	I0318 13:10:57.519422    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: W0318 13:09:43.022840    1532 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.30.130.156:8443: connect: connection refused
	I0318 13:10:57.519422    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: E0318 13:09:43.022916    1532 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.30.130.156:8443: connect: connection refused
	I0318 13:10:57.519422    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: E0318 13:09:43.022979    1532 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.30.130.156:8443: connect: connection refused" node="multinode-894400"
	I0318 13:10:57.519422    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: I0318 13:09:43.023116    1532 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
	I0318 13:10:57.519422    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: E0318 13:09:43.041923    1532 eviction_manager.go:258] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"multinode-894400\" not found"
	I0318 13:10:57.519422    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: E0318 13:09:43.112352    1532 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-894400?timeout=10s\": dial tcp 172.30.130.156:8443: connect: connection refused" interval="400ms"
	I0318 13:10:57.519422    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: I0318 13:09:43.113553    1532 topology_manager.go:215] "Topology Admit Handler" podUID="1c745e9b917877b1ff3c90ed02e9a79a" podNamespace="kube-system" podName="kube-scheduler-multinode-894400"
	I0318 13:10:57.519422    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: I0318 13:09:43.126661    1532 topology_manager.go:215] "Topology Admit Handler" podUID="6096c2227c4230453f65f86ebdcd0d95" podNamespace="kube-system" podName="kube-apiserver-multinode-894400"
	I0318 13:10:57.519422    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: I0318 13:09:43.137838    1532 topology_manager.go:215] "Topology Admit Handler" podUID="d340aced56ba169ecac1e3ac58ad57fe" podNamespace="kube-system" podName="kube-controller-manager-multinode-894400"
	I0318 13:10:57.519422    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: I0318 13:09:43.154701    1532 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5485f509825d9272a84959cbcfbb4f0187be886867ba7bac76fa00a35e34bdd1"
	I0318 13:10:57.519422    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: I0318 13:09:43.154826    1532 topology_manager.go:215] "Topology Admit Handler" podUID="743a549b698f93b8586a236f83c90556" podNamespace="kube-system" podName="etcd-multinode-894400"
	I0318 13:10:57.519422    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: I0318 13:09:43.171660    1532 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d001e299e996b74f090c73f4e98605ef0d323a96826d23424e724ca8e7fe466a"
	I0318 13:10:57.519422    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: I0318 13:09:43.171681    1532 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="60e9cd749c8f67d0bc24596b26b654cf85a82055f89e14c4a14a4e9342f5fc9f"
	I0318 13:10:57.519422    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: I0318 13:09:43.171704    1532 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="acffce2e73842c3e46177a77ddd5a8d308b51daf062cac439cc487cc863c4226"
	I0318 13:10:57.519422    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: I0318 13:09:43.171714    1532 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="265b39e386cfa82eef9715aba314fbf8a9292776816cf86ed4099004698cb320"
	I0318 13:10:57.519422    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: I0318 13:09:43.171723    1532 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="220884cbf1f5b852987c5a28277a4914502f0623413c284054afa92791494c50"
	I0318 13:10:57.519422    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: I0318 13:09:43.171731    1532 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a47b1fb60692cee0c4ed89ecc511fa046c0873051f7daf026f1c5c6a3dfd7352"
	I0318 13:10:57.519422    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: I0318 13:09:43.172283    1532 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="82710777e700c4f2e71da911834959efc480f8ba2a526049f0f6c238947c5146"
	I0318 13:10:57.519422    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: I0318 13:09:43.186382    1532 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a23c1189be7c39f22c8a59ba41b198519894c9783ecad8dcda79fb8a76f05254"
	I0318 13:10:57.519422    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: I0318 13:09:43.231617    1532 kubelet_node_status.go:70] "Attempting to register node" node="multinode-894400"
	I0318 13:10:57.519422    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: E0318 13:09:43.233479    1532 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.30.130.156:8443: connect: connection refused" node="multinode-894400"
	I0318 13:10:57.519422    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: I0318 13:09:43.267903    1532 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1c745e9b917877b1ff3c90ed02e9a79a-kubeconfig\") pod \"kube-scheduler-multinode-894400\" (UID: \"1c745e9b917877b1ff3c90ed02e9a79a\") " pod="kube-system/kube-scheduler-multinode-894400"
	I0318 13:10:57.519422    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: I0318 13:09:43.268106    1532 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6096c2227c4230453f65f86ebdcd0d95-ca-certs\") pod \"kube-apiserver-multinode-894400\" (UID: \"6096c2227c4230453f65f86ebdcd0d95\") " pod="kube-system/kube-apiserver-multinode-894400"
	I0318 13:10:57.519422    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: I0318 13:09:43.268214    1532 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d340aced56ba169ecac1e3ac58ad57fe-ca-certs\") pod \"kube-controller-manager-multinode-894400\" (UID: \"d340aced56ba169ecac1e3ac58ad57fe\") " pod="kube-system/kube-controller-manager-multinode-894400"
	I0318 13:10:57.519422    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: I0318 13:09:43.268242    1532 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d340aced56ba169ecac1e3ac58ad57fe-kubeconfig\") pod \"kube-controller-manager-multinode-894400\" (UID: \"d340aced56ba169ecac1e3ac58ad57fe\") " pod="kube-system/kube-controller-manager-multinode-894400"
	I0318 13:10:57.519422    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: I0318 13:09:43.268269    1532 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d340aced56ba169ecac1e3ac58ad57fe-usr-share-ca-certificates\") pod \"kube-controller-manager-multinode-894400\" (UID: \"d340aced56ba169ecac1e3ac58ad57fe\") " pod="kube-system/kube-controller-manager-multinode-894400"
	I0318 13:10:57.519422    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: I0318 13:09:43.268295    1532 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/743a549b698f93b8586a236f83c90556-etcd-certs\") pod \"etcd-multinode-894400\" (UID: \"743a549b698f93b8586a236f83c90556\") " pod="kube-system/etcd-multinode-894400"
	I0318 13:10:57.519422    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: I0318 13:09:43.268330    1532 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/743a549b698f93b8586a236f83c90556-etcd-data\") pod \"etcd-multinode-894400\" (UID: \"743a549b698f93b8586a236f83c90556\") " pod="kube-system/etcd-multinode-894400"
	I0318 13:10:57.519422    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: I0318 13:09:43.268361    1532 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6096c2227c4230453f65f86ebdcd0d95-k8s-certs\") pod \"kube-apiserver-multinode-894400\" (UID: \"6096c2227c4230453f65f86ebdcd0d95\") " pod="kube-system/kube-apiserver-multinode-894400"
	I0318 13:10:57.519422    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: I0318 13:09:43.268423    1532 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6096c2227c4230453f65f86ebdcd0d95-usr-share-ca-certificates\") pod \"kube-apiserver-multinode-894400\" (UID: \"6096c2227c4230453f65f86ebdcd0d95\") " pod="kube-system/kube-apiserver-multinode-894400"
	I0318 13:10:57.519422    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: I0318 13:09:43.268445    1532 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d340aced56ba169ecac1e3ac58ad57fe-flexvolume-dir\") pod \"kube-controller-manager-multinode-894400\" (UID: \"d340aced56ba169ecac1e3ac58ad57fe\") " pod="kube-system/kube-controller-manager-multinode-894400"
	I0318 13:10:57.519422    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: I0318 13:09:43.268537    1532 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d340aced56ba169ecac1e3ac58ad57fe-k8s-certs\") pod \"kube-controller-manager-multinode-894400\" (UID: \"d340aced56ba169ecac1e3ac58ad57fe\") " pod="kube-system/kube-controller-manager-multinode-894400"
	I0318 13:10:57.519422    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: E0318 13:09:43.513563    1532 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-894400?timeout=10s\": dial tcp 172.30.130.156:8443: connect: connection refused" interval="800ms"
	I0318 13:10:57.520418    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: I0318 13:09:43.656950    1532 kubelet_node_status.go:70] "Attempting to register node" node="multinode-894400"
	I0318 13:10:57.520418    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: E0318 13:09:43.658595    1532 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.30.130.156:8443: connect: connection refused" node="multinode-894400"
	I0318 13:10:57.520418    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: W0318 13:09:43.917173    1532 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.30.130.156:8443: connect: connection refused
	I0318 13:10:57.520418    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: E0318 13:09:43.917511    1532 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.30.130.156:8443: connect: connection refused
	I0318 13:10:57.520418    2404 command_runner.go:130] > Mar 18 13:09:44 multinode-894400 kubelet[1532]: W0318 13:09:44.022640    1532 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.30.130.156:8443: connect: connection refused
	I0318 13:10:57.520418    2404 command_runner.go:130] > Mar 18 13:09:44 multinode-894400 kubelet[1532]: E0318 13:09:44.022973    1532 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.30.130.156:8443: connect: connection refused
	I0318 13:10:57.520418    2404 command_runner.go:130] > Mar 18 13:09:44 multinode-894400 kubelet[1532]: W0318 13:09:44.114653    1532 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-894400&limit=500&resourceVersion=0": dial tcp 172.30.130.156:8443: connect: connection refused
	I0318 13:10:57.520418    2404 command_runner.go:130] > Mar 18 13:09:44 multinode-894400 kubelet[1532]: E0318 13:09:44.114784    1532 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-894400&limit=500&resourceVersion=0": dial tcp 172.30.130.156:8443: connect: connection refused
	I0318 13:10:57.520418    2404 command_runner.go:130] > Mar 18 13:09:44 multinode-894400 kubelet[1532]: I0318 13:09:44.229821    1532 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="354f3c44a34fcbd9ca7826066f2b4c40b3a6551aa632389815c5d24e85e0472b"
	I0318 13:10:57.520418    2404 command_runner.go:130] > Mar 18 13:09:44 multinode-894400 kubelet[1532]: E0318 13:09:44.315351    1532 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-894400?timeout=10s\": dial tcp 172.30.130.156:8443: connect: connection refused" interval="1.6s"
	I0318 13:10:57.520418    2404 command_runner.go:130] > Mar 18 13:09:44 multinode-894400 kubelet[1532]: W0318 13:09:44.368370    1532 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.30.130.156:8443: connect: connection refused
	I0318 13:10:57.520418    2404 command_runner.go:130] > Mar 18 13:09:44 multinode-894400 kubelet[1532]: E0318 13:09:44.368575    1532 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.30.130.156:8443: connect: connection refused
	I0318 13:10:57.520418    2404 command_runner.go:130] > Mar 18 13:09:44 multinode-894400 kubelet[1532]: I0318 13:09:44.495686    1532 kubelet_node_status.go:70] "Attempting to register node" node="multinode-894400"
	I0318 13:10:57.520418    2404 command_runner.go:130] > Mar 18 13:09:44 multinode-894400 kubelet[1532]: E0318 13:09:44.496847    1532 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.30.130.156:8443: connect: connection refused" node="multinode-894400"
	I0318 13:10:57.520418    2404 command_runner.go:130] > Mar 18 13:09:46 multinode-894400 kubelet[1532]: I0318 13:09:46.112867    1532 kubelet_node_status.go:70] "Attempting to register node" node="multinode-894400"
	I0318 13:10:57.520418    2404 command_runner.go:130] > Mar 18 13:09:48 multinode-894400 kubelet[1532]: I0318 13:09:48.454296    1532 kubelet_node_status.go:108] "Node was previously registered" node="multinode-894400"
	I0318 13:10:57.520418    2404 command_runner.go:130] > Mar 18 13:09:48 multinode-894400 kubelet[1532]: I0318 13:09:48.454504    1532 kubelet_node_status.go:73] "Successfully registered node" node="multinode-894400"
	I0318 13:10:57.520418    2404 command_runner.go:130] > Mar 18 13:09:48 multinode-894400 kubelet[1532]: I0318 13:09:48.466215    1532 kuberuntime_manager.go:1528] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	I0318 13:10:57.520418    2404 command_runner.go:130] > Mar 18 13:09:48 multinode-894400 kubelet[1532]: I0318 13:09:48.467399    1532 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	I0318 13:10:57.520418    2404 command_runner.go:130] > Mar 18 13:09:48 multinode-894400 kubelet[1532]: I0318 13:09:48.481710    1532 setters.go:552] "Node became not ready" node="multinode-894400" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-03-18T13:09:48Z","lastTransitionTime":"2024-03-18T13:09:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"}
	I0318 13:10:57.520418    2404 command_runner.go:130] > Mar 18 13:09:48 multinode-894400 kubelet[1532]: I0318 13:09:48.865400    1532 apiserver.go:52] "Watching apiserver"
	I0318 13:10:57.520418    2404 command_runner.go:130] > Mar 18 13:09:48 multinode-894400 kubelet[1532]: I0318 13:09:48.872433    1532 topology_manager.go:215] "Topology Admit Handler" podUID="0afe25f8-cbd6-412b-8698-7b547d1d49ca" podNamespace="kube-system" podName="kube-proxy-mc5tv"
	I0318 13:10:57.520418    2404 command_runner.go:130] > Mar 18 13:09:48 multinode-894400 kubelet[1532]: I0318 13:09:48.872584    1532 topology_manager.go:215] "Topology Admit Handler" podUID="0161d239-2d85-4246-b2fa-6c7374f2ecd6" podNamespace="kube-system" podName="kindnet-hhsxh"
	I0318 13:10:57.520418    2404 command_runner.go:130] > Mar 18 13:09:48 multinode-894400 kubelet[1532]: I0318 13:09:48.872794    1532 topology_manager.go:215] "Topology Admit Handler" podUID="1a018c55-846b-4dc2-992c-dc8fd82a6c67" podNamespace="kube-system" podName="coredns-5dd5756b68-456tm"
	I0318 13:10:57.520418    2404 command_runner.go:130] > Mar 18 13:09:48 multinode-894400 kubelet[1532]: I0318 13:09:48.872862    1532 topology_manager.go:215] "Topology Admit Handler" podUID="219bafbc-d807-44cf-9927-e4957f36ad70" podNamespace="kube-system" podName="storage-provisioner"
	I0318 13:10:57.520418    2404 command_runner.go:130] > Mar 18 13:09:48 multinode-894400 kubelet[1532]: I0318 13:09:48.872944    1532 topology_manager.go:215] "Topology Admit Handler" podUID="171cbfa4-4415-4169-b25d-ff5905fd513f" podNamespace="default" podName="busybox-5b5d89c9d6-c2997"
	I0318 13:10:57.520418    2404 command_runner.go:130] > Mar 18 13:09:48 multinode-894400 kubelet[1532]: E0318 13:09:48.873248    1532 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-c2997" podUID="171cbfa4-4415-4169-b25d-ff5905fd513f"
	I0318 13:10:57.520418    2404 command_runner.go:130] > Mar 18 13:09:48 multinode-894400 kubelet[1532]: I0318 13:09:48.873593    1532 kubelet.go:1872] "Trying to delete pod" pod="kube-system/kube-apiserver-multinode-894400" podUID="62aca0ea-36b0-4841-9616-61448f45e04a"
	I0318 13:10:57.520418    2404 command_runner.go:130] > Mar 18 13:09:48 multinode-894400 kubelet[1532]: I0318 13:09:48.873861    1532 kubelet.go:1872] "Trying to delete pod" pod="kube-system/etcd-multinode-894400" podUID="672a85d9-7526-4870-a33a-eac509ef3c3f"
	I0318 13:10:57.520418    2404 command_runner.go:130] > Mar 18 13:09:48 multinode-894400 kubelet[1532]: E0318 13:09:48.876751    1532 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-456tm" podUID="1a018c55-846b-4dc2-992c-dc8fd82a6c67"
	I0318 13:10:57.521430    2404 command_runner.go:130] > Mar 18 13:09:48 multinode-894400 kubelet[1532]: I0318 13:09:48.889248    1532 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
	I0318 13:10:57.521430    2404 command_runner.go:130] > Mar 18 13:09:48 multinode-894400 kubelet[1532]: I0318 13:09:48.964782    1532 kubelet.go:1877] "Deleted mirror pod because it is outdated" pod="kube-system/kube-apiserver-multinode-894400"
	I0318 13:10:57.521430    2404 command_runner.go:130] > Mar 18 13:09:48 multinode-894400 kubelet[1532]: I0318 13:09:48.965861    1532 kubelet.go:1877] "Deleted mirror pod because it is outdated" pod="kube-system/etcd-multinode-894400"
	I0318 13:10:57.521430    2404 command_runner.go:130] > Mar 18 13:09:48 multinode-894400 kubelet[1532]: I0318 13:09:48.966709    1532 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0afe25f8-cbd6-412b-8698-7b547d1d49ca-lib-modules\") pod \"kube-proxy-mc5tv\" (UID: \"0afe25f8-cbd6-412b-8698-7b547d1d49ca\") " pod="kube-system/kube-proxy-mc5tv"
	I0318 13:10:57.521430    2404 command_runner.go:130] > Mar 18 13:09:48 multinode-894400 kubelet[1532]: I0318 13:09:48.966761    1532 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/219bafbc-d807-44cf-9927-e4957f36ad70-tmp\") pod \"storage-provisioner\" (UID: \"219bafbc-d807-44cf-9927-e4957f36ad70\") " pod="kube-system/storage-provisioner"
	I0318 13:10:57.521430    2404 command_runner.go:130] > Mar 18 13:09:48 multinode-894400 kubelet[1532]: I0318 13:09:48.966802    1532 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/0161d239-2d85-4246-b2fa-6c7374f2ecd6-cni-cfg\") pod \"kindnet-hhsxh\" (UID: \"0161d239-2d85-4246-b2fa-6c7374f2ecd6\") " pod="kube-system/kindnet-hhsxh"
	I0318 13:10:57.521430    2404 command_runner.go:130] > Mar 18 13:09:48 multinode-894400 kubelet[1532]: I0318 13:09:48.966847    1532 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0afe25f8-cbd6-412b-8698-7b547d1d49ca-xtables-lock\") pod \"kube-proxy-mc5tv\" (UID: \"0afe25f8-cbd6-412b-8698-7b547d1d49ca\") " pod="kube-system/kube-proxy-mc5tv"
	I0318 13:10:57.521430    2404 command_runner.go:130] > Mar 18 13:09:48 multinode-894400 kubelet[1532]: I0318 13:09:48.966908    1532 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0161d239-2d85-4246-b2fa-6c7374f2ecd6-xtables-lock\") pod \"kindnet-hhsxh\" (UID: \"0161d239-2d85-4246-b2fa-6c7374f2ecd6\") " pod="kube-system/kindnet-hhsxh"
	I0318 13:10:57.521430    2404 command_runner.go:130] > Mar 18 13:09:48 multinode-894400 kubelet[1532]: I0318 13:09:48.966943    1532 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0161d239-2d85-4246-b2fa-6c7374f2ecd6-lib-modules\") pod \"kindnet-hhsxh\" (UID: \"0161d239-2d85-4246-b2fa-6c7374f2ecd6\") " pod="kube-system/kindnet-hhsxh"
	I0318 13:10:57.521430    2404 command_runner.go:130] > Mar 18 13:09:48 multinode-894400 kubelet[1532]: E0318 13:09:48.968339    1532 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0318 13:10:57.521430    2404 command_runner.go:130] > Mar 18 13:09:48 multinode-894400 kubelet[1532]: E0318 13:09:48.968477    1532 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1a018c55-846b-4dc2-992c-dc8fd82a6c67-config-volume podName:1a018c55-846b-4dc2-992c-dc8fd82a6c67 nodeName:}" failed. No retries permitted until 2024-03-18 13:09:49.468437755 +0000 UTC m=+6.779274091 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/1a018c55-846b-4dc2-992c-dc8fd82a6c67-config-volume") pod "coredns-5dd5756b68-456tm" (UID: "1a018c55-846b-4dc2-992c-dc8fd82a6c67") : object "kube-system"/"coredns" not registered
	I0318 13:10:57.521430    2404 command_runner.go:130] > Mar 18 13:09:49 multinode-894400 kubelet[1532]: E0318 13:09:49.000742    1532 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0318 13:10:57.521430    2404 command_runner.go:130] > Mar 18 13:09:49 multinode-894400 kubelet[1532]: E0318 13:09:49.000961    1532 projected.go:198] Error preparing data for projected volume kube-api-access-jwqqg for pod default/busybox-5b5d89c9d6-c2997: object "default"/"kube-root-ca.crt" not registered
	I0318 13:10:57.521430    2404 command_runner.go:130] > Mar 18 13:09:49 multinode-894400 kubelet[1532]: E0318 13:09:49.001575    1532 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/171cbfa4-4415-4169-b25d-ff5905fd513f-kube-api-access-jwqqg podName:171cbfa4-4415-4169-b25d-ff5905fd513f nodeName:}" failed. No retries permitted until 2024-03-18 13:09:49.501554367 +0000 UTC m=+6.812390603 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-jwqqg" (UniqueName: "kubernetes.io/projected/171cbfa4-4415-4169-b25d-ff5905fd513f-kube-api-access-jwqqg") pod "busybox-5b5d89c9d6-c2997" (UID: "171cbfa4-4415-4169-b25d-ff5905fd513f") : object "default"/"kube-root-ca.crt" not registered
	I0318 13:10:57.521430    2404 command_runner.go:130] > Mar 18 13:09:49 multinode-894400 kubelet[1532]: I0318 13:09:49.048369    1532 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="c396fd459c503d2e9464c73cc841d3d8" path="/var/lib/kubelet/pods/c396fd459c503d2e9464c73cc841d3d8/volumes"
	I0318 13:10:57.521430    2404 command_runner.go:130] > Mar 18 13:09:49 multinode-894400 kubelet[1532]: I0318 13:09:49.051334    1532 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="decc1d942b4d81359bb79c0349ffe9bb" path="/var/lib/kubelet/pods/decc1d942b4d81359bb79c0349ffe9bb/volumes"
	I0318 13:10:57.521430    2404 command_runner.go:130] > Mar 18 13:09:49 multinode-894400 kubelet[1532]: I0318 13:09:49.248524    1532 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-multinode-894400" podStartSLOduration=0.2483832 podCreationTimestamp="2024-03-18 13:09:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-03-18 13:09:49.21292898 +0000 UTC m=+6.523765316" watchObservedRunningTime="2024-03-18 13:09:49.2483832 +0000 UTC m=+6.559219436"
	I0318 13:10:57.521430    2404 command_runner.go:130] > Mar 18 13:09:49 multinode-894400 kubelet[1532]: I0318 13:09:49.285710    1532 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/etcd-multinode-894400" podStartSLOduration=0.285684326 podCreationTimestamp="2024-03-18 13:09:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-03-18 13:09:49.252285313 +0000 UTC m=+6.563121649" watchObservedRunningTime="2024-03-18 13:09:49.285684326 +0000 UTC m=+6.596520662"
	I0318 13:10:57.521430    2404 command_runner.go:130] > Mar 18 13:09:49 multinode-894400 kubelet[1532]: E0318 13:09:49.471617    1532 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0318 13:10:57.521430    2404 command_runner.go:130] > Mar 18 13:09:49 multinode-894400 kubelet[1532]: E0318 13:09:49.472236    1532 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1a018c55-846b-4dc2-992c-dc8fd82a6c67-config-volume podName:1a018c55-846b-4dc2-992c-dc8fd82a6c67 nodeName:}" failed. No retries permitted until 2024-03-18 13:09:50.471713653 +0000 UTC m=+7.782549889 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/1a018c55-846b-4dc2-992c-dc8fd82a6c67-config-volume") pod "coredns-5dd5756b68-456tm" (UID: "1a018c55-846b-4dc2-992c-dc8fd82a6c67") : object "kube-system"/"coredns" not registered
	I0318 13:10:57.521430    2404 command_runner.go:130] > Mar 18 13:09:49 multinode-894400 kubelet[1532]: E0318 13:09:49.573240    1532 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0318 13:10:57.521430    2404 command_runner.go:130] > Mar 18 13:09:49 multinode-894400 kubelet[1532]: E0318 13:09:49.573347    1532 projected.go:198] Error preparing data for projected volume kube-api-access-jwqqg for pod default/busybox-5b5d89c9d6-c2997: object "default"/"kube-root-ca.crt" not registered
	I0318 13:10:57.521430    2404 command_runner.go:130] > Mar 18 13:09:49 multinode-894400 kubelet[1532]: E0318 13:09:49.573459    1532 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/171cbfa4-4415-4169-b25d-ff5905fd513f-kube-api-access-jwqqg podName:171cbfa4-4415-4169-b25d-ff5905fd513f nodeName:}" failed. No retries permitted until 2024-03-18 13:09:50.573441997 +0000 UTC m=+7.884278233 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-jwqqg" (UniqueName: "kubernetes.io/projected/171cbfa4-4415-4169-b25d-ff5905fd513f-kube-api-access-jwqqg") pod "busybox-5b5d89c9d6-c2997" (UID: "171cbfa4-4415-4169-b25d-ff5905fd513f") : object "default"/"kube-root-ca.crt" not registered
	I0318 13:10:57.522445    2404 command_runner.go:130] > Mar 18 13:09:49 multinode-894400 kubelet[1532]: I0318 13:09:49.813611    1532 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="41035eff3b7db97e56ba25be141bf644acea4ddcb9d92ba3b421e6fa855a09af"
	I0318 13:10:57.522445    2404 command_runner.go:130] > Mar 18 13:09:50 multinode-894400 kubelet[1532]: I0318 13:09:50.142572    1532 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="86d74dec812cfd4842de7473d45ff320bfb2283d09d2d05eee29e45cbde9f5e9"
	I0318 13:10:57.522445    2404 command_runner.go:130] > Mar 18 13:09:50 multinode-894400 kubelet[1532]: I0318 13:09:50.219092    1532 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a9f21749669fe37a2d5bee9e7f45bf84eca6ae55870de8ac18bc774db7f81643"
	I0318 13:10:57.522445    2404 command_runner.go:130] > Mar 18 13:09:50 multinode-894400 kubelet[1532]: E0318 13:09:50.481085    1532 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0318 13:10:57.522445    2404 command_runner.go:130] > Mar 18 13:09:50 multinode-894400 kubelet[1532]: E0318 13:09:50.481271    1532 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1a018c55-846b-4dc2-992c-dc8fd82a6c67-config-volume podName:1a018c55-846b-4dc2-992c-dc8fd82a6c67 nodeName:}" failed. No retries permitted until 2024-03-18 13:09:52.48125246 +0000 UTC m=+9.792088696 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/1a018c55-846b-4dc2-992c-dc8fd82a6c67-config-volume") pod "coredns-5dd5756b68-456tm" (UID: "1a018c55-846b-4dc2-992c-dc8fd82a6c67") : object "kube-system"/"coredns" not registered
	I0318 13:10:57.522445    2404 command_runner.go:130] > Mar 18 13:09:50 multinode-894400 kubelet[1532]: E0318 13:09:50.581790    1532 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0318 13:10:57.522445    2404 command_runner.go:130] > Mar 18 13:09:50 multinode-894400 kubelet[1532]: E0318 13:09:50.581835    1532 projected.go:198] Error preparing data for projected volume kube-api-access-jwqqg for pod default/busybox-5b5d89c9d6-c2997: object "default"/"kube-root-ca.crt" not registered
	I0318 13:10:57.522445    2404 command_runner.go:130] > Mar 18 13:09:50 multinode-894400 kubelet[1532]: E0318 13:09:50.581885    1532 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/171cbfa4-4415-4169-b25d-ff5905fd513f-kube-api-access-jwqqg podName:171cbfa4-4415-4169-b25d-ff5905fd513f nodeName:}" failed. No retries permitted until 2024-03-18 13:09:52.5818703 +0000 UTC m=+9.892706536 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-jwqqg" (UniqueName: "kubernetes.io/projected/171cbfa4-4415-4169-b25d-ff5905fd513f-kube-api-access-jwqqg") pod "busybox-5b5d89c9d6-c2997" (UID: "171cbfa4-4415-4169-b25d-ff5905fd513f") : object "default"/"kube-root-ca.crt" not registered
	I0318 13:10:57.522445    2404 command_runner.go:130] > Mar 18 13:09:51 multinode-894400 kubelet[1532]: E0318 13:09:51.011273    1532 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-456tm" podUID="1a018c55-846b-4dc2-992c-dc8fd82a6c67"
	I0318 13:10:57.522445    2404 command_runner.go:130] > Mar 18 13:09:51 multinode-894400 kubelet[1532]: E0318 13:09:51.012015    1532 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-c2997" podUID="171cbfa4-4415-4169-b25d-ff5905fd513f"
	I0318 13:10:57.522445    2404 command_runner.go:130] > Mar 18 13:09:52 multinode-894400 kubelet[1532]: E0318 13:09:52.499973    1532 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0318 13:10:57.522445    2404 command_runner.go:130] > Mar 18 13:09:52 multinode-894400 kubelet[1532]: E0318 13:09:52.500149    1532 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1a018c55-846b-4dc2-992c-dc8fd82a6c67-config-volume podName:1a018c55-846b-4dc2-992c-dc8fd82a6c67 nodeName:}" failed. No retries permitted until 2024-03-18 13:09:56.500131973 +0000 UTC m=+13.810968209 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/1a018c55-846b-4dc2-992c-dc8fd82a6c67-config-volume") pod "coredns-5dd5756b68-456tm" (UID: "1a018c55-846b-4dc2-992c-dc8fd82a6c67") : object "kube-system"/"coredns" not registered
	I0318 13:10:57.522445    2404 command_runner.go:130] > Mar 18 13:09:52 multinode-894400 kubelet[1532]: E0318 13:09:52.601982    1532 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0318 13:10:57.522445    2404 command_runner.go:130] > Mar 18 13:09:52 multinode-894400 kubelet[1532]: E0318 13:09:52.602006    1532 projected.go:198] Error preparing data for projected volume kube-api-access-jwqqg for pod default/busybox-5b5d89c9d6-c2997: object "default"/"kube-root-ca.crt" not registered
	I0318 13:10:57.522445    2404 command_runner.go:130] > Mar 18 13:09:52 multinode-894400 kubelet[1532]: E0318 13:09:52.602087    1532 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/171cbfa4-4415-4169-b25d-ff5905fd513f-kube-api-access-jwqqg podName:171cbfa4-4415-4169-b25d-ff5905fd513f nodeName:}" failed. No retries permitted until 2024-03-18 13:09:56.602073317 +0000 UTC m=+13.912909553 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-jwqqg" (UniqueName: "kubernetes.io/projected/171cbfa4-4415-4169-b25d-ff5905fd513f-kube-api-access-jwqqg") pod "busybox-5b5d89c9d6-c2997" (UID: "171cbfa4-4415-4169-b25d-ff5905fd513f") : object "default"/"kube-root-ca.crt" not registered
	I0318 13:10:57.522445    2404 command_runner.go:130] > Mar 18 13:09:53 multinode-894400 kubelet[1532]: E0318 13:09:53.009672    1532 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-456tm" podUID="1a018c55-846b-4dc2-992c-dc8fd82a6c67"
	I0318 13:10:57.522445    2404 command_runner.go:130] > Mar 18 13:09:53 multinode-894400 kubelet[1532]: E0318 13:09:53.010317    1532 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-c2997" podUID="171cbfa4-4415-4169-b25d-ff5905fd513f"
	I0318 13:10:57.522445    2404 command_runner.go:130] > Mar 18 13:09:55 multinode-894400 kubelet[1532]: E0318 13:09:55.010917    1532 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-456tm" podUID="1a018c55-846b-4dc2-992c-dc8fd82a6c67"
	I0318 13:10:57.522445    2404 command_runner.go:130] > Mar 18 13:09:55 multinode-894400 kubelet[1532]: E0318 13:09:55.011786    1532 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-c2997" podUID="171cbfa4-4415-4169-b25d-ff5905fd513f"
	I0318 13:10:57.522445    2404 command_runner.go:130] > Mar 18 13:09:56 multinode-894400 kubelet[1532]: E0318 13:09:56.539408    1532 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0318 13:10:57.522445    2404 command_runner.go:130] > Mar 18 13:09:56 multinode-894400 kubelet[1532]: E0318 13:09:56.539534    1532 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1a018c55-846b-4dc2-992c-dc8fd82a6c67-config-volume podName:1a018c55-846b-4dc2-992c-dc8fd82a6c67 nodeName:}" failed. No retries permitted until 2024-03-18 13:10:04.539515204 +0000 UTC m=+21.850351440 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/1a018c55-846b-4dc2-992c-dc8fd82a6c67-config-volume") pod "coredns-5dd5756b68-456tm" (UID: "1a018c55-846b-4dc2-992c-dc8fd82a6c67") : object "kube-system"/"coredns" not registered
	I0318 13:10:57.522445    2404 command_runner.go:130] > Mar 18 13:09:56 multinode-894400 kubelet[1532]: E0318 13:09:56.639919    1532 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0318 13:10:57.522445    2404 command_runner.go:130] > Mar 18 13:09:56 multinode-894400 kubelet[1532]: E0318 13:09:56.639948    1532 projected.go:198] Error preparing data for projected volume kube-api-access-jwqqg for pod default/busybox-5b5d89c9d6-c2997: object "default"/"kube-root-ca.crt" not registered
	I0318 13:10:57.523419    2404 command_runner.go:130] > Mar 18 13:09:56 multinode-894400 kubelet[1532]: E0318 13:09:56.639998    1532 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/171cbfa4-4415-4169-b25d-ff5905fd513f-kube-api-access-jwqqg podName:171cbfa4-4415-4169-b25d-ff5905fd513f nodeName:}" failed. No retries permitted until 2024-03-18 13:10:04.639981843 +0000 UTC m=+21.950818079 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-jwqqg" (UniqueName: "kubernetes.io/projected/171cbfa4-4415-4169-b25d-ff5905fd513f-kube-api-access-jwqqg") pod "busybox-5b5d89c9d6-c2997" (UID: "171cbfa4-4415-4169-b25d-ff5905fd513f") : object "default"/"kube-root-ca.crt" not registered
	I0318 13:10:57.523419    2404 command_runner.go:130] > Mar 18 13:09:57 multinode-894400 kubelet[1532]: E0318 13:09:57.009521    1532 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-456tm" podUID="1a018c55-846b-4dc2-992c-dc8fd82a6c67"
	I0318 13:10:57.523419    2404 command_runner.go:130] > Mar 18 13:09:57 multinode-894400 kubelet[1532]: E0318 13:09:57.010257    1532 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-c2997" podUID="171cbfa4-4415-4169-b25d-ff5905fd513f"
	I0318 13:10:57.523419    2404 command_runner.go:130] > Mar 18 13:09:59 multinode-894400 kubelet[1532]: E0318 13:09:59.011021    1532 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-456tm" podUID="1a018c55-846b-4dc2-992c-dc8fd82a6c67"
	I0318 13:10:57.523419    2404 command_runner.go:130] > Mar 18 13:09:59 multinode-894400 kubelet[1532]: E0318 13:09:59.011361    1532 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-c2997" podUID="171cbfa4-4415-4169-b25d-ff5905fd513f"
	I0318 13:10:57.523419    2404 command_runner.go:130] > Mar 18 13:10:01 multinode-894400 kubelet[1532]: E0318 13:10:01.009167    1532 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-456tm" podUID="1a018c55-846b-4dc2-992c-dc8fd82a6c67"
	I0318 13:10:57.523419    2404 command_runner.go:130] > Mar 18 13:10:01 multinode-894400 kubelet[1532]: E0318 13:10:01.009678    1532 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-c2997" podUID="171cbfa4-4415-4169-b25d-ff5905fd513f"
	I0318 13:10:57.523419    2404 command_runner.go:130] > Mar 18 13:10:03 multinode-894400 kubelet[1532]: E0318 13:10:03.010168    1532 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-456tm" podUID="1a018c55-846b-4dc2-992c-dc8fd82a6c67"
	I0318 13:10:57.523419    2404 command_runner.go:130] > Mar 18 13:10:03 multinode-894400 kubelet[1532]: E0318 13:10:03.011736    1532 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-c2997" podUID="171cbfa4-4415-4169-b25d-ff5905fd513f"
	I0318 13:10:57.523419    2404 command_runner.go:130] > Mar 18 13:10:04 multinode-894400 kubelet[1532]: E0318 13:10:04.603257    1532 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0318 13:10:57.523419    2404 command_runner.go:130] > Mar 18 13:10:04 multinode-894400 kubelet[1532]: E0318 13:10:04.603387    1532 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1a018c55-846b-4dc2-992c-dc8fd82a6c67-config-volume podName:1a018c55-846b-4dc2-992c-dc8fd82a6c67 nodeName:}" failed. No retries permitted until 2024-03-18 13:10:20.60337037 +0000 UTC m=+37.914206606 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/1a018c55-846b-4dc2-992c-dc8fd82a6c67-config-volume") pod "coredns-5dd5756b68-456tm" (UID: "1a018c55-846b-4dc2-992c-dc8fd82a6c67") : object "kube-system"/"coredns" not registered
	I0318 13:10:57.523419    2404 command_runner.go:130] > Mar 18 13:10:04 multinode-894400 kubelet[1532]: E0318 13:10:04.704132    1532 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0318 13:10:57.523419    2404 command_runner.go:130] > Mar 18 13:10:04 multinode-894400 kubelet[1532]: E0318 13:10:04.704169    1532 projected.go:198] Error preparing data for projected volume kube-api-access-jwqqg for pod default/busybox-5b5d89c9d6-c2997: object "default"/"kube-root-ca.crt" not registered
	I0318 13:10:57.523419    2404 command_runner.go:130] > Mar 18 13:10:04 multinode-894400 kubelet[1532]: E0318 13:10:04.704219    1532 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/171cbfa4-4415-4169-b25d-ff5905fd513f-kube-api-access-jwqqg podName:171cbfa4-4415-4169-b25d-ff5905fd513f nodeName:}" failed. No retries permitted until 2024-03-18 13:10:20.704204798 +0000 UTC m=+38.015041034 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-jwqqg" (UniqueName: "kubernetes.io/projected/171cbfa4-4415-4169-b25d-ff5905fd513f-kube-api-access-jwqqg") pod "busybox-5b5d89c9d6-c2997" (UID: "171cbfa4-4415-4169-b25d-ff5905fd513f") : object "default"/"kube-root-ca.crt" not registered
	I0318 13:10:57.523419    2404 command_runner.go:130] > Mar 18 13:10:05 multinode-894400 kubelet[1532]: E0318 13:10:05.009461    1532 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-456tm" podUID="1a018c55-846b-4dc2-992c-dc8fd82a6c67"
	I0318 13:10:57.523419    2404 command_runner.go:130] > Mar 18 13:10:05 multinode-894400 kubelet[1532]: E0318 13:10:05.010204    1532 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-c2997" podUID="171cbfa4-4415-4169-b25d-ff5905fd513f"
	I0318 13:10:57.523419    2404 command_runner.go:130] > Mar 18 13:10:07 multinode-894400 kubelet[1532]: E0318 13:10:07.009925    1532 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-456tm" podUID="1a018c55-846b-4dc2-992c-dc8fd82a6c67"
	I0318 13:10:57.523419    2404 command_runner.go:130] > Mar 18 13:10:07 multinode-894400 kubelet[1532]: E0318 13:10:07.010942    1532 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-c2997" podUID="171cbfa4-4415-4169-b25d-ff5905fd513f"
	I0318 13:10:57.523419    2404 command_runner.go:130] > Mar 18 13:10:09 multinode-894400 kubelet[1532]: E0318 13:10:09.010506    1532 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-456tm" podUID="1a018c55-846b-4dc2-992c-dc8fd82a6c67"
	I0318 13:10:57.523419    2404 command_runner.go:130] > Mar 18 13:10:09 multinode-894400 kubelet[1532]: E0318 13:10:09.011883    1532 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-c2997" podUID="171cbfa4-4415-4169-b25d-ff5905fd513f"
	I0318 13:10:57.523419    2404 command_runner.go:130] > Mar 18 13:10:11 multinode-894400 kubelet[1532]: E0318 13:10:11.009145    1532 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-c2997" podUID="171cbfa4-4415-4169-b25d-ff5905fd513f"
	I0318 13:10:57.523419    2404 command_runner.go:130] > Mar 18 13:10:11 multinode-894400 kubelet[1532]: E0318 13:10:11.011730    1532 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-456tm" podUID="1a018c55-846b-4dc2-992c-dc8fd82a6c67"
	I0318 13:10:57.523419    2404 command_runner.go:130] > Mar 18 13:10:13 multinode-894400 kubelet[1532]: E0318 13:10:13.010103    1532 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-456tm" podUID="1a018c55-846b-4dc2-992c-dc8fd82a6c67"
	I0318 13:10:57.524415    2404 command_runner.go:130] > Mar 18 13:10:13 multinode-894400 kubelet[1532]: E0318 13:10:13.010921    1532 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-c2997" podUID="171cbfa4-4415-4169-b25d-ff5905fd513f"
	I0318 13:10:57.524415    2404 command_runner.go:130] > Mar 18 13:10:15 multinode-894400 kubelet[1532]: E0318 13:10:15.009361    1532 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-456tm" podUID="1a018c55-846b-4dc2-992c-dc8fd82a6c67"
	I0318 13:10:57.524415    2404 command_runner.go:130] > Mar 18 13:10:15 multinode-894400 kubelet[1532]: E0318 13:10:15.010565    1532 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-c2997" podUID="171cbfa4-4415-4169-b25d-ff5905fd513f"
	I0318 13:10:57.524415    2404 command_runner.go:130] > Mar 18 13:10:17 multinode-894400 kubelet[1532]: E0318 13:10:17.009688    1532 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-456tm" podUID="1a018c55-846b-4dc2-992c-dc8fd82a6c67"
	I0318 13:10:57.524415    2404 command_runner.go:130] > Mar 18 13:10:17 multinode-894400 kubelet[1532]: E0318 13:10:17.010200    1532 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-c2997" podUID="171cbfa4-4415-4169-b25d-ff5905fd513f"
	I0318 13:10:57.524415    2404 command_runner.go:130] > Mar 18 13:10:19 multinode-894400 kubelet[1532]: E0318 13:10:19.010187    1532 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-c2997" podUID="171cbfa4-4415-4169-b25d-ff5905fd513f"
	I0318 13:10:57.524415    2404 command_runner.go:130] > Mar 18 13:10:19 multinode-894400 kubelet[1532]: E0318 13:10:19.010730    1532 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-456tm" podUID="1a018c55-846b-4dc2-992c-dc8fd82a6c67"
	I0318 13:10:57.524415    2404 command_runner.go:130] > Mar 18 13:10:20 multinode-894400 kubelet[1532]: E0318 13:10:20.639546    1532 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0318 13:10:57.524415    2404 command_runner.go:130] > Mar 18 13:10:20 multinode-894400 kubelet[1532]: E0318 13:10:20.639747    1532 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1a018c55-846b-4dc2-992c-dc8fd82a6c67-config-volume podName:1a018c55-846b-4dc2-992c-dc8fd82a6c67 nodeName:}" failed. No retries permitted until 2024-03-18 13:10:52.639723825 +0000 UTC m=+69.950560161 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/1a018c55-846b-4dc2-992c-dc8fd82a6c67-config-volume") pod "coredns-5dd5756b68-456tm" (UID: "1a018c55-846b-4dc2-992c-dc8fd82a6c67") : object "kube-system"/"coredns" not registered
	I0318 13:10:57.524415    2404 command_runner.go:130] > Mar 18 13:10:20 multinode-894400 kubelet[1532]: E0318 13:10:20.740353    1532 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0318 13:10:57.524415    2404 command_runner.go:130] > Mar 18 13:10:20 multinode-894400 kubelet[1532]: E0318 13:10:20.740517    1532 projected.go:198] Error preparing data for projected volume kube-api-access-jwqqg for pod default/busybox-5b5d89c9d6-c2997: object "default"/"kube-root-ca.crt" not registered
	I0318 13:10:57.524415    2404 command_runner.go:130] > Mar 18 13:10:20 multinode-894400 kubelet[1532]: E0318 13:10:20.740585    1532 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/171cbfa4-4415-4169-b25d-ff5905fd513f-kube-api-access-jwqqg podName:171cbfa4-4415-4169-b25d-ff5905fd513f nodeName:}" failed. No retries permitted until 2024-03-18 13:10:52.740566824 +0000 UTC m=+70.051403160 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-jwqqg" (UniqueName: "kubernetes.io/projected/171cbfa4-4415-4169-b25d-ff5905fd513f-kube-api-access-jwqqg") pod "busybox-5b5d89c9d6-c2997" (UID: "171cbfa4-4415-4169-b25d-ff5905fd513f") : object "default"/"kube-root-ca.crt" not registered
	I0318 13:10:57.524415    2404 command_runner.go:130] > Mar 18 13:10:21 multinode-894400 kubelet[1532]: E0318 13:10:21.010015    1532 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-456tm" podUID="1a018c55-846b-4dc2-992c-dc8fd82a6c67"
	I0318 13:10:57.524415    2404 command_runner.go:130] > Mar 18 13:10:21 multinode-894400 kubelet[1532]: E0318 13:10:21.011108    1532 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-c2997" podUID="171cbfa4-4415-4169-b25d-ff5905fd513f"
	I0318 13:10:57.524415    2404 command_runner.go:130] > Mar 18 13:10:21 multinode-894400 kubelet[1532]: I0318 13:10:21.647969    1532 scope.go:117] "RemoveContainer" containerID="a2c499223090cc38a7b425469621fb6c8dbc443ab7eb0d5841f1fdcea2922366"
	I0318 13:10:57.524415    2404 command_runner.go:130] > Mar 18 13:10:21 multinode-894400 kubelet[1532]: I0318 13:10:21.651387    1532 scope.go:117] "RemoveContainer" containerID="46c0cf90d385f0a29de6e795bc03f2d1837ea1819ef2389992a26aaf77e63264"
	I0318 13:10:57.524415    2404 command_runner.go:130] > Mar 18 13:10:21 multinode-894400 kubelet[1532]: E0318 13:10:21.652104    1532 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(219bafbc-d807-44cf-9927-e4957f36ad70)\"" pod="kube-system/storage-provisioner" podUID="219bafbc-d807-44cf-9927-e4957f36ad70"
	I0318 13:10:57.524415    2404 command_runner.go:130] > Mar 18 13:10:23 multinode-894400 kubelet[1532]: E0318 13:10:23.010116    1532 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-456tm" podUID="1a018c55-846b-4dc2-992c-dc8fd82a6c67"
	I0318 13:10:57.524415    2404 command_runner.go:130] > Mar 18 13:10:23 multinode-894400 kubelet[1532]: E0318 13:10:23.010816    1532 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-c2997" podUID="171cbfa4-4415-4169-b25d-ff5905fd513f"
	I0318 13:10:57.524415    2404 command_runner.go:130] > Mar 18 13:10:23 multinode-894400 kubelet[1532]: I0318 13:10:23.777913    1532 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	I0318 13:10:57.524415    2404 command_runner.go:130] > Mar 18 13:10:35 multinode-894400 kubelet[1532]: I0318 13:10:35.009532    1532 scope.go:117] "RemoveContainer" containerID="46c0cf90d385f0a29de6e795bc03f2d1837ea1819ef2389992a26aaf77e63264"
	I0318 13:10:57.524415    2404 command_runner.go:130] > Mar 18 13:10:43 multinode-894400 kubelet[1532]: I0318 13:10:43.012571    1532 scope.go:117] "RemoveContainer" containerID="56d1819beb10ed198593d8a369f601faf82bf81ff1aecdbffe7114cd1265351b"
	I0318 13:10:57.524415    2404 command_runner.go:130] > Mar 18 13:10:43 multinode-894400 kubelet[1532]: E0318 13:10:43.030354    1532 iptables.go:575] "Could not set up iptables canary" err=<
	I0318 13:10:57.524415    2404 command_runner.go:130] > Mar 18 13:10:43 multinode-894400 kubelet[1532]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	I0318 13:10:57.524415    2404 command_runner.go:130] > Mar 18 13:10:43 multinode-894400 kubelet[1532]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	I0318 13:10:57.524415    2404 command_runner.go:130] > Mar 18 13:10:43 multinode-894400 kubelet[1532]:         Perhaps ip6tables or your kernel needs to be upgraded.
	I0318 13:10:57.524415    2404 command_runner.go:130] > Mar 18 13:10:43 multinode-894400 kubelet[1532]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	I0318 13:10:57.524415    2404 command_runner.go:130] > Mar 18 13:10:43 multinode-894400 kubelet[1532]: I0318 13:10:43.056417    1532 scope.go:117] "RemoveContainer" containerID="c51f768a2f642fdffc6de67f101be5abd8bbaec83ef13011b47efab5aad27134"
	I0318 13:10:57.561419    2404 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:10:57.561419    2404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 13:10:57.762906    2404 command_runner.go:130] > Name:               multinode-894400
	I0318 13:10:57.762906    2404 command_runner.go:130] > Roles:              control-plane
	I0318 13:10:57.762906    2404 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0318 13:10:57.762906    2404 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0318 13:10:57.762906    2404 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0318 13:10:57.762906    2404 command_runner.go:130] >                     kubernetes.io/hostname=multinode-894400
	I0318 13:10:57.762906    2404 command_runner.go:130] >                     kubernetes.io/os=linux
	I0318 13:10:57.762906    2404 command_runner.go:130] >                     minikube.k8s.io/commit=16bdcbec856cf730004e5bed78d1b7625f13388a
	I0318 13:10:57.762906    2404 command_runner.go:130] >                     minikube.k8s.io/name=multinode-894400
	I0318 13:10:57.762906    2404 command_runner.go:130] >                     minikube.k8s.io/primary=true
	I0318 13:10:57.762906    2404 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_03_18T12_47_29_0700
	I0318 13:10:57.762906    2404 command_runner.go:130] >                     minikube.k8s.io/version=v1.32.0
	I0318 13:10:57.762906    2404 command_runner.go:130] >                     node-role.kubernetes.io/control-plane=
	I0318 13:10:57.762906    2404 command_runner.go:130] >                     node.kubernetes.io/exclude-from-external-load-balancers=
	I0318 13:10:57.762906    2404 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0318 13:10:57.762906    2404 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0318 13:10:57.762906    2404 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0318 13:10:57.762906    2404 command_runner.go:130] > CreationTimestamp:  Mon, 18 Mar 2024 12:47:24 +0000
	I0318 13:10:57.762906    2404 command_runner.go:130] > Taints:             <none>
	I0318 13:10:57.762906    2404 command_runner.go:130] > Unschedulable:      false
	I0318 13:10:57.762906    2404 command_runner.go:130] > Lease:
	I0318 13:10:57.762906    2404 command_runner.go:130] >   HolderIdentity:  multinode-894400
	I0318 13:10:57.762906    2404 command_runner.go:130] >   AcquireTime:     <unset>
	I0318 13:10:57.762906    2404 command_runner.go:130] >   RenewTime:       Mon, 18 Mar 2024 13:10:49 +0000
	I0318 13:10:57.762906    2404 command_runner.go:130] > Conditions:
	I0318 13:10:57.762906    2404 command_runner.go:130] >   Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	I0318 13:10:57.762906    2404 command_runner.go:130] >   ----             ------  -----------------                 ------------------                ------                       -------
	I0318 13:10:57.762906    2404 command_runner.go:130] >   MemoryPressure   False   Mon, 18 Mar 2024 13:10:23 +0000   Mon, 18 Mar 2024 12:47:23 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	I0318 13:10:57.762906    2404 command_runner.go:130] >   DiskPressure     False   Mon, 18 Mar 2024 13:10:23 +0000   Mon, 18 Mar 2024 12:47:23 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	I0318 13:10:57.762906    2404 command_runner.go:130] >   PIDPressure      False   Mon, 18 Mar 2024 13:10:23 +0000   Mon, 18 Mar 2024 12:47:23 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	I0318 13:10:57.762906    2404 command_runner.go:130] >   Ready            True    Mon, 18 Mar 2024 13:10:23 +0000   Mon, 18 Mar 2024 13:10:23 +0000   KubeletReady                 kubelet is posting ready status
	I0318 13:10:57.762906    2404 command_runner.go:130] > Addresses:
	I0318 13:10:57.762906    2404 command_runner.go:130] >   InternalIP:  172.30.130.156
	I0318 13:10:57.762906    2404 command_runner.go:130] >   Hostname:    multinode-894400
	I0318 13:10:57.762906    2404 command_runner.go:130] > Capacity:
	I0318 13:10:57.762906    2404 command_runner.go:130] >   cpu:                2
	I0318 13:10:57.762906    2404 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0318 13:10:57.762906    2404 command_runner.go:130] >   hugepages-2Mi:      0
	I0318 13:10:57.762906    2404 command_runner.go:130] >   memory:             2164268Ki
	I0318 13:10:57.762906    2404 command_runner.go:130] >   pods:               110
	I0318 13:10:57.762906    2404 command_runner.go:130] > Allocatable:
	I0318 13:10:57.762906    2404 command_runner.go:130] >   cpu:                2
	I0318 13:10:57.762906    2404 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0318 13:10:57.762906    2404 command_runner.go:130] >   hugepages-2Mi:      0
	I0318 13:10:57.762906    2404 command_runner.go:130] >   memory:             2164268Ki
	I0318 13:10:57.762906    2404 command_runner.go:130] >   pods:               110
	I0318 13:10:57.762906    2404 command_runner.go:130] > System Info:
	I0318 13:10:57.762906    2404 command_runner.go:130] >   Machine ID:                 80e7b822d2e94d26a09acd4a1bac452b
	I0318 13:10:57.762906    2404 command_runner.go:130] >   System UUID:                5c78c013-e4e8-1041-99c8-95cd760ef34f
	I0318 13:10:57.762906    2404 command_runner.go:130] >   Boot ID:                    a334ae39-1c10-417c-93ad-d28546d7793f
	I0318 13:10:57.763924    2404 command_runner.go:130] >   Kernel Version:             5.10.207
	I0318 13:10:57.763924    2404 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0318 13:10:57.763924    2404 command_runner.go:130] >   Operating System:           linux
	I0318 13:10:57.763924    2404 command_runner.go:130] >   Architecture:               amd64
	I0318 13:10:57.763999    2404 command_runner.go:130] >   Container Runtime Version:  docker://25.0.4
	I0318 13:10:57.763999    2404 command_runner.go:130] >   Kubelet Version:            v1.28.4
	I0318 13:10:57.763999    2404 command_runner.go:130] >   Kube-Proxy Version:         v1.28.4
	I0318 13:10:57.763999    2404 command_runner.go:130] > PodCIDR:                      10.244.0.0/24
	I0318 13:10:57.763999    2404 command_runner.go:130] > PodCIDRs:                     10.244.0.0/24
	I0318 13:10:57.763999    2404 command_runner.go:130] > Non-terminated Pods:          (9 in total)
	I0318 13:10:57.763999    2404 command_runner.go:130] >   Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0318 13:10:57.763999    2404 command_runner.go:130] >   ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	I0318 13:10:57.764087    2404 command_runner.go:130] >   default                     busybox-5b5d89c9d6-c2997                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	I0318 13:10:57.764087    2404 command_runner.go:130] >   kube-system                 coredns-5dd5756b68-456tm                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     23m
	I0318 13:10:57.764087    2404 command_runner.go:130] >   kube-system                 etcd-multinode-894400                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         68s
	I0318 13:10:57.764144    2404 command_runner.go:130] >   kube-system                 kindnet-hhsxh                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      23m
	I0318 13:10:57.764144    2404 command_runner.go:130] >   kube-system                 kube-apiserver-multinode-894400             250m (12%)    0 (0%)      0 (0%)           0 (0%)         68s
	I0318 13:10:57.764196    2404 command_runner.go:130] >   kube-system                 kube-controller-manager-multinode-894400    200m (10%)    0 (0%)      0 (0%)           0 (0%)         23m
	I0318 13:10:57.764196    2404 command_runner.go:130] >   kube-system                 kube-proxy-mc5tv                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	I0318 13:10:57.764196    2404 command_runner.go:130] >   kube-system                 kube-scheduler-multinode-894400             100m (5%)     0 (0%)      0 (0%)           0 (0%)         23m
	I0318 13:10:57.764196    2404 command_runner.go:130] >   kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	I0318 13:10:57.764248    2404 command_runner.go:130] > Allocated resources:
	I0318 13:10:57.764248    2404 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0318 13:10:57.764248    2404 command_runner.go:130] >   Resource           Requests     Limits
	I0318 13:10:57.764248    2404 command_runner.go:130] >   --------           --------     ------
	I0318 13:10:57.764248    2404 command_runner.go:130] >   cpu                850m (42%)   100m (5%)
	I0318 13:10:57.764301    2404 command_runner.go:130] >   memory             220Mi (10%)  220Mi (10%)
	I0318 13:10:57.764301    2404 command_runner.go:130] >   ephemeral-storage  0 (0%)       0 (0%)
	I0318 13:10:57.764301    2404 command_runner.go:130] >   hugepages-2Mi      0 (0%)       0 (0%)
	I0318 13:10:57.764301    2404 command_runner.go:130] > Events:
	I0318 13:10:57.764301    2404 command_runner.go:130] >   Type    Reason                   Age                From             Message
	I0318 13:10:57.764354    2404 command_runner.go:130] >   ----    ------                   ----               ----             -------
	I0318 13:10:57.764354    2404 command_runner.go:130] >   Normal  Starting                 23m                kube-proxy       
	I0318 13:10:57.764402    2404 command_runner.go:130] >   Normal  Starting                 66s                kube-proxy       
	I0318 13:10:57.764402    2404 command_runner.go:130] >   Normal  NodeHasSufficientMemory  23m (x8 over 23m)  kubelet          Node multinode-894400 status is now: NodeHasSufficientMemory
	I0318 13:10:57.764402    2404 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    23m (x8 over 23m)  kubelet          Node multinode-894400 status is now: NodeHasNoDiskPressure
	I0318 13:10:57.764453    2404 command_runner.go:130] >   Normal  NodeHasSufficientPID     23m (x7 over 23m)  kubelet          Node multinode-894400 status is now: NodeHasSufficientPID
	I0318 13:10:57.764453    2404 command_runner.go:130] >   Normal  NodeAllocatableEnforced  23m                kubelet          Updated Node Allocatable limit across pods
	I0318 13:10:57.764453    2404 command_runner.go:130] >   Normal  NodeHasSufficientMemory  23m                kubelet          Node multinode-894400 status is now: NodeHasSufficientMemory
	I0318 13:10:57.764453    2404 command_runner.go:130] >   Normal  NodeAllocatableEnforced  23m                kubelet          Updated Node Allocatable limit across pods
	I0318 13:10:57.764520    2404 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    23m                kubelet          Node multinode-894400 status is now: NodeHasNoDiskPressure
	I0318 13:10:57.764520    2404 command_runner.go:130] >   Normal  NodeHasSufficientPID     23m                kubelet          Node multinode-894400 status is now: NodeHasSufficientPID
	I0318 13:10:57.764567    2404 command_runner.go:130] >   Normal  Starting                 23m                kubelet          Starting kubelet.
	I0318 13:10:57.764567    2404 command_runner.go:130] >   Normal  RegisteredNode           23m                node-controller  Node multinode-894400 event: Registered Node multinode-894400 in Controller
	I0318 13:10:57.764609    2404 command_runner.go:130] >   Normal  NodeReady                23m                kubelet          Node multinode-894400 status is now: NodeReady
	I0318 13:10:57.764609    2404 command_runner.go:130] >   Normal  Starting                 75s                kubelet          Starting kubelet.
	I0318 13:10:57.764609    2404 command_runner.go:130] >   Normal  NodeHasSufficientMemory  74s (x8 over 75s)  kubelet          Node multinode-894400 status is now: NodeHasSufficientMemory
	I0318 13:10:57.764657    2404 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    74s (x8 over 75s)  kubelet          Node multinode-894400 status is now: NodeHasNoDiskPressure
	I0318 13:10:57.764657    2404 command_runner.go:130] >   Normal  NodeHasSufficientPID     74s (x7 over 75s)  kubelet          Node multinode-894400 status is now: NodeHasSufficientPID
	I0318 13:10:57.764657    2404 command_runner.go:130] >   Normal  NodeAllocatableEnforced  74s                kubelet          Updated Node Allocatable limit across pods
	I0318 13:10:57.764704    2404 command_runner.go:130] >   Normal  RegisteredNode           56s                node-controller  Node multinode-894400 event: Registered Node multinode-894400 in Controller
	I0318 13:10:57.764704    2404 command_runner.go:130] > Name:               multinode-894400-m02
	I0318 13:10:57.764704    2404 command_runner.go:130] > Roles:              <none>
	I0318 13:10:57.764704    2404 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0318 13:10:57.764704    2404 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0318 13:10:57.764704    2404 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0318 13:10:57.764704    2404 command_runner.go:130] >                     kubernetes.io/hostname=multinode-894400-m02
	I0318 13:10:57.764704    2404 command_runner.go:130] >                     kubernetes.io/os=linux
	I0318 13:10:57.764704    2404 command_runner.go:130] >                     minikube.k8s.io/commit=16bdcbec856cf730004e5bed78d1b7625f13388a
	I0318 13:10:57.764704    2404 command_runner.go:130] >                     minikube.k8s.io/name=multinode-894400
	I0318 13:10:57.764704    2404 command_runner.go:130] >                     minikube.k8s.io/primary=false
	I0318 13:10:57.764704    2404 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_03_18T12_50_35_0700
	I0318 13:10:57.764704    2404 command_runner.go:130] >                     minikube.k8s.io/version=v1.32.0
	I0318 13:10:57.764704    2404 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0318 13:10:57.764704    2404 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0318 13:10:57.764704    2404 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0318 13:10:57.764704    2404 command_runner.go:130] > CreationTimestamp:  Mon, 18 Mar 2024 12:50:34 +0000
	I0318 13:10:57.764704    2404 command_runner.go:130] > Taints:             node.kubernetes.io/unreachable:NoExecute
	I0318 13:10:57.764704    2404 command_runner.go:130] >                     node.kubernetes.io/unreachable:NoSchedule
	I0318 13:10:57.764704    2404 command_runner.go:130] > Unschedulable:      false
	I0318 13:10:57.764704    2404 command_runner.go:130] > Lease:
	I0318 13:10:57.764704    2404 command_runner.go:130] >   HolderIdentity:  multinode-894400-m02
	I0318 13:10:57.764704    2404 command_runner.go:130] >   AcquireTime:     <unset>
	I0318 13:10:57.764704    2404 command_runner.go:130] >   RenewTime:       Mon, 18 Mar 2024 13:06:44 +0000
	I0318 13:10:57.764704    2404 command_runner.go:130] > Conditions:
	I0318 13:10:57.764704    2404 command_runner.go:130] >   Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	I0318 13:10:57.764704    2404 command_runner.go:130] >   ----             ------    -----------------                 ------------------                ------              -------
	I0318 13:10:57.764704    2404 command_runner.go:130] >   MemoryPressure   Unknown   Mon, 18 Mar 2024 13:01:49 +0000   Mon, 18 Mar 2024 13:10:41 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0318 13:10:57.764704    2404 command_runner.go:130] >   DiskPressure     Unknown   Mon, 18 Mar 2024 13:01:49 +0000   Mon, 18 Mar 2024 13:10:41 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0318 13:10:57.764704    2404 command_runner.go:130] >   PIDPressure      Unknown   Mon, 18 Mar 2024 13:01:49 +0000   Mon, 18 Mar 2024 13:10:41 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0318 13:10:57.764704    2404 command_runner.go:130] >   Ready            Unknown   Mon, 18 Mar 2024 13:01:49 +0000   Mon, 18 Mar 2024 13:10:41 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0318 13:10:57.764704    2404 command_runner.go:130] > Addresses:
	I0318 13:10:57.764704    2404 command_runner.go:130] >   InternalIP:  172.30.140.66
	I0318 13:10:57.764704    2404 command_runner.go:130] >   Hostname:    multinode-894400-m02
	I0318 13:10:57.764704    2404 command_runner.go:130] > Capacity:
	I0318 13:10:57.764704    2404 command_runner.go:130] >   cpu:                2
	I0318 13:10:57.764704    2404 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0318 13:10:57.764704    2404 command_runner.go:130] >   hugepages-2Mi:      0
	I0318 13:10:57.764704    2404 command_runner.go:130] >   memory:             2164268Ki
	I0318 13:10:57.764704    2404 command_runner.go:130] >   pods:               110
	I0318 13:10:57.764704    2404 command_runner.go:130] > Allocatable:
	I0318 13:10:57.765246    2404 command_runner.go:130] >   cpu:                2
	I0318 13:10:57.765246    2404 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0318 13:10:57.765246    2404 command_runner.go:130] >   hugepages-2Mi:      0
	I0318 13:10:57.765246    2404 command_runner.go:130] >   memory:             2164268Ki
	I0318 13:10:57.765246    2404 command_runner.go:130] >   pods:               110
	I0318 13:10:57.765246    2404 command_runner.go:130] > System Info:
	I0318 13:10:57.765246    2404 command_runner.go:130] >   Machine ID:                 209753fe156d43e08ee40e815598ed17
	I0318 13:10:57.765246    2404 command_runner.go:130] >   System UUID:                fa19d46a-a3a2-9249-8c21-1edbfcedff01
	I0318 13:10:57.765246    2404 command_runner.go:130] >   Boot ID:                    0e15b7cf-29d6-40f7-ad78-fb04b10bea99
	I0318 13:10:57.765246    2404 command_runner.go:130] >   Kernel Version:             5.10.207
	I0318 13:10:57.765246    2404 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0318 13:10:57.765246    2404 command_runner.go:130] >   Operating System:           linux
	I0318 13:10:57.765246    2404 command_runner.go:130] >   Architecture:               amd64
	I0318 13:10:57.765246    2404 command_runner.go:130] >   Container Runtime Version:  docker://25.0.4
	I0318 13:10:57.765246    2404 command_runner.go:130] >   Kubelet Version:            v1.28.4
	I0318 13:10:57.765246    2404 command_runner.go:130] >   Kube-Proxy Version:         v1.28.4
	I0318 13:10:57.765246    2404 command_runner.go:130] > PodCIDR:                      10.244.1.0/24
	I0318 13:10:57.765451    2404 command_runner.go:130] > PodCIDRs:                     10.244.1.0/24
	I0318 13:10:57.765451    2404 command_runner.go:130] > Non-terminated Pods:          (3 in total)
	I0318 13:10:57.765451    2404 command_runner.go:130] >   Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0318 13:10:57.765451    2404 command_runner.go:130] >   ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	I0318 13:10:57.765451    2404 command_runner.go:130] >   default                     busybox-5b5d89c9d6-8btgf    0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	I0318 13:10:57.765451    2404 command_runner.go:130] >   kube-system                 kindnet-k5lpg               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      20m
	I0318 13:10:57.765451    2404 command_runner.go:130] >   kube-system                 kube-proxy-8bdmn            0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	I0318 13:10:57.765451    2404 command_runner.go:130] > Allocated resources:
	I0318 13:10:57.765451    2404 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0318 13:10:57.765451    2404 command_runner.go:130] >   Resource           Requests   Limits
	I0318 13:10:57.765551    2404 command_runner.go:130] >   --------           --------   ------
	I0318 13:10:57.765551    2404 command_runner.go:130] >   cpu                100m (5%)  100m (5%)
	I0318 13:10:57.765551    2404 command_runner.go:130] >   memory             50Mi (2%)  50Mi (2%)
	I0318 13:10:57.765551    2404 command_runner.go:130] >   ephemeral-storage  0 (0%)     0 (0%)
	I0318 13:10:57.765551    2404 command_runner.go:130] >   hugepages-2Mi      0 (0%)     0 (0%)
	I0318 13:10:57.765551    2404 command_runner.go:130] > Events:
	I0318 13:10:57.765551    2404 command_runner.go:130] >   Type    Reason                   Age                From             Message
	I0318 13:10:57.765551    2404 command_runner.go:130] >   ----    ------                   ----               ----             -------
	I0318 13:10:57.765551    2404 command_runner.go:130] >   Normal  Starting                 20m                kube-proxy       
	I0318 13:10:57.765551    2404 command_runner.go:130] >   Normal  NodeHasSufficientMemory  20m (x5 over 20m)  kubelet          Node multinode-894400-m02 status is now: NodeHasSufficientMemory
	I0318 13:10:57.765551    2404 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    20m (x5 over 20m)  kubelet          Node multinode-894400-m02 status is now: NodeHasNoDiskPressure
	I0318 13:10:57.765645    2404 command_runner.go:130] >   Normal  NodeHasSufficientPID     20m (x5 over 20m)  kubelet          Node multinode-894400-m02 status is now: NodeHasSufficientPID
	I0318 13:10:57.765645    2404 command_runner.go:130] >   Normal  RegisteredNode           20m                node-controller  Node multinode-894400-m02 event: Registered Node multinode-894400-m02 in Controller
	I0318 13:10:57.765645    2404 command_runner.go:130] >   Normal  NodeReady                20m                kubelet          Node multinode-894400-m02 status is now: NodeReady
	I0318 13:10:57.765645    2404 command_runner.go:130] >   Normal  RegisteredNode           56s                node-controller  Node multinode-894400-m02 event: Registered Node multinode-894400-m02 in Controller
	I0318 13:10:57.765645    2404 command_runner.go:130] >   Normal  NodeNotReady             16s                node-controller  Node multinode-894400-m02 status is now: NodeNotReady
	I0318 13:10:57.765645    2404 command_runner.go:130] > Name:               multinode-894400-m03
	I0318 13:10:57.765645    2404 command_runner.go:130] > Roles:              <none>
	I0318 13:10:57.765645    2404 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0318 13:10:57.765645    2404 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0318 13:10:57.765729    2404 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0318 13:10:57.765729    2404 command_runner.go:130] >                     kubernetes.io/hostname=multinode-894400-m03
	I0318 13:10:57.765729    2404 command_runner.go:130] >                     kubernetes.io/os=linux
	I0318 13:10:57.765729    2404 command_runner.go:130] >                     minikube.k8s.io/commit=16bdcbec856cf730004e5bed78d1b7625f13388a
	I0318 13:10:57.765867    2404 command_runner.go:130] >                     minikube.k8s.io/name=multinode-894400
	I0318 13:10:57.765885    2404 command_runner.go:130] >                     minikube.k8s.io/primary=false
	I0318 13:10:57.765885    2404 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_03_18T13_05_26_0700
	I0318 13:10:57.765885    2404 command_runner.go:130] >                     minikube.k8s.io/version=v1.32.0
	I0318 13:10:57.765885    2404 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0318 13:10:57.765885    2404 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0318 13:10:57.765885    2404 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0318 13:10:57.765885    2404 command_runner.go:130] > CreationTimestamp:  Mon, 18 Mar 2024 13:05:25 +0000
	I0318 13:10:57.765885    2404 command_runner.go:130] > Taints:             node.kubernetes.io/unreachable:NoExecute
	I0318 13:10:57.765885    2404 command_runner.go:130] >                     node.kubernetes.io/unreachable:NoSchedule
	I0318 13:10:57.765885    2404 command_runner.go:130] > Unschedulable:      false
	I0318 13:10:57.765885    2404 command_runner.go:130] > Lease:
	I0318 13:10:57.765885    2404 command_runner.go:130] >   HolderIdentity:  multinode-894400-m03
	I0318 13:10:57.766064    2404 command_runner.go:130] >   AcquireTime:     <unset>
	I0318 13:10:57.766064    2404 command_runner.go:130] >   RenewTime:       Mon, 18 Mar 2024 13:06:27 +0000
	I0318 13:10:57.766064    2404 command_runner.go:130] > Conditions:
	I0318 13:10:57.766064    2404 command_runner.go:130] >   Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	I0318 13:10:57.766064    2404 command_runner.go:130] >   ----             ------    -----------------                 ------------------                ------              -------
	I0318 13:10:57.766064    2404 command_runner.go:130] >   MemoryPressure   Unknown   Mon, 18 Mar 2024 13:05:34 +0000   Mon, 18 Mar 2024 13:07:11 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0318 13:10:57.766064    2404 command_runner.go:130] >   DiskPressure     Unknown   Mon, 18 Mar 2024 13:05:34 +0000   Mon, 18 Mar 2024 13:07:11 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0318 13:10:57.766064    2404 command_runner.go:130] >   PIDPressure      Unknown   Mon, 18 Mar 2024 13:05:34 +0000   Mon, 18 Mar 2024 13:07:11 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0318 13:10:57.766199    2404 command_runner.go:130] >   Ready            Unknown   Mon, 18 Mar 2024 13:05:34 +0000   Mon, 18 Mar 2024 13:07:11 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0318 13:10:57.766199    2404 command_runner.go:130] > Addresses:
	I0318 13:10:57.766199    2404 command_runner.go:130] >   InternalIP:  172.30.137.140
	I0318 13:10:57.766199    2404 command_runner.go:130] >   Hostname:    multinode-894400-m03
	I0318 13:10:57.766199    2404 command_runner.go:130] > Capacity:
	I0318 13:10:57.766199    2404 command_runner.go:130] >   cpu:                2
	I0318 13:10:57.766199    2404 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0318 13:10:57.766199    2404 command_runner.go:130] >   hugepages-2Mi:      0
	I0318 13:10:57.766285    2404 command_runner.go:130] >   memory:             2164268Ki
	I0318 13:10:57.766285    2404 command_runner.go:130] >   pods:               110
	I0318 13:10:57.766285    2404 command_runner.go:130] > Allocatable:
	I0318 13:10:57.766285    2404 command_runner.go:130] >   cpu:                2
	I0318 13:10:57.766319    2404 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0318 13:10:57.766319    2404 command_runner.go:130] >   hugepages-2Mi:      0
	I0318 13:10:57.766375    2404 command_runner.go:130] >   memory:             2164268Ki
	I0318 13:10:57.766375    2404 command_runner.go:130] >   pods:               110
	I0318 13:10:57.766422    2404 command_runner.go:130] > System Info:
	I0318 13:10:57.766440    2404 command_runner.go:130] >   Machine ID:                 f96e7421441b46c0a5836e2d53b26708
	I0318 13:10:57.766440    2404 command_runner.go:130] >   System UUID:                7dae14c5-92ae-d842-8ce6-c446c0352eb2
	I0318 13:10:57.766440    2404 command_runner.go:130] >   Boot ID:                    7ef4b157-1893-48d2-9b87-d5f210c11477
	I0318 13:10:57.766440    2404 command_runner.go:130] >   Kernel Version:             5.10.207
	I0318 13:10:57.766440    2404 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0318 13:10:57.766541    2404 command_runner.go:130] >   Operating System:           linux
	I0318 13:10:57.766541    2404 command_runner.go:130] >   Architecture:               amd64
	I0318 13:10:57.766541    2404 command_runner.go:130] >   Container Runtime Version:  docker://25.0.4
	I0318 13:10:57.766541    2404 command_runner.go:130] >   Kubelet Version:            v1.28.4
	I0318 13:10:57.766541    2404 command_runner.go:130] >   Kube-Proxy Version:         v1.28.4
	I0318 13:10:57.766541    2404 command_runner.go:130] > PodCIDR:                      10.244.3.0/24
	I0318 13:10:57.766541    2404 command_runner.go:130] > PodCIDRs:                     10.244.3.0/24
	I0318 13:10:57.766600    2404 command_runner.go:130] > Non-terminated Pods:          (2 in total)
	I0318 13:10:57.766600    2404 command_runner.go:130] >   Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0318 13:10:57.766600    2404 command_runner.go:130] >   ---------                   ----                ------------  ----------  ---------------  -------------  ---
	I0318 13:10:57.766600    2404 command_runner.go:130] >   kube-system                 kindnet-zv9tv       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      15m
	I0318 13:10:57.766684    2404 command_runner.go:130] >   kube-system                 kube-proxy-745w9    0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	I0318 13:10:57.766708    2404 command_runner.go:130] > Allocated resources:
	I0318 13:10:57.766708    2404 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0318 13:10:57.766708    2404 command_runner.go:130] >   Resource           Requests   Limits
	I0318 13:10:57.766708    2404 command_runner.go:130] >   --------           --------   ------
	I0318 13:10:57.766708    2404 command_runner.go:130] >   cpu                100m (5%)  100m (5%)
	I0318 13:10:57.766708    2404 command_runner.go:130] >   memory             50Mi (2%)  50Mi (2%)
	I0318 13:10:57.766708    2404 command_runner.go:130] >   ephemeral-storage  0 (0%)     0 (0%)
	I0318 13:10:57.766838    2404 command_runner.go:130] >   hugepages-2Mi      0 (0%)     0 (0%)
	I0318 13:10:57.766838    2404 command_runner.go:130] > Events:
	I0318 13:10:57.766879    2404 command_runner.go:130] >   Type    Reason                   Age                    From             Message
	I0318 13:10:57.766879    2404 command_runner.go:130] >   ----    ------                   ----                   ----             -------
	I0318 13:10:57.766909    2404 command_runner.go:130] >   Normal  Starting                 15m                    kube-proxy       
	I0318 13:10:57.766909    2404 command_runner.go:130] >   Normal  Starting                 5m29s                  kube-proxy       
	I0318 13:10:57.766909    2404 command_runner.go:130] >   Normal  NodeHasSufficientMemory  15m (x5 over 15m)      kubelet          Node multinode-894400-m03 status is now: NodeHasSufficientMemory
	I0318 13:10:57.766909    2404 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    15m (x5 over 15m)      kubelet          Node multinode-894400-m03 status is now: NodeHasNoDiskPressure
	I0318 13:10:57.766972    2404 command_runner.go:130] >   Normal  NodeHasSufficientPID     15m (x5 over 15m)      kubelet          Node multinode-894400-m03 status is now: NodeHasSufficientPID
	I0318 13:10:57.767002    2404 command_runner.go:130] >   Normal  NodeReady                15m                    kubelet          Node multinode-894400-m03 status is now: NodeReady
	I0318 13:10:57.767030    2404 command_runner.go:130] >   Normal  Starting                 5m32s                  kubelet          Starting kubelet.
	I0318 13:10:57.767030    2404 command_runner.go:130] >   Normal  NodeHasSufficientMemory  5m32s (x2 over 5m32s)  kubelet          Node multinode-894400-m03 status is now: NodeHasSufficientMemory
	I0318 13:10:57.767030    2404 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    5m32s (x2 over 5m32s)  kubelet          Node multinode-894400-m03 status is now: NodeHasNoDiskPressure
	I0318 13:10:57.767030    2404 command_runner.go:130] >   Normal  NodeHasSufficientPID     5m32s (x2 over 5m32s)  kubelet          Node multinode-894400-m03 status is now: NodeHasSufficientPID
	I0318 13:10:57.767030    2404 command_runner.go:130] >   Normal  NodeAllocatableEnforced  5m32s                  kubelet          Updated Node Allocatable limit across pods
	I0318 13:10:57.767030    2404 command_runner.go:130] >   Normal  RegisteredNode           5m31s                  node-controller  Node multinode-894400-m03 event: Registered Node multinode-894400-m03 in Controller
	I0318 13:10:57.767030    2404 command_runner.go:130] >   Normal  NodeReady                5m23s                  kubelet          Node multinode-894400-m03 status is now: NodeReady
	I0318 13:10:57.767030    2404 command_runner.go:130] >   Normal  NodeNotReady             3m46s                  node-controller  Node multinode-894400-m03 status is now: NodeNotReady
	I0318 13:10:57.767030    2404 command_runner.go:130] >   Normal  RegisteredNode           56s                    node-controller  Node multinode-894400-m03 event: Registered Node multinode-894400-m03 in Controller
	I0318 13:10:57.776599    2404 logs.go:123] Gathering logs for kube-controller-manager [7aa5cf4ec378] ...
	I0318 13:10:57.776599    2404 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7aa5cf4ec378"
	I0318 13:10:57.811794    2404 command_runner.go:130] ! I0318 12:47:22.447675       1 serving.go:348] Generated self-signed cert in-memory
	I0318 13:10:57.811794    2404 command_runner.go:130] ! I0318 12:47:22.964394       1 controllermanager.go:189] "Starting" version="v1.28.4"
	I0318 13:10:57.811794    2404 command_runner.go:130] ! I0318 12:47:22.964509       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 13:10:57.811794    2404 command_runner.go:130] ! I0318 12:47:22.966671       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0318 13:10:57.811794    2404 command_runner.go:130] ! I0318 12:47:22.967091       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0318 13:10:57.811794    2404 command_runner.go:130] ! I0318 12:47:22.968348       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0318 13:10:57.811794    2404 command_runner.go:130] ! I0318 12:47:22.969286       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0318 13:10:57.811794    2404 command_runner.go:130] ! I0318 12:47:27.391471       1 shared_informer.go:311] Waiting for caches to sync for tokens
	I0318 13:10:57.811794    2404 command_runner.go:130] ! I0318 12:47:27.423488       1 controllermanager.go:642] "Started controller" controller="garbage-collector-controller"
	I0318 13:10:57.811794    2404 command_runner.go:130] ! I0318 12:47:27.424256       1 garbagecollector.go:155] "Starting controller" controller="garbagecollector"
	I0318 13:10:57.811794    2404 command_runner.go:130] ! I0318 12:47:27.424289       1 shared_informer.go:311] Waiting for caches to sync for garbage collector
	I0318 13:10:57.811794    2404 command_runner.go:130] ! I0318 12:47:27.424374       1 graph_builder.go:294] "Running" component="GraphBuilder"
	I0318 13:10:57.811794    2404 command_runner.go:130] ! I0318 12:47:27.451725       1 controllermanager.go:642] "Started controller" controller="ephemeral-volume-controller"
	I0318 13:10:57.811794    2404 command_runner.go:130] ! I0318 12:47:27.451967       1 controller.go:169] "Starting ephemeral volume controller"
	I0318 13:10:57.811794    2404 command_runner.go:130] ! I0318 12:47:27.452425       1 shared_informer.go:311] Waiting for caches to sync for ephemeral
	I0318 13:10:57.811794    2404 command_runner.go:130] ! I0318 12:47:27.464873       1 controllermanager.go:642] "Started controller" controller="serviceaccount-controller"
	I0318 13:10:57.811794    2404 command_runner.go:130] ! I0318 12:47:27.465150       1 serviceaccounts_controller.go:111] "Starting service account controller"
	I0318 13:10:57.811794    2404 command_runner.go:130] ! I0318 12:47:27.465172       1 shared_informer.go:311] Waiting for caches to sync for service account
	I0318 13:10:57.811794    2404 command_runner.go:130] ! I0318 12:47:27.491949       1 shared_informer.go:318] Caches are synced for tokens
	I0318 13:10:57.811794    2404 command_runner.go:130] ! I0318 12:47:37.491900       1 range_allocator.go:111] "No Secondary Service CIDR provided. Skipping filtering out secondary service addresses"
	I0318 13:10:57.811794    2404 command_runner.go:130] ! I0318 12:47:37.492009       1 controllermanager.go:642] "Started controller" controller="node-ipam-controller"
	I0318 13:10:57.811794    2404 command_runner.go:130] ! I0318 12:47:37.492602       1 node_ipam_controller.go:162] "Starting ipam controller"
	I0318 13:10:57.811794    2404 command_runner.go:130] ! I0318 12:47:37.492659       1 shared_informer.go:311] Waiting for caches to sync for node
	I0318 13:10:57.811794    2404 command_runner.go:130] ! E0318 12:47:37.494780       1 core.go:213] "Failed to start cloud node lifecycle controller" err="no cloud provider provided"
	I0318 13:10:57.811794    2404 command_runner.go:130] ! I0318 12:47:37.494859       1 controllermanager.go:620] "Warning: skipping controller" controller="cloud-node-lifecycle-controller"
	I0318 13:10:57.811794    2404 command_runner.go:130] ! I0318 12:47:37.511992       1 controllermanager.go:642] "Started controller" controller="persistentvolume-binder-controller"
	I0318 13:10:57.811794    2404 command_runner.go:130] ! I0318 12:47:37.512162       1 pv_controller_base.go:319] "Starting persistent volume controller"
	I0318 13:10:57.811794    2404 command_runner.go:130] ! I0318 12:47:37.512576       1 shared_informer.go:311] Waiting for caches to sync for persistent volume
	I0318 13:10:57.811794    2404 command_runner.go:130] ! I0318 12:47:37.525022       1 controllermanager.go:642] "Started controller" controller="persistentvolume-protection-controller"
	I0318 13:10:57.811794    2404 command_runner.go:130] ! I0318 12:47:37.525273       1 pv_protection_controller.go:78] "Starting PV protection controller"
	I0318 13:10:57.811794    2404 command_runner.go:130] ! I0318 12:47:37.525287       1 shared_informer.go:311] Waiting for caches to sync for PV protection
	I0318 13:10:57.811794    2404 command_runner.go:130] ! I0318 12:47:37.540701       1 controllermanager.go:642] "Started controller" controller="endpointslice-controller"
	I0318 13:10:57.811794    2404 command_runner.go:130] ! I0318 12:47:37.540905       1 endpointslice_controller.go:264] "Starting endpoint slice controller"
	I0318 13:10:57.813103    2404 command_runner.go:130] ! I0318 12:47:37.540914       1 shared_informer.go:311] Waiting for caches to sync for endpoint_slice
	I0318 13:10:57.813436    2404 command_runner.go:130] ! I0318 12:47:37.562000       1 controllermanager.go:642] "Started controller" controller="replicaset-controller"
	I0318 13:10:57.813506    2404 command_runner.go:130] ! I0318 12:47:37.562256       1 replica_set.go:214] "Starting controller" name="replicaset"
	I0318 13:10:57.813506    2404 command_runner.go:130] ! I0318 12:47:37.562286       1 shared_informer.go:311] Waiting for caches to sync for ReplicaSet
	I0318 13:10:57.813506    2404 command_runner.go:130] ! I0318 12:47:37.574397       1 controllermanager.go:642] "Started controller" controller="persistentvolume-expander-controller"
	I0318 13:10:57.813506    2404 command_runner.go:130] ! I0318 12:47:37.574869       1 expand_controller.go:328] "Starting expand controller"
	I0318 13:10:57.815129    2404 command_runner.go:130] ! I0318 12:47:37.574937       1 shared_informer.go:311] Waiting for caches to sync for expand
	I0318 13:10:57.815129    2404 command_runner.go:130] ! I0318 12:47:37.587914       1 controllermanager.go:642] "Started controller" controller="clusterrole-aggregation-controller"
	I0318 13:10:57.815129    2404 command_runner.go:130] ! I0318 12:47:37.588166       1 clusterroleaggregation_controller.go:189] "Starting ClusterRoleAggregator controller"
	I0318 13:10:57.815129    2404 command_runner.go:130] ! I0318 12:47:37.588199       1 shared_informer.go:311] Waiting for caches to sync for ClusterRoleAggregator
	I0318 13:10:57.815129    2404 command_runner.go:130] ! I0318 12:47:37.609721       1 controllermanager.go:642] "Started controller" controller="daemonset-controller"
	I0318 13:10:57.815129    2404 command_runner.go:130] ! I0318 12:47:37.615354       1 daemon_controller.go:291] "Starting daemon sets controller"
	I0318 13:10:57.815129    2404 command_runner.go:130] ! I0318 12:47:37.615371       1 shared_informer.go:311] Waiting for caches to sync for daemon sets
	I0318 13:10:57.815129    2404 command_runner.go:130] ! I0318 12:47:37.624660       1 controllermanager.go:642] "Started controller" controller="ttl-controller"
	I0318 13:10:57.815129    2404 command_runner.go:130] ! I0318 12:47:37.624898       1 ttl_controller.go:124] "Starting TTL controller"
	I0318 13:10:57.815129    2404 command_runner.go:130] ! I0318 12:47:37.625063       1 shared_informer.go:311] Waiting for caches to sync for TTL
	I0318 13:10:57.815129    2404 command_runner.go:130] ! I0318 12:47:37.637461       1 controllermanager.go:642] "Started controller" controller="ttl-after-finished-controller"
	I0318 13:10:57.815129    2404 command_runner.go:130] ! I0318 12:47:37.637588       1 ttlafterfinished_controller.go:109] "Starting TTL after finished controller"
	I0318 13:10:57.815129    2404 command_runner.go:130] ! I0318 12:47:37.637699       1 shared_informer.go:311] Waiting for caches to sync for TTL after finished
	I0318 13:10:57.815129    2404 command_runner.go:130] ! I0318 12:47:37.649314       1 controllermanager.go:642] "Started controller" controller="deployment-controller"
	I0318 13:10:57.815129    2404 command_runner.go:130] ! I0318 12:47:37.650380       1 deployment_controller.go:168] "Starting controller" controller="deployment"
	I0318 13:10:57.815129    2404 command_runner.go:130] ! I0318 12:47:37.650462       1 shared_informer.go:311] Waiting for caches to sync for deployment
	I0318 13:10:57.815129    2404 command_runner.go:130] ! I0318 12:47:37.830447       1 controllermanager.go:642] "Started controller" controller="disruption-controller"
	I0318 13:10:57.815129    2404 command_runner.go:130] ! I0318 12:47:37.830565       1 disruption.go:433] "Sending events to api server."
	I0318 13:10:57.815129    2404 command_runner.go:130] ! I0318 12:47:37.830686       1 disruption.go:444] "Starting disruption controller"
	I0318 13:10:57.815129    2404 command_runner.go:130] ! I0318 12:47:37.830725       1 shared_informer.go:311] Waiting for caches to sync for disruption
	I0318 13:10:57.815129    2404 command_runner.go:130] ! I0318 12:47:37.985254       1 controllermanager.go:642] "Started controller" controller="endpoints-controller"
	I0318 13:10:57.815129    2404 command_runner.go:130] ! I0318 12:47:37.985453       1 endpoints_controller.go:174] "Starting endpoint controller"
	I0318 13:10:57.815129    2404 command_runner.go:130] ! I0318 12:47:37.985784       1 shared_informer.go:311] Waiting for caches to sync for endpoint
	I0318 13:10:57.815129    2404 command_runner.go:130] ! I0318 12:47:38.288543       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="endpoints"
	I0318 13:10:57.815129    2404 command_runner.go:130] ! I0318 12:47:38.289132       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="poddisruptionbudgets.policy"
	I0318 13:10:57.815129    2404 command_runner.go:130] ! I0318 12:47:38.289248       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="roles.rbac.authorization.k8s.io"
	I0318 13:10:57.815129    2404 command_runner.go:130] ! I0318 12:47:38.289520       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="statefulsets.apps"
	I0318 13:10:57.815129    2404 command_runner.go:130] ! I0318 12:47:38.289722       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="leases.coordination.k8s.io"
	I0318 13:10:57.815129    2404 command_runner.go:130] ! I0318 12:47:38.289927       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="limitranges"
	I0318 13:10:57.815129    2404 command_runner.go:130] ! I0318 12:47:38.290240       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="serviceaccounts"
	I0318 13:10:57.815129    2404 command_runner.go:130] ! I0318 12:47:38.290340       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="daemonsets.apps"
	I0318 13:10:57.815827    2404 command_runner.go:130] ! I0318 12:47:38.290418       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="jobs.batch"
	I0318 13:10:57.815827    2404 command_runner.go:130] ! I0318 12:47:38.290502       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="ingresses.networking.k8s.io"
	I0318 13:10:57.815827    2404 command_runner.go:130] ! I0318 12:47:38.290550       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="rolebindings.rbac.authorization.k8s.io"
	I0318 13:10:57.815827    2404 command_runner.go:130] ! I0318 12:47:38.290591       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="endpointslices.discovery.k8s.io"
	I0318 13:10:57.815827    2404 command_runner.go:130] ! I0318 12:47:38.290851       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="podtemplates"
	I0318 13:10:57.815827    2404 command_runner.go:130] ! I0318 12:47:38.291026       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="controllerrevisions.apps"
	I0318 13:10:57.816005    2404 command_runner.go:130] ! I0318 12:47:38.291117       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="replicasets.apps"
	I0318 13:10:57.816005    2404 command_runner.go:130] ! I0318 12:47:38.291149       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="cronjobs.batch"
	I0318 13:10:57.816005    2404 command_runner.go:130] ! I0318 12:47:38.291277       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="horizontalpodautoscalers.autoscaling"
	I0318 13:10:57.816005    2404 command_runner.go:130] ! I0318 12:47:38.291315       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="deployments.apps"
	I0318 13:10:57.816005    2404 command_runner.go:130] ! I0318 12:47:38.291392       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="csistoragecapacities.storage.k8s.io"
	I0318 13:10:57.816109    2404 command_runner.go:130] ! I0318 12:47:38.291423       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="networkpolicies.networking.k8s.io"
	I0318 13:10:57.816109    2404 command_runner.go:130] ! I0318 12:47:38.291465       1 controllermanager.go:642] "Started controller" controller="resourcequota-controller"
	I0318 13:10:57.816109    2404 command_runner.go:130] ! I0318 12:47:38.291591       1 resource_quota_controller.go:294] "Starting resource quota controller"
	I0318 13:10:57.816181    2404 command_runner.go:130] ! I0318 12:47:38.291607       1 shared_informer.go:311] Waiting for caches to sync for resource quota
	I0318 13:10:57.816181    2404 command_runner.go:130] ! I0318 12:47:38.291720       1 resource_quota_monitor.go:305] "QuotaMonitor running"
	I0318 13:10:57.816181    2404 command_runner.go:130] ! I0318 12:47:38.436018       1 controllermanager.go:642] "Started controller" controller="job-controller"
	I0318 13:10:57.816181    2404 command_runner.go:130] ! I0318 12:47:38.436093       1 job_controller.go:226] "Starting job controller"
	I0318 13:10:57.816181    2404 command_runner.go:130] ! I0318 12:47:38.436112       1 shared_informer.go:311] Waiting for caches to sync for job
	I0318 13:10:57.816271    2404 command_runner.go:130] ! I0318 12:47:38.731490       1 controllermanager.go:642] "Started controller" controller="horizontal-pod-autoscaler-controller"
	I0318 13:10:57.816271    2404 command_runner.go:130] ! I0318 12:47:38.731606       1 horizontal.go:200] "Starting HPA controller"
	I0318 13:10:57.816271    2404 command_runner.go:130] ! I0318 12:47:38.731671       1 shared_informer.go:311] Waiting for caches to sync for HPA
	I0318 13:10:57.816371    2404 command_runner.go:130] ! I0318 12:47:38.886224       1 controllermanager.go:642] "Started controller" controller="certificatesigningrequest-approving-controller"
	I0318 13:10:57.816438    2404 command_runner.go:130] ! I0318 12:47:38.886401       1 certificate_controller.go:115] "Starting certificate controller" name="csrapproving"
	I0318 13:10:57.816438    2404 command_runner.go:130] ! I0318 12:47:38.886705       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrapproving
	I0318 13:10:57.816473    2404 command_runner.go:130] ! I0318 12:47:38.930325       1 controllermanager.go:642] "Started controller" controller="certificatesigningrequest-cleaner-controller"
	I0318 13:10:57.816473    2404 command_runner.go:130] ! I0318 12:47:38.930354       1 core.go:228] "Warning: configure-cloud-routes is set, but no cloud provider specified. Will not configure cloud provider routes."
	I0318 13:10:57.816617    2404 command_runner.go:130] ! I0318 12:47:38.930362       1 controllermanager.go:620] "Warning: skipping controller" controller="node-route-controller"
	I0318 13:10:57.816654    2404 command_runner.go:130] ! I0318 12:47:38.930398       1 cleaner.go:83] "Starting CSR cleaner controller"
	I0318 13:10:57.816654    2404 command_runner.go:130] ! I0318 12:47:39.085782       1 controllermanager.go:642] "Started controller" controller="persistentvolume-attach-detach-controller"
	I0318 13:10:57.816654    2404 command_runner.go:130] ! I0318 12:47:39.085905       1 attach_detach_controller.go:337] "Starting attach detach controller"
	I0318 13:10:57.816654    2404 command_runner.go:130] ! I0318 12:47:39.085920       1 shared_informer.go:311] Waiting for caches to sync for attach detach
	I0318 13:10:57.816720    2404 command_runner.go:130] ! I0318 12:47:39.236755       1 controllermanager.go:642] "Started controller" controller="endpointslice-mirroring-controller"
	I0318 13:10:57.816720    2404 command_runner.go:130] ! I0318 12:47:39.237434       1 endpointslicemirroring_controller.go:223] "Starting EndpointSliceMirroring controller"
	I0318 13:10:57.816720    2404 command_runner.go:130] ! I0318 12:47:39.237522       1 shared_informer.go:311] Waiting for caches to sync for endpoint_slice_mirroring
	I0318 13:10:57.816720    2404 command_runner.go:130] ! I0318 12:47:39.390953       1 controllermanager.go:642] "Started controller" controller="replicationcontroller-controller"
	I0318 13:10:57.816720    2404 command_runner.go:130] ! I0318 12:47:39.391480       1 replica_set.go:214] "Starting controller" name="replicationcontroller"
	I0318 13:10:57.816720    2404 command_runner.go:130] ! I0318 12:47:39.391646       1 shared_informer.go:311] Waiting for caches to sync for ReplicationController
	I0318 13:10:57.816720    2404 command_runner.go:130] ! I0318 12:47:39.535570       1 controllermanager.go:642] "Started controller" controller="cronjob-controller"
	I0318 13:10:57.816815    2404 command_runner.go:130] ! I0318 12:47:39.536071       1 cronjob_controllerv2.go:139] "Starting cronjob controller v2"
	I0318 13:10:57.816815    2404 command_runner.go:130] ! I0318 12:47:39.536172       1 shared_informer.go:311] Waiting for caches to sync for cronjob
	I0318 13:10:57.816815    2404 command_runner.go:130] ! I0318 12:47:39.582776       1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-kubelet-serving"
	I0318 13:10:57.816815    2404 command_runner.go:130] ! I0318 12:47:39.582876       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I0318 13:10:57.816928    2404 command_runner.go:130] ! I0318 12:47:39.582912       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0318 13:10:57.816967    2404 command_runner.go:130] ! I0318 12:47:39.584602       1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-kubelet-client"
	I0318 13:10:57.816967    2404 command_runner.go:130] ! I0318 12:47:39.584677       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I0318 13:10:57.816967    2404 command_runner.go:130] ! I0318 12:47:39.584724       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0318 13:10:57.817036    2404 command_runner.go:130] ! I0318 12:47:39.585974       1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-kube-apiserver-client"
	I0318 13:10:57.817036    2404 command_runner.go:130] ! I0318 12:47:39.585990       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I0318 13:10:57.817036    2404 command_runner.go:130] ! I0318 12:47:39.586012       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0318 13:10:57.817036    2404 command_runner.go:130] ! I0318 12:47:39.586910       1 controllermanager.go:642] "Started controller" controller="certificatesigningrequest-signing-controller"
	I0318 13:10:57.817115    2404 command_runner.go:130] ! I0318 12:47:39.586968       1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-legacy-unknown"
	I0318 13:10:57.817115    2404 command_runner.go:130] ! I0318 12:47:39.586975       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0318 13:10:57.817115    2404 command_runner.go:130] ! I0318 12:47:39.587044       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0318 13:10:57.817115    2404 command_runner.go:130] ! I0318 12:47:39.735265       1 controllermanager.go:642] "Started controller" controller="token-cleaner-controller"
	I0318 13:10:57.817115    2404 command_runner.go:130] ! I0318 12:47:39.735467       1 tokencleaner.go:112] "Starting token cleaner controller"
	I0318 13:10:57.817193    2404 command_runner.go:130] ! I0318 12:47:39.735494       1 shared_informer.go:311] Waiting for caches to sync for token_cleaner
	I0318 13:10:57.817193    2404 command_runner.go:130] ! I0318 12:47:39.735502       1 shared_informer.go:318] Caches are synced for token_cleaner
	I0318 13:10:57.817193    2404 command_runner.go:130] ! I0318 12:47:39.783594       1 node_lifecycle_controller.go:431] "Controller will reconcile labels"
	I0318 13:10:57.817193    2404 command_runner.go:130] ! I0318 12:47:39.783722       1 controllermanager.go:642] "Started controller" controller="node-lifecycle-controller"
	I0318 13:10:57.817193    2404 command_runner.go:130] ! I0318 12:47:39.783841       1 node_lifecycle_controller.go:465] "Sending events to api server"
	I0318 13:10:57.817301    2404 command_runner.go:130] ! I0318 12:47:39.783860       1 node_lifecycle_controller.go:476] "Starting node controller"
	I0318 13:10:57.817301    2404 command_runner.go:130] ! I0318 12:47:39.784031       1 shared_informer.go:311] Waiting for caches to sync for taint
	I0318 13:10:57.817301    2404 command_runner.go:130] ! E0318 12:47:39.937206       1 core.go:92] "Failed to start service controller" err="WARNING: no cloud provider provided, services of type LoadBalancer will fail"
	I0318 13:10:57.817301    2404 command_runner.go:130] ! I0318 12:47:39.937229       1 controllermanager.go:620] "Warning: skipping controller" controller="service-lb-controller"
	I0318 13:10:57.817404    2404 command_runner.go:130] ! I0318 12:47:40.089508       1 controllermanager.go:642] "Started controller" controller="persistentvolumeclaim-protection-controller"
	I0318 13:10:57.817404    2404 command_runner.go:130] ! I0318 12:47:40.089701       1 pvc_protection_controller.go:102] "Starting PVC protection controller"
	I0318 13:10:57.817404    2404 command_runner.go:130] ! I0318 12:47:40.089793       1 shared_informer.go:311] Waiting for caches to sync for PVC protection
	I0318 13:10:57.817472    2404 command_runner.go:130] ! I0318 12:47:40.235860       1 controllermanager.go:642] "Started controller" controller="root-ca-certificate-publisher-controller"
	I0318 13:10:57.817472    2404 command_runner.go:130] ! I0318 12:47:40.235977       1 publisher.go:102] "Starting root CA cert publisher controller"
	I0318 13:10:57.817472    2404 command_runner.go:130] ! I0318 12:47:40.236063       1 shared_informer.go:311] Waiting for caches to sync for crt configmap
	I0318 13:10:57.817472    2404 command_runner.go:130] ! I0318 12:47:40.386545       1 controllermanager.go:642] "Started controller" controller="pod-garbage-collector-controller"
	I0318 13:10:57.817472    2404 command_runner.go:130] ! I0318 12:47:40.386692       1 gc_controller.go:101] "Starting GC controller"
	I0318 13:10:57.817472    2404 command_runner.go:130] ! I0318 12:47:40.386704       1 shared_informer.go:311] Waiting for caches to sync for GC
	I0318 13:10:57.817637    2404 command_runner.go:130] ! I0318 12:47:40.644175       1 controllermanager.go:642] "Started controller" controller="namespace-controller"
	I0318 13:10:57.817637    2404 command_runner.go:130] ! I0318 12:47:40.644284       1 namespace_controller.go:197] "Starting namespace controller"
	I0318 13:10:57.817637    2404 command_runner.go:130] ! I0318 12:47:40.644293       1 shared_informer.go:311] Waiting for caches to sync for namespace
	I0318 13:10:57.817637    2404 command_runner.go:130] ! I0318 12:47:40.784991       1 controllermanager.go:642] "Started controller" controller="statefulset-controller"
	I0318 13:10:57.817637    2404 command_runner.go:130] ! I0318 12:47:40.785464       1 stateful_set.go:161] "Starting stateful set controller"
	I0318 13:10:57.817703    2404 command_runner.go:130] ! I0318 12:47:40.785492       1 shared_informer.go:311] Waiting for caches to sync for stateful set
	I0318 13:10:57.817703    2404 command_runner.go:130] ! I0318 12:47:40.936785       1 controllermanager.go:642] "Started controller" controller="bootstrap-signer-controller"
	I0318 13:10:57.817703    2404 command_runner.go:130] ! I0318 12:47:40.939800       1 shared_informer.go:311] Waiting for caches to sync for bootstrap_signer
	I0318 13:10:57.817770    2404 command_runner.go:130] ! I0318 12:47:40.947184       1 shared_informer.go:311] Waiting for caches to sync for resource quota
	I0318 13:10:57.817770    2404 command_runner.go:130] ! I0318 12:47:40.968017       1 shared_informer.go:318] Caches are synced for service account
	I0318 13:10:57.817770    2404 command_runner.go:130] ! I0318 12:47:40.971773       1 shared_informer.go:311] Waiting for caches to sync for garbage collector
	I0318 13:10:57.817770    2404 command_runner.go:130] ! I0318 12:47:40.976691       1 shared_informer.go:318] Caches are synced for expand
	I0318 13:10:57.817770    2404 command_runner.go:130] ! I0318 12:47:40.986014       1 shared_informer.go:318] Caches are synced for endpoint
	I0318 13:10:57.817770    2404 command_runner.go:130] ! I0318 12:47:40.995675       1 shared_informer.go:318] Caches are synced for PVC protection
	I0318 13:10:57.817836    2404 command_runner.go:130] ! I0318 12:47:41.009015       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-894400\" does not exist"
	I0318 13:10:57.817836    2404 command_runner.go:130] ! I0318 12:47:41.012612       1 shared_informer.go:318] Caches are synced for persistent volume
	I0318 13:10:57.817836    2404 command_runner.go:130] ! I0318 12:47:41.016383       1 shared_informer.go:318] Caches are synced for daemon sets
	I0318 13:10:57.817836    2404 command_runner.go:130] ! I0318 12:47:41.025198       1 shared_informer.go:318] Caches are synced for TTL
	I0318 13:10:57.817921    2404 command_runner.go:130] ! I0318 12:47:41.025462       1 shared_informer.go:318] Caches are synced for PV protection
	I0318 13:10:57.817921    2404 command_runner.go:130] ! I0318 12:47:41.032086       1 shared_informer.go:318] Caches are synced for HPA
	I0318 13:10:57.817921    2404 command_runner.go:130] ! I0318 12:47:41.036463       1 shared_informer.go:318] Caches are synced for job
	I0318 13:10:57.817921    2404 command_runner.go:130] ! I0318 12:47:41.036622       1 shared_informer.go:318] Caches are synced for cronjob
	I0318 13:10:57.817921    2404 command_runner.go:130] ! I0318 12:47:41.036726       1 shared_informer.go:318] Caches are synced for crt configmap
	I0318 13:10:57.817992    2404 command_runner.go:130] ! I0318 12:47:41.037735       1 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring
	I0318 13:10:57.817992    2404 command_runner.go:130] ! I0318 12:47:41.037818       1 shared_informer.go:318] Caches are synced for TTL after finished
	I0318 13:10:57.817992    2404 command_runner.go:130] ! I0318 12:47:41.040360       1 shared_informer.go:318] Caches are synced for bootstrap_signer
	I0318 13:10:57.818064    2404 command_runner.go:130] ! I0318 12:47:41.041850       1 shared_informer.go:318] Caches are synced for endpoint_slice
	I0318 13:10:57.818064    2404 command_runner.go:130] ! I0318 12:47:41.045379       1 shared_informer.go:318] Caches are synced for namespace
	I0318 13:10:57.818064    2404 command_runner.go:130] ! I0318 12:47:41.051530       1 shared_informer.go:318] Caches are synced for deployment
	I0318 13:10:57.818064    2404 command_runner.go:130] ! I0318 12:47:41.053151       1 shared_informer.go:318] Caches are synced for ephemeral
	I0318 13:10:57.818064    2404 command_runner.go:130] ! I0318 12:47:41.063027       1 shared_informer.go:318] Caches are synced for ReplicaSet
	I0318 13:10:57.818064    2404 command_runner.go:130] ! I0318 12:47:41.084212       1 shared_informer.go:318] Caches are synced for taint
	I0318 13:10:57.818135    2404 command_runner.go:130] ! I0318 12:47:41.084612       1 node_lifecycle_controller.go:1225] "Initializing eviction metric for zone" zone=""
	I0318 13:10:57.818135    2404 command_runner.go:130] ! I0318 12:47:41.087983       1 taint_manager.go:205] "Starting NoExecuteTaintManager"
	I0318 13:10:57.818135    2404 command_runner.go:130] ! I0318 12:47:41.088464       1 taint_manager.go:210] "Sending events to api server"
	I0318 13:10:57.818345    2404 command_runner.go:130] ! I0318 12:47:41.089485       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-894400"
	I0318 13:10:57.818429    2404 command_runner.go:130] ! I0318 12:47:41.089526       1 node_lifecycle_controller.go:1029] "Controller detected that all Nodes are not-Ready. Entering master disruption mode"
	I0318 13:10:57.818429    2404 command_runner.go:130] ! I0318 12:47:41.089552       1 shared_informer.go:318] Caches are synced for attach detach
	I0318 13:10:57.818429    2404 command_runner.go:130] ! I0318 12:47:41.089942       1 shared_informer.go:318] Caches are synced for GC
	I0318 13:10:57.818513    2404 command_runner.go:130] ! I0318 12:47:41.090031       1 event.go:307] "Event occurred" object="multinode-894400" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-894400 event: Registered Node multinode-894400 in Controller"
	I0318 13:10:57.818601    2404 command_runner.go:130] ! I0318 12:47:41.090167       1 shared_informer.go:318] Caches are synced for ClusterRoleAggregator
	I0318 13:10:57.818669    2404 command_runner.go:130] ! I0318 12:47:41.090848       1 shared_informer.go:318] Caches are synced for stateful set
	I0318 13:10:57.818669    2404 command_runner.go:130] ! I0318 12:47:41.092093       1 shared_informer.go:318] Caches are synced for ReplicationController
	I0318 13:10:57.818669    2404 command_runner.go:130] ! I0318 12:47:41.092684       1 shared_informer.go:318] Caches are synced for node
	I0318 13:10:57.818669    2404 command_runner.go:130] ! I0318 12:47:41.093255       1 range_allocator.go:174] "Sending events to api server"
	I0318 13:10:57.818669    2404 command_runner.go:130] ! I0318 12:47:41.093537       1 range_allocator.go:178] "Starting range CIDR allocator"
	I0318 13:10:57.818738    2404 command_runner.go:130] ! I0318 12:47:41.093851       1 shared_informer.go:311] Waiting for caches to sync for cidrallocator
	I0318 13:10:57.818738    2404 command_runner.go:130] ! I0318 12:47:41.093958       1 shared_informer.go:318] Caches are synced for cidrallocator
	I0318 13:10:57.818738    2404 command_runner.go:130] ! I0318 12:47:41.119414       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-894400" podCIDRs=["10.244.0.0/24"]
	I0318 13:10:57.818738    2404 command_runner.go:130] ! I0318 12:47:41.148134       1 shared_informer.go:318] Caches are synced for resource quota
	I0318 13:10:57.818738    2404 command_runner.go:130] ! I0318 12:47:41.183853       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-serving
	I0318 13:10:57.818809    2404 command_runner.go:130] ! I0318 12:47:41.184949       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-client
	I0318 13:10:57.818809    2404 command_runner.go:130] ! I0318 12:47:41.186043       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0318 13:10:57.818809    2404 command_runner.go:130] ! I0318 12:47:41.187192       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-legacy-unknown
	I0318 13:10:57.818809    2404 command_runner.go:130] ! I0318 12:47:41.187229       1 shared_informer.go:318] Caches are synced for certificate-csrapproving
	I0318 13:10:57.818933    2404 command_runner.go:130] ! I0318 12:47:41.192066       1 shared_informer.go:318] Caches are synced for resource quota
	I0318 13:10:57.818933    2404 command_runner.go:130] ! I0318 12:47:41.233781       1 shared_informer.go:318] Caches are synced for disruption
	I0318 13:10:57.818933    2404 command_runner.go:130] ! I0318 12:47:41.572914       1 shared_informer.go:318] Caches are synced for garbage collector
	I0318 13:10:57.818933    2404 command_runner.go:130] ! I0318 12:47:41.612936       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-mc5tv"
	I0318 13:10:57.819040    2404 command_runner.go:130] ! I0318 12:47:41.615780       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-hhsxh"
	I0318 13:10:57.819040    2404 command_runner.go:130] ! I0318 12:47:41.625871       1 shared_informer.go:318] Caches are synced for garbage collector
	I0318 13:10:57.819040    2404 command_runner.go:130] ! I0318 12:47:41.626335       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0318 13:10:57.819116    2404 command_runner.go:130] ! I0318 12:47:41.893141       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5dd5756b68 to 2"
	I0318 13:10:57.819116    2404 command_runner.go:130] ! I0318 12:47:42.112244       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-vl6jr"
	I0318 13:10:57.819206    2404 command_runner.go:130] ! I0318 12:47:42.148022       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-456tm"
	I0318 13:10:57.819206    2404 command_runner.go:130] ! I0318 12:47:42.181940       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="289.6659ms"
	I0318 13:10:57.819206    2404 command_runner.go:130] ! I0318 12:47:42.245823       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="63.840303ms"
	I0318 13:10:57.819206    2404 command_runner.go:130] ! I0318 12:47:42.246151       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="107.996µs"
	I0318 13:10:57.819297    2404 command_runner.go:130] ! I0318 12:47:42.470958       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I0318 13:10:57.819297    2404 command_runner.go:130] ! I0318 12:47:42.530265       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-vl6jr"
	I0318 13:10:57.819297    2404 command_runner.go:130] ! I0318 12:47:42.551794       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="82.491503ms"
	I0318 13:10:57.819297    2404 command_runner.go:130] ! I0318 12:47:42.587026       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="35.184179ms"
	I0318 13:10:57.819297    2404 command_runner.go:130] ! I0318 12:47:42.587126       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="64.497µs"
	I0318 13:10:57.819427    2404 command_runner.go:130] ! I0318 12:47:52.958102       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="163.297µs"
	I0318 13:10:57.819427    2404 command_runner.go:130] ! I0318 12:47:52.991751       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="32.399µs"
	I0318 13:10:57.819427    2404 command_runner.go:130] ! I0318 12:47:54.194916       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="59.289µs"
	I0318 13:10:57.819427    2404 command_runner.go:130] ! I0318 12:47:55.238088       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="27.595936ms"
	I0318 13:10:57.819502    2404 command_runner.go:130] ! I0318 12:47:55.238222       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="45.592µs"
	I0318 13:10:57.819502    2404 command_runner.go:130] ! I0318 12:47:56.090728       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I0318 13:10:57.819502    2404 command_runner.go:130] ! I0318 12:50:34.419485       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-894400-m02\" does not exist"
	I0318 13:10:57.819608    2404 command_runner.go:130] ! I0318 12:50:34.437576       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-894400-m02" podCIDRs=["10.244.1.0/24"]
	I0318 13:10:57.819608    2404 command_runner.go:130] ! I0318 12:50:34.454919       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-8bdmn"
	I0318 13:10:57.819608    2404 command_runner.go:130] ! I0318 12:50:34.479103       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-k5lpg"
	I0318 13:10:57.819678    2404 command_runner.go:130] ! I0318 12:50:36.121925       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-894400-m02"
	I0318 13:10:57.819678    2404 command_runner.go:130] ! I0318 12:50:36.122368       1 event.go:307] "Event occurred" object="multinode-894400-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-894400-m02 event: Registered Node multinode-894400-m02 in Controller"
	I0318 13:10:57.819678    2404 command_runner.go:130] ! I0318 12:50:52.539955       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-894400-m02"
	I0318 13:10:57.819750    2404 command_runner.go:130] ! I0318 12:51:17.964827       1 event.go:307] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-5b5d89c9d6 to 2"
	I0318 13:10:57.819750    2404 command_runner.go:130] ! I0318 12:51:17.986964       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5b5d89c9d6-8btgf"
	I0318 13:10:57.819836    2404 command_runner.go:130] ! I0318 12:51:18.004592       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5b5d89c9d6-c2997"
	I0318 13:10:57.819836    2404 command_runner.go:130] ! I0318 12:51:18.026894       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="62.79508ms"
	I0318 13:10:57.819836    2404 command_runner.go:130] ! I0318 12:51:18.045074       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="17.513513ms"
	I0318 13:10:57.819836    2404 command_runner.go:130] ! I0318 12:51:18.046404       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="36.101µs"
	I0318 13:10:57.819836    2404 command_runner.go:130] ! I0318 12:51:18.054157       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="337.914µs"
	I0318 13:10:57.819945    2404 command_runner.go:130] ! I0318 12:51:18.060516       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="26.701µs"
	I0318 13:10:57.819945    2404 command_runner.go:130] ! I0318 12:51:20.804047       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="10.125602ms"
	I0318 13:10:57.819945    2404 command_runner.go:130] ! I0318 12:51:20.804333       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="159.502µs"
	I0318 13:10:57.819945    2404 command_runner.go:130] ! I0318 12:51:21.064706       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="11.788417ms"
	I0318 13:10:57.819945    2404 command_runner.go:130] ! I0318 12:51:21.065229       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="82.401µs"
	I0318 13:10:57.819945    2404 command_runner.go:130] ! I0318 12:55:05.793350       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-894400-m03\" does not exist"
	I0318 13:10:57.819945    2404 command_runner.go:130] ! I0318 12:55:05.797095       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-894400-m02"
	I0318 13:10:57.819945    2404 command_runner.go:130] ! I0318 12:55:05.823205       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-zv9tv"
	I0318 13:10:57.819945    2404 command_runner.go:130] ! I0318 12:55:05.835101       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-894400-m03" podCIDRs=["10.244.2.0/24"]
	I0318 13:10:57.819945    2404 command_runner.go:130] ! I0318 12:55:05.835149       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-745w9"
	I0318 13:10:57.819945    2404 command_runner.go:130] ! I0318 12:55:06.188986       1 event.go:307] "Event occurred" object="multinode-894400-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-894400-m03 event: Registered Node multinode-894400-m03 in Controller"
	I0318 13:10:57.819945    2404 command_runner.go:130] ! I0318 12:55:06.188988       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-894400-m03"
	I0318 13:10:57.819945    2404 command_runner.go:130] ! I0318 12:55:23.671742       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-894400-m02"
	I0318 13:10:57.819945    2404 command_runner.go:130] ! I0318 13:02:46.325539       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-894400-m02"
	I0318 13:10:57.819945    2404 command_runner.go:130] ! I0318 13:02:46.325935       1 event.go:307] "Event occurred" object="multinode-894400-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-894400-m03 status is now: NodeNotReady"
	I0318 13:10:57.819945    2404 command_runner.go:130] ! I0318 13:02:46.344510       1 event.go:307] "Event occurred" object="kube-system/kindnet-zv9tv" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0318 13:10:57.819945    2404 command_runner.go:130] ! I0318 13:02:46.368811       1 event.go:307] "Event occurred" object="kube-system/kube-proxy-745w9" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0318 13:10:57.819945    2404 command_runner.go:130] ! I0318 13:05:19.649225       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-894400-m02"
	I0318 13:10:57.819945    2404 command_runner.go:130] ! I0318 13:05:21.403124       1 event.go:307] "Event occurred" object="multinode-894400-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RemovingNode" message="Node multinode-894400-m03 event: Removing Node multinode-894400-m03 from Controller"
	I0318 13:10:57.819945    2404 command_runner.go:130] ! I0318 13:05:25.832056       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-894400-m03\" does not exist"
	I0318 13:10:57.819945    2404 command_runner.go:130] ! I0318 13:05:25.832348       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-894400-m02"
	I0318 13:10:57.819945    2404 command_runner.go:130] ! I0318 13:05:25.841443       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-894400-m03" podCIDRs=["10.244.3.0/24"]
	I0318 13:10:57.819945    2404 command_runner.go:130] ! I0318 13:05:26.404299       1 event.go:307] "Event occurred" object="multinode-894400-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-894400-m03 event: Registered Node multinode-894400-m03 in Controller"
	I0318 13:10:57.819945    2404 command_runner.go:130] ! I0318 13:05:34.080951       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-894400-m02"
	I0318 13:10:57.820535    2404 command_runner.go:130] ! I0318 13:07:11.961036       1 event.go:307] "Event occurred" object="multinode-894400-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-894400-m03 status is now: NodeNotReady"
	I0318 13:10:57.820535    2404 command_runner.go:130] ! I0318 13:07:11.961077       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-894400-m02"
	I0318 13:10:57.820535    2404 command_runner.go:130] ! I0318 13:07:12.051526       1 event.go:307] "Event occurred" object="kube-system/kindnet-zv9tv" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0318 13:10:57.820535    2404 command_runner.go:130] ! I0318 13:07:12.098168       1 event.go:307] "Event occurred" object="kube-system/kube-proxy-745w9" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0318 13:10:57.839272    2404 logs.go:123] Gathering logs for dmesg ...
	I0318 13:10:57.839272    2404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:10:57.860537    2404 command_runner.go:130] > [Mar18 13:08] You have booted with nomodeset. This means your GPU drivers are DISABLED
	I0318 13:10:57.860580    2404 command_runner.go:130] > [  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	I0318 13:10:57.860580    2404 command_runner.go:130] > [  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	I0318 13:10:57.860580    2404 command_runner.go:130] > [  +0.127438] RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
	I0318 13:10:57.860580    2404 command_runner.go:130] > [  +0.022457] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	I0318 13:10:57.860580    2404 command_runner.go:130] > [  +0.000000] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	I0318 13:10:57.860724    2404 command_runner.go:130] > [  +0.000000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	I0318 13:10:57.860724    2404 command_runner.go:130] > [  +0.054196] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	I0318 13:10:57.860724    2404 command_runner.go:130] > [  +0.018424] * Found PM-Timer Bug on the chipset. Due to workarounds for a bug,
	I0318 13:10:57.860764    2404 command_runner.go:130] >               * this clock source is slow. Consider trying other clock sources
	I0318 13:10:57.860764    2404 command_runner.go:130] > [  +4.800453] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	I0318 13:10:57.860812    2404 command_runner.go:130] > [  +1.267636] psmouse serio1: trackpoint: failed to get extended button data, assuming 3 buttons
	I0318 13:10:57.860812    2404 command_runner.go:130] > [  +1.056053] systemd-fstab-generator[113]: Ignoring "noauto" option for root device
	I0318 13:10:57.860812    2404 command_runner.go:130] > [  +6.778211] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	I0318 13:10:57.860848    2404 command_runner.go:130] > [  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	I0318 13:10:57.860848    2404 command_runner.go:130] > [  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	I0318 13:10:57.860848    2404 command_runner.go:130] > [Mar18 13:09] systemd-fstab-generator[643]: Ignoring "noauto" option for root device
	I0318 13:10:57.860895    2404 command_runner.go:130] > [  +0.160643] systemd-fstab-generator[654]: Ignoring "noauto" option for root device
	I0318 13:10:57.860895    2404 command_runner.go:130] > [ +25.236158] systemd-fstab-generator[979]: Ignoring "noauto" option for root device
	I0318 13:10:57.860930    2404 command_runner.go:130] > [  +0.093711] kauditd_printk_skb: 73 callbacks suppressed
	I0318 13:10:57.860998    2404 command_runner.go:130] > [  +0.488652] systemd-fstab-generator[1018]: Ignoring "noauto" option for root device
	I0318 13:10:57.860998    2404 command_runner.go:130] > [  +0.198307] systemd-fstab-generator[1030]: Ignoring "noauto" option for root device
	I0318 13:10:57.860998    2404 command_runner.go:130] > [  +0.213157] systemd-fstab-generator[1044]: Ignoring "noauto" option for root device
	I0318 13:10:57.860998    2404 command_runner.go:130] > [  +2.866452] systemd-fstab-generator[1231]: Ignoring "noauto" option for root device
	I0318 13:10:57.861051    2404 command_runner.go:130] > [  +0.191537] systemd-fstab-generator[1243]: Ignoring "noauto" option for root device
	I0318 13:10:57.861051    2404 command_runner.go:130] > [  +0.163904] systemd-fstab-generator[1255]: Ignoring "noauto" option for root device
	I0318 13:10:57.861092    2404 command_runner.go:130] > [  +0.280650] systemd-fstab-generator[1270]: Ignoring "noauto" option for root device
	I0318 13:10:57.861092    2404 command_runner.go:130] > [  +0.822319] systemd-fstab-generator[1393]: Ignoring "noauto" option for root device
	I0318 13:10:57.861092    2404 command_runner.go:130] > [  +0.094744] kauditd_printk_skb: 205 callbacks suppressed
	I0318 13:10:57.861127    2404 command_runner.go:130] > [  +3.177820] systemd-fstab-generator[1525]: Ignoring "noauto" option for root device
	I0318 13:10:57.861127    2404 command_runner.go:130] > [  +1.898187] kauditd_printk_skb: 64 callbacks suppressed
	I0318 13:10:57.861127    2404 command_runner.go:130] > [  +5.227041] kauditd_printk_skb: 10 callbacks suppressed
	I0318 13:10:57.861175    2404 command_runner.go:130] > [  +4.065141] systemd-fstab-generator[3089]: Ignoring "noauto" option for root device
	I0318 13:10:57.861175    2404 command_runner.go:130] > [Mar18 13:10] kauditd_printk_skb: 70 callbacks suppressed
	I0318 13:11:00.382100    2404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:11:00.407835    2404 command_runner.go:130] > 1904
	I0318 13:11:00.407943    2404 api_server.go:72] duration metric: took 1m6.7492667s to wait for apiserver process to appear ...
	I0318 13:11:00.407943    2404 api_server.go:88] waiting for apiserver healthz status ...
	I0318 13:11:00.420009    2404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 13:11:00.446185    2404 command_runner.go:130] > fc4430c7fa20
	I0318 13:11:00.446185    2404 logs.go:276] 1 containers: [fc4430c7fa20]
	I0318 13:11:00.455344    2404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 13:11:00.478051    2404 command_runner.go:130] > 5f0887d1e691
	I0318 13:11:00.479399    2404 logs.go:276] 1 containers: [5f0887d1e691]
	I0318 13:11:00.487853    2404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 13:11:00.511810    2404 command_runner.go:130] > 3c3bc988c74c
	I0318 13:11:00.511840    2404 command_runner.go:130] > 693a64f7472f
	I0318 13:11:00.511840    2404 logs.go:276] 2 containers: [3c3bc988c74c 693a64f7472f]
	I0318 13:11:00.520471    2404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 13:11:00.543394    2404 command_runner.go:130] > 66ee8be9fada
	I0318 13:11:00.543394    2404 command_runner.go:130] > e4d42739ce0e
	I0318 13:11:00.543394    2404 logs.go:276] 2 containers: [66ee8be9fada e4d42739ce0e]
	I0318 13:11:00.552571    2404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 13:11:00.573605    2404 command_runner.go:130] > 163ccabc3882
	I0318 13:11:00.573605    2404 command_runner.go:130] > 9335855aab63
	I0318 13:11:00.574793    2404 logs.go:276] 2 containers: [163ccabc3882 9335855aab63]
	I0318 13:11:00.585270    2404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 13:11:00.611801    2404 command_runner.go:130] > 4ad6784a187d
	I0318 13:11:00.611801    2404 command_runner.go:130] > 7aa5cf4ec378
	I0318 13:11:00.611801    2404 logs.go:276] 2 containers: [4ad6784a187d 7aa5cf4ec378]
	I0318 13:11:00.620758    2404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 13:11:00.644616    2404 command_runner.go:130] > c8e5ec25e910
	I0318 13:11:00.644798    2404 command_runner.go:130] > c4d7018ad23a
	I0318 13:11:00.644798    2404 logs.go:276] 2 containers: [c8e5ec25e910 c4d7018ad23a]
	I0318 13:11:00.644798    2404 logs.go:123] Gathering logs for kindnet [c4d7018ad23a] ...
	I0318 13:11:00.644798    2404 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4d7018ad23a"
	I0318 13:11:00.677915    2404 command_runner.go:130] ! I0318 12:56:20.031595       1 main.go:227] handling current node
	I0318 13:11:00.680123    2404 command_runner.go:130] ! I0318 12:56:20.031610       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:00.680123    2404 command_runner.go:130] ! I0318 12:56:20.031618       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:00.680123    2404 command_runner.go:130] ! I0318 12:56:20.031800       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:11:00.680123    2404 command_runner.go:130] ! I0318 12:56:20.031837       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:11:00.680123    2404 command_runner.go:130] ! I0318 12:56:30.038705       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:00.680123    2404 command_runner.go:130] ! I0318 12:56:30.038812       1 main.go:227] handling current node
	I0318 13:11:00.680123    2404 command_runner.go:130] ! I0318 12:56:30.038826       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:00.680123    2404 command_runner.go:130] ! I0318 12:56:30.038833       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:00.680123    2404 command_runner.go:130] ! I0318 12:56:30.039027       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:11:00.680123    2404 command_runner.go:130] ! I0318 12:56:30.039347       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:11:00.680123    2404 command_runner.go:130] ! I0318 12:56:40.051950       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:00.680123    2404 command_runner.go:130] ! I0318 12:56:40.052053       1 main.go:227] handling current node
	I0318 13:11:00.680123    2404 command_runner.go:130] ! I0318 12:56:40.052086       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:00.680123    2404 command_runner.go:130] ! I0318 12:56:40.052204       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:00.680123    2404 command_runner.go:130] ! I0318 12:56:40.052568       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:11:00.680123    2404 command_runner.go:130] ! I0318 12:56:40.052681       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:11:00.680123    2404 command_runner.go:130] ! I0318 12:56:50.074059       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:00.680123    2404 command_runner.go:130] ! I0318 12:56:50.074164       1 main.go:227] handling current node
	I0318 13:11:00.680123    2404 command_runner.go:130] ! I0318 12:56:50.074183       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:00.680123    2404 command_runner.go:130] ! I0318 12:56:50.074192       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:00.680123    2404 command_runner.go:130] ! I0318 12:56:50.075009       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:11:00.680123    2404 command_runner.go:130] ! I0318 12:56:50.075306       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:11:00.680123    2404 command_runner.go:130] ! I0318 12:57:00.089286       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:00.680123    2404 command_runner.go:130] ! I0318 12:57:00.089382       1 main.go:227] handling current node
	I0318 13:11:00.680123    2404 command_runner.go:130] ! I0318 12:57:00.089397       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:00.680123    2404 command_runner.go:130] ! I0318 12:57:00.089405       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:00.680123    2404 command_runner.go:130] ! I0318 12:57:00.089918       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:11:00.680123    2404 command_runner.go:130] ! I0318 12:57:00.089934       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:11:00.680123    2404 command_runner.go:130] ! I0318 12:57:10.103457       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:00.680123    2404 command_runner.go:130] ! I0318 12:57:10.103575       1 main.go:227] handling current node
	I0318 13:11:00.680123    2404 command_runner.go:130] ! I0318 12:57:10.103607       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:00.680647    2404 command_runner.go:130] ! I0318 12:57:10.103704       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:00.680647    2404 command_runner.go:130] ! I0318 12:57:10.104106       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:11:00.680647    2404 command_runner.go:130] ! I0318 12:57:10.104144       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:11:00.680647    2404 command_runner.go:130] ! I0318 12:57:20.111225       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:00.680692    2404 command_runner.go:130] ! I0318 12:57:20.111346       1 main.go:227] handling current node
	I0318 13:11:00.680692    2404 command_runner.go:130] ! I0318 12:57:20.111360       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:00.680692    2404 command_runner.go:130] ! I0318 12:57:20.111367       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:00.680729    2404 command_runner.go:130] ! I0318 12:57:20.111695       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:11:00.680729    2404 command_runner.go:130] ! I0318 12:57:20.111775       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:11:00.680729    2404 command_runner.go:130] ! I0318 12:57:30.124283       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:00.680729    2404 command_runner.go:130] ! I0318 12:57:30.124477       1 main.go:227] handling current node
	I0318 13:11:00.680729    2404 command_runner.go:130] ! I0318 12:57:30.124495       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:00.680729    2404 command_runner.go:130] ! I0318 12:57:30.124505       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:00.680729    2404 command_runner.go:130] ! I0318 12:57:30.125279       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:11:00.680729    2404 command_runner.go:130] ! I0318 12:57:30.125393       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:11:00.680729    2404 command_runner.go:130] ! I0318 12:57:40.137523       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:00.680729    2404 command_runner.go:130] ! I0318 12:57:40.137766       1 main.go:227] handling current node
	I0318 13:11:00.680729    2404 command_runner.go:130] ! I0318 12:57:40.137807       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:00.680729    2404 command_runner.go:130] ! I0318 12:57:40.137833       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:00.680729    2404 command_runner.go:130] ! I0318 12:57:40.137998       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:11:00.680729    2404 command_runner.go:130] ! I0318 12:57:40.138087       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:11:00.680729    2404 command_runner.go:130] ! I0318 12:57:50.149548       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:00.680729    2404 command_runner.go:130] ! I0318 12:57:50.149697       1 main.go:227] handling current node
	I0318 13:11:00.680729    2404 command_runner.go:130] ! I0318 12:57:50.149712       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:00.680729    2404 command_runner.go:130] ! I0318 12:57:50.149720       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:00.680729    2404 command_runner.go:130] ! I0318 12:57:50.150251       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:11:00.680729    2404 command_runner.go:130] ! I0318 12:57:50.150344       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:11:00.680729    2404 command_runner.go:130] ! I0318 12:58:00.159094       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:00.680729    2404 command_runner.go:130] ! I0318 12:58:00.159284       1 main.go:227] handling current node
	I0318 13:11:00.680729    2404 command_runner.go:130] ! I0318 12:58:00.159340       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:00.680729    2404 command_runner.go:130] ! I0318 12:58:00.159700       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:00.680729    2404 command_runner.go:130] ! I0318 12:58:00.160303       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:11:00.680729    2404 command_runner.go:130] ! I0318 12:58:00.160346       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:11:00.680729    2404 command_runner.go:130] ! I0318 12:58:10.177603       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:00.680729    2404 command_runner.go:130] ! I0318 12:58:10.177780       1 main.go:227] handling current node
	I0318 13:11:00.680729    2404 command_runner.go:130] ! I0318 12:58:10.178122       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:00.680729    2404 command_runner.go:130] ! I0318 12:58:10.178166       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:00.680729    2404 command_runner.go:130] ! I0318 12:58:10.178455       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:11:00.680729    2404 command_runner.go:130] ! I0318 12:58:10.178497       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:11:00.680729    2404 command_runner.go:130] ! I0318 12:58:20.196110       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:00.680729    2404 command_runner.go:130] ! I0318 12:58:20.196144       1 main.go:227] handling current node
	I0318 13:11:00.680729    2404 command_runner.go:130] ! I0318 12:58:20.196236       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:00.680729    2404 command_runner.go:130] ! I0318 12:58:20.196542       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:00.680729    2404 command_runner.go:130] ! I0318 12:58:20.196774       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:11:00.680729    2404 command_runner.go:130] ! I0318 12:58:20.196867       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:11:00.681250    2404 command_runner.go:130] ! I0318 12:58:30.204485       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:00.681250    2404 command_runner.go:130] ! I0318 12:58:30.204515       1 main.go:227] handling current node
	I0318 13:11:00.681250    2404 command_runner.go:130] ! I0318 12:58:30.204528       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:00.681307    2404 command_runner.go:130] ! I0318 12:58:30.204556       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:00.681307    2404 command_runner.go:130] ! I0318 12:58:30.204856       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:11:00.681307    2404 command_runner.go:130] ! I0318 12:58:30.205022       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:11:00.681307    2404 command_runner.go:130] ! I0318 12:58:40.221076       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:00.681307    2404 command_runner.go:130] ! I0318 12:58:40.221184       1 main.go:227] handling current node
	I0318 13:11:00.681307    2404 command_runner.go:130] ! I0318 12:58:40.221201       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:00.681403    2404 command_runner.go:130] ! I0318 12:58:40.221210       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:00.681403    2404 command_runner.go:130] ! I0318 12:58:40.221741       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:11:00.681403    2404 command_runner.go:130] ! I0318 12:58:40.221769       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:11:00.681403    2404 command_runner.go:130] ! I0318 12:58:50.229210       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:00.681403    2404 command_runner.go:130] ! I0318 12:58:50.229302       1 main.go:227] handling current node
	I0318 13:11:00.681454    2404 command_runner.go:130] ! I0318 12:58:50.229317       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:00.681454    2404 command_runner.go:130] ! I0318 12:58:50.229324       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:00.681483    2404 command_runner.go:130] ! I0318 12:58:50.229703       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:11:00.681483    2404 command_runner.go:130] ! I0318 12:58:50.229807       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:11:00.681483    2404 command_runner.go:130] ! I0318 12:59:00.244905       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:00.681483    2404 command_runner.go:130] ! I0318 12:59:00.244992       1 main.go:227] handling current node
	I0318 13:11:00.681483    2404 command_runner.go:130] ! I0318 12:59:00.245007       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:00.681483    2404 command_runner.go:130] ! I0318 12:59:00.245033       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:00.681483    2404 command_runner.go:130] ! I0318 12:59:00.245480       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:11:00.681483    2404 command_runner.go:130] ! I0318 12:59:00.245600       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:11:00.681483    2404 command_runner.go:130] ! I0318 12:59:10.253460       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:00.681483    2404 command_runner.go:130] ! I0318 12:59:10.253563       1 main.go:227] handling current node
	I0318 13:11:00.681483    2404 command_runner.go:130] ! I0318 12:59:10.253579       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:00.681483    2404 command_runner.go:130] ! I0318 12:59:10.253605       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:00.681483    2404 command_runner.go:130] ! I0318 12:59:10.254199       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:11:00.681483    2404 command_runner.go:130] ! I0318 12:59:10.254310       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:11:00.681483    2404 command_runner.go:130] ! I0318 12:59:20.270774       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:00.681483    2404 command_runner.go:130] ! I0318 12:59:20.270870       1 main.go:227] handling current node
	I0318 13:11:00.681483    2404 command_runner.go:130] ! I0318 12:59:20.270886       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:00.681483    2404 command_runner.go:130] ! I0318 12:59:20.270894       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:00.681483    2404 command_runner.go:130] ! I0318 12:59:20.271275       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:11:00.681483    2404 command_runner.go:130] ! I0318 12:59:20.271367       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:11:00.681483    2404 command_runner.go:130] ! I0318 12:59:30.281784       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:00.681483    2404 command_runner.go:130] ! I0318 12:59:30.281809       1 main.go:227] handling current node
	I0318 13:11:00.681483    2404 command_runner.go:130] ! I0318 12:59:30.281819       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:00.681483    2404 command_runner.go:130] ! I0318 12:59:30.281824       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:00.681483    2404 command_runner.go:130] ! I0318 12:59:30.282361       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:11:00.681483    2404 command_runner.go:130] ! I0318 12:59:30.282392       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:11:00.681483    2404 command_runner.go:130] ! I0318 12:59:40.291176       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:00.681483    2404 command_runner.go:130] ! I0318 12:59:40.291304       1 main.go:227] handling current node
	I0318 13:11:00.681483    2404 command_runner.go:130] ! I0318 12:59:40.291321       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:00.681483    2404 command_runner.go:130] ! I0318 12:59:40.291328       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:00.681483    2404 command_runner.go:130] ! I0318 12:59:40.291827       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:11:00.681483    2404 command_runner.go:130] ! I0318 12:59:40.291857       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:11:00.681483    2404 command_runner.go:130] ! I0318 12:59:50.303374       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:00.681483    2404 command_runner.go:130] ! I0318 12:59:50.303454       1 main.go:227] handling current node
	I0318 13:11:00.681483    2404 command_runner.go:130] ! I0318 12:59:50.303468       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:00.681483    2404 command_runner.go:130] ! I0318 12:59:50.303476       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:00.681483    2404 command_runner.go:130] ! I0318 12:59:50.303974       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:11:00.681483    2404 command_runner.go:130] ! I0318 12:59:50.304002       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:11:00.681483    2404 command_runner.go:130] ! I0318 13:00:00.311317       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:00.681483    2404 command_runner.go:130] ! I0318 13:00:00.311423       1 main.go:227] handling current node
	I0318 13:11:00.681483    2404 command_runner.go:130] ! I0318 13:00:00.311441       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:00.681483    2404 command_runner.go:130] ! I0318 13:00:00.311449       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:00.681483    2404 command_runner.go:130] ! I0318 13:00:00.312039       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:11:00.682012    2404 command_runner.go:130] ! I0318 13:00:00.312135       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:11:00.682012    2404 command_runner.go:130] ! I0318 13:00:10.324823       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:00.682012    2404 command_runner.go:130] ! I0318 13:00:10.324902       1 main.go:227] handling current node
	I0318 13:11:00.682057    2404 command_runner.go:130] ! I0318 13:00:10.324915       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:00.682057    2404 command_runner.go:130] ! I0318 13:00:10.324926       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:00.682057    2404 command_runner.go:130] ! I0318 13:00:10.325084       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:11:00.682057    2404 command_runner.go:130] ! I0318 13:00:10.325108       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:11:00.682111    2404 command_runner.go:130] ! I0318 13:00:20.338195       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:00.682111    2404 command_runner.go:130] ! I0318 13:00:20.338297       1 main.go:227] handling current node
	I0318 13:11:00.682153    2404 command_runner.go:130] ! I0318 13:00:20.338312       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:00.682174    2404 command_runner.go:130] ! I0318 13:00:20.338320       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:00.682197    2404 command_runner.go:130] ! I0318 13:00:20.338525       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:11:00.682197    2404 command_runner.go:130] ! I0318 13:00:20.338601       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:11:00.682224    2404 command_runner.go:130] ! I0318 13:00:30.345095       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:00.682224    2404 command_runner.go:130] ! I0318 13:00:30.345184       1 main.go:227] handling current node
	I0318 13:11:00.682258    2404 command_runner.go:130] ! I0318 13:00:30.345198       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:00.682258    2404 command_runner.go:130] ! I0318 13:00:30.345205       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:00.682328    2404 command_runner.go:130] ! I0318 13:00:30.346074       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:11:00.682328    2404 command_runner.go:130] ! I0318 13:00:30.346194       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:11:00.682366    2404 command_runner.go:130] ! I0318 13:00:40.357007       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:00.682366    2404 command_runner.go:130] ! I0318 13:00:40.357386       1 main.go:227] handling current node
	I0318 13:11:00.682366    2404 command_runner.go:130] ! I0318 13:00:40.357485       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:00.682366    2404 command_runner.go:130] ! I0318 13:00:40.357513       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:00.682366    2404 command_runner.go:130] ! I0318 13:00:40.357737       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:11:00.682366    2404 command_runner.go:130] ! I0318 13:00:40.357766       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:11:00.682366    2404 command_runner.go:130] ! I0318 13:00:50.372182       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:00.682366    2404 command_runner.go:130] ! I0318 13:00:50.372221       1 main.go:227] handling current node
	I0318 13:11:00.682366    2404 command_runner.go:130] ! I0318 13:00:50.372235       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:00.682366    2404 command_runner.go:130] ! I0318 13:00:50.372242       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:00.682366    2404 command_runner.go:130] ! I0318 13:00:50.372608       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:11:00.682366    2404 command_runner.go:130] ! I0318 13:00:50.372772       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:11:00.682366    2404 command_runner.go:130] ! I0318 13:01:00.386990       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:00.682366    2404 command_runner.go:130] ! I0318 13:01:00.387036       1 main.go:227] handling current node
	I0318 13:11:00.682366    2404 command_runner.go:130] ! I0318 13:01:00.387050       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:00.682366    2404 command_runner.go:130] ! I0318 13:01:00.387058       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:00.682366    2404 command_runner.go:130] ! I0318 13:01:00.387182       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:11:00.682366    2404 command_runner.go:130] ! I0318 13:01:00.387191       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:11:00.682366    2404 command_runner.go:130] ! I0318 13:01:10.396889       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:00.682366    2404 command_runner.go:130] ! I0318 13:01:10.396930       1 main.go:227] handling current node
	I0318 13:11:00.682366    2404 command_runner.go:130] ! I0318 13:01:10.396942       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:00.682366    2404 command_runner.go:130] ! I0318 13:01:10.396948       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:00.682366    2404 command_runner.go:130] ! I0318 13:01:10.397250       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:11:00.682366    2404 command_runner.go:130] ! I0318 13:01:10.397343       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:11:00.682366    2404 command_runner.go:130] ! I0318 13:01:20.413272       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:00.682366    2404 command_runner.go:130] ! I0318 13:01:20.413371       1 main.go:227] handling current node
	I0318 13:11:00.682366    2404 command_runner.go:130] ! I0318 13:01:20.413386       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:00.682366    2404 command_runner.go:130] ! I0318 13:01:20.413395       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:00.682366    2404 command_runner.go:130] ! I0318 13:01:20.413968       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:11:00.682366    2404 command_runner.go:130] ! I0318 13:01:20.413999       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:11:00.682366    2404 command_runner.go:130] ! I0318 13:01:30.429160       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:00.682366    2404 command_runner.go:130] ! I0318 13:01:30.429478       1 main.go:227] handling current node
	I0318 13:11:00.682366    2404 command_runner.go:130] ! I0318 13:01:30.429549       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:00.682366    2404 command_runner.go:130] ! I0318 13:01:30.429678       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:00.682366    2404 command_runner.go:130] ! I0318 13:01:30.429960       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:11:00.682366    2404 command_runner.go:130] ! I0318 13:01:30.430034       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:11:00.682932    2404 command_runner.go:130] ! I0318 13:01:40.436733       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:00.682932    2404 command_runner.go:130] ! I0318 13:01:40.436839       1 main.go:227] handling current node
	I0318 13:11:00.682932    2404 command_runner.go:130] ! I0318 13:01:40.436901       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:00.682932    2404 command_runner.go:130] ! I0318 13:01:40.436930       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:00.682932    2404 command_runner.go:130] ! I0318 13:01:40.437399       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:11:00.683000    2404 command_runner.go:130] ! I0318 13:01:40.437431       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:11:00.683000    2404 command_runner.go:130] ! I0318 13:01:50.451622       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:00.683000    2404 command_runner.go:130] ! I0318 13:01:50.451802       1 main.go:227] handling current node
	I0318 13:11:00.683000    2404 command_runner.go:130] ! I0318 13:01:50.451849       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:00.683000    2404 command_runner.go:130] ! I0318 13:01:50.451860       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:00.683000    2404 command_runner.go:130] ! I0318 13:01:50.452021       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:11:00.683095    2404 command_runner.go:130] ! I0318 13:01:50.452171       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:11:00.683095    2404 command_runner.go:130] ! I0318 13:02:00.460452       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:00.683095    2404 command_runner.go:130] ! I0318 13:02:00.460548       1 main.go:227] handling current node
	I0318 13:11:00.683095    2404 command_runner.go:130] ! I0318 13:02:00.460563       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:00.683095    2404 command_runner.go:130] ! I0318 13:02:00.460571       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:00.683095    2404 command_runner.go:130] ! I0318 13:02:00.461181       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:11:00.683161    2404 command_runner.go:130] ! I0318 13:02:00.461333       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:11:00.683183    2404 command_runner.go:130] ! I0318 13:02:10.474274       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:00.683227    2404 command_runner.go:130] ! I0318 13:02:10.474396       1 main.go:227] handling current node
	I0318 13:11:00.683254    2404 command_runner.go:130] ! I0318 13:02:10.474427       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:00.683254    2404 command_runner.go:130] ! I0318 13:02:10.474436       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:00.683280    2404 command_runner.go:130] ! I0318 13:02:10.475019       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:11:00.683280    2404 command_runner.go:130] ! I0318 13:02:10.475159       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:11:00.683280    2404 command_runner.go:130] ! I0318 13:02:20.489442       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:00.683280    2404 command_runner.go:130] ! I0318 13:02:20.489616       1 main.go:227] handling current node
	I0318 13:11:00.683280    2404 command_runner.go:130] ! I0318 13:02:20.489699       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:00.683280    2404 command_runner.go:130] ! I0318 13:02:20.489752       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:00.683280    2404 command_runner.go:130] ! I0318 13:02:20.490046       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:11:00.683280    2404 command_runner.go:130] ! I0318 13:02:20.490082       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:11:00.683280    2404 command_runner.go:130] ! I0318 13:02:30.497474       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:00.683280    2404 command_runner.go:130] ! I0318 13:02:30.497574       1 main.go:227] handling current node
	I0318 13:11:00.683280    2404 command_runner.go:130] ! I0318 13:02:30.497589       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:00.683280    2404 command_runner.go:130] ! I0318 13:02:30.497597       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:00.683280    2404 command_runner.go:130] ! I0318 13:02:30.498279       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:11:00.683280    2404 command_runner.go:130] ! I0318 13:02:30.498361       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:11:00.683280    2404 command_runner.go:130] ! I0318 13:02:40.512026       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:00.683280    2404 command_runner.go:130] ! I0318 13:02:40.512345       1 main.go:227] handling current node
	I0318 13:11:00.683280    2404 command_runner.go:130] ! I0318 13:02:40.512385       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:00.683280    2404 command_runner.go:130] ! I0318 13:02:40.512477       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:00.683280    2404 command_runner.go:130] ! I0318 13:02:40.512786       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:11:00.683280    2404 command_runner.go:130] ! I0318 13:02:40.512873       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:11:00.683280    2404 command_runner.go:130] ! I0318 13:02:50.520110       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:00.683280    2404 command_runner.go:130] ! I0318 13:02:50.520239       1 main.go:227] handling current node
	I0318 13:11:00.683280    2404 command_runner.go:130] ! I0318 13:02:50.520254       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:00.683280    2404 command_runner.go:130] ! I0318 13:02:50.520263       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:00.683280    2404 command_runner.go:130] ! I0318 13:02:50.520784       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:11:00.683280    2404 command_runner.go:130] ! I0318 13:02:50.520861       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:11:00.683280    2404 command_runner.go:130] ! I0318 13:03:00.531866       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:00.683280    2404 command_runner.go:130] ! I0318 13:03:00.531958       1 main.go:227] handling current node
	I0318 13:11:00.683280    2404 command_runner.go:130] ! I0318 13:03:00.531972       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:00.683280    2404 command_runner.go:130] ! I0318 13:03:00.531979       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:00.683280    2404 command_runner.go:130] ! I0318 13:03:00.532211       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:11:00.683280    2404 command_runner.go:130] ! I0318 13:03:00.532293       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:11:00.683280    2404 command_runner.go:130] ! I0318 13:03:10.543869       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:00.683280    2404 command_runner.go:130] ! I0318 13:03:10.543913       1 main.go:227] handling current node
	I0318 13:11:00.683280    2404 command_runner.go:130] ! I0318 13:03:10.543926       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:00.683280    2404 command_runner.go:130] ! I0318 13:03:10.543933       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:00.683280    2404 command_runner.go:130] ! I0318 13:03:10.544294       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:11:00.683280    2404 command_runner.go:130] ! I0318 13:03:10.544430       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:11:00.683802    2404 command_runner.go:130] ! I0318 13:03:20.558742       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:00.683802    2404 command_runner.go:130] ! I0318 13:03:20.558782       1 main.go:227] handling current node
	I0318 13:11:00.683802    2404 command_runner.go:130] ! I0318 13:03:20.558795       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:00.683802    2404 command_runner.go:130] ! I0318 13:03:20.558802       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:00.683867    2404 command_runner.go:130] ! I0318 13:03:20.558992       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:11:00.683965    2404 command_runner.go:130] ! I0318 13:03:20.559009       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:11:00.683965    2404 command_runner.go:130] ! I0318 13:03:30.568771       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:00.683965    2404 command_runner.go:130] ! I0318 13:03:30.568872       1 main.go:227] handling current node
	I0318 13:11:00.683965    2404 command_runner.go:130] ! I0318 13:03:30.568905       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:00.683965    2404 command_runner.go:130] ! I0318 13:03:30.568996       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:00.684033    2404 command_runner.go:130] ! I0318 13:03:30.569367       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:11:00.684057    2404 command_runner.go:130] ! I0318 13:03:30.569450       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:11:00.684057    2404 command_runner.go:130] ! I0318 13:03:40.587554       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:00.684057    2404 command_runner.go:130] ! I0318 13:03:40.587674       1 main.go:227] handling current node
	I0318 13:11:00.684094    2404 command_runner.go:130] ! I0318 13:03:40.588337       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:00.684094    2404 command_runner.go:130] ! I0318 13:03:40.588356       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:00.684121    2404 command_runner.go:130] ! I0318 13:03:40.588758       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:11:00.684121    2404 command_runner.go:130] ! I0318 13:03:40.588836       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:11:00.684121    2404 command_runner.go:130] ! I0318 13:03:50.596331       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:00.684121    2404 command_runner.go:130] ! I0318 13:03:50.596438       1 main.go:227] handling current node
	I0318 13:11:00.684121    2404 command_runner.go:130] ! I0318 13:03:50.596453       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:00.684181    2404 command_runner.go:130] ! I0318 13:03:50.596462       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:00.684258    2404 command_runner.go:130] ! I0318 13:03:50.596942       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:11:00.684315    2404 command_runner.go:130] ! I0318 13:03:50.597079       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:11:00.684315    2404 command_runner.go:130] ! I0318 13:04:00.611242       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:00.684315    2404 command_runner.go:130] ! I0318 13:04:00.611383       1 main.go:227] handling current node
	I0318 13:11:00.684359    2404 command_runner.go:130] ! I0318 13:04:00.611397       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:00.684359    2404 command_runner.go:130] ! I0318 13:04:00.611405       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:00.684359    2404 command_runner.go:130] ! I0318 13:04:00.611541       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:11:00.684359    2404 command_runner.go:130] ! I0318 13:04:00.611572       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:11:00.684418    2404 command_runner.go:130] ! I0318 13:04:10.624814       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:00.684418    2404 command_runner.go:130] ! I0318 13:04:10.624904       1 main.go:227] handling current node
	I0318 13:11:00.684441    2404 command_runner.go:130] ! I0318 13:04:10.624920       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:00.684468    2404 command_runner.go:130] ! I0318 13:04:10.624927       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:00.684468    2404 command_runner.go:130] ! I0318 13:04:10.625504       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:11:00.684468    2404 command_runner.go:130] ! I0318 13:04:10.625547       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:11:00.684468    2404 command_runner.go:130] ! I0318 13:04:20.640319       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:00.684468    2404 command_runner.go:130] ! I0318 13:04:20.640364       1 main.go:227] handling current node
	I0318 13:11:00.684468    2404 command_runner.go:130] ! I0318 13:04:20.640379       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:00.684468    2404 command_runner.go:130] ! I0318 13:04:20.640386       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:00.684468    2404 command_runner.go:130] ! I0318 13:04:20.640865       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:11:00.684468    2404 command_runner.go:130] ! I0318 13:04:20.640901       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:11:00.684468    2404 command_runner.go:130] ! I0318 13:04:30.648021       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:00.684468    2404 command_runner.go:130] ! I0318 13:04:30.648134       1 main.go:227] handling current node
	I0318 13:11:00.684468    2404 command_runner.go:130] ! I0318 13:04:30.648148       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:00.684468    2404 command_runner.go:130] ! I0318 13:04:30.648156       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:00.684468    2404 command_runner.go:130] ! I0318 13:04:30.648313       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:11:00.684468    2404 command_runner.go:130] ! I0318 13:04:30.648344       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:11:00.684468    2404 command_runner.go:130] ! I0318 13:04:40.663577       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:00.684468    2404 command_runner.go:130] ! I0318 13:04:40.663749       1 main.go:227] handling current node
	I0318 13:11:00.684468    2404 command_runner.go:130] ! I0318 13:04:40.663765       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:00.684468    2404 command_runner.go:130] ! I0318 13:04:40.663774       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:00.684468    2404 command_runner.go:130] ! I0318 13:04:40.663896       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:11:00.684468    2404 command_runner.go:130] ! I0318 13:04:40.663929       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:11:00.684468    2404 command_runner.go:130] ! I0318 13:04:50.669717       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:00.684468    2404 command_runner.go:130] ! I0318 13:04:50.669791       1 main.go:227] handling current node
	I0318 13:11:00.684468    2404 command_runner.go:130] ! I0318 13:04:50.669805       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:00.684468    2404 command_runner.go:130] ! I0318 13:04:50.669812       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:00.684468    2404 command_runner.go:130] ! I0318 13:04:50.670128       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:11:00.684468    2404 command_runner.go:130] ! I0318 13:04:50.670230       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:11:00.684468    2404 command_runner.go:130] ! I0318 13:05:00.686596       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:00.684468    2404 command_runner.go:130] ! I0318 13:05:00.686809       1 main.go:227] handling current node
	I0318 13:11:00.684468    2404 command_runner.go:130] ! I0318 13:05:00.686942       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:00.684468    2404 command_runner.go:130] ! I0318 13:05:00.687116       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:00.684468    2404 command_runner.go:130] ! I0318 13:05:00.687370       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:11:00.684468    2404 command_runner.go:130] ! I0318 13:05:00.687441       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:11:00.684468    2404 command_runner.go:130] ! I0318 13:05:10.704297       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:00.684468    2404 command_runner.go:130] ! I0318 13:05:10.704404       1 main.go:227] handling current node
	I0318 13:11:00.684468    2404 command_runner.go:130] ! I0318 13:05:10.704426       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:00.684468    2404 command_runner.go:130] ! I0318 13:05:10.704555       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:00.684468    2404 command_runner.go:130] ! I0318 13:05:10.704810       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:11:00.684468    2404 command_runner.go:130] ! I0318 13:05:10.704878       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:11:00.684468    2404 command_runner.go:130] ! I0318 13:05:20.722958       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:00.684468    2404 command_runner.go:130] ! I0318 13:05:20.723127       1 main.go:227] handling current node
	I0318 13:11:00.684468    2404 command_runner.go:130] ! I0318 13:05:20.723145       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:00.685009    2404 command_runner.go:130] ! I0318 13:05:20.723159       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:00.685065    2404 command_runner.go:130] ! I0318 13:05:30.731764       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:00.685065    2404 command_runner.go:130] ! I0318 13:05:30.731841       1 main.go:227] handling current node
	I0318 13:11:00.685065    2404 command_runner.go:130] ! I0318 13:05:30.731854       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:00.685109    2404 command_runner.go:130] ! I0318 13:05:30.731861       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:00.685109    2404 command_runner.go:130] ! I0318 13:05:30.732029       1 main.go:223] Handling node with IPs: map[172.30.137.140:{}]
	I0318 13:11:00.685174    2404 command_runner.go:130] ! I0318 13:05:30.732163       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.3.0/24] 
	I0318 13:11:00.685174    2404 command_runner.go:130] ! I0318 13:05:30.732544       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 172.30.137.140 Flags: [] Table: 0} 
	I0318 13:11:00.685174    2404 command_runner.go:130] ! I0318 13:05:40.739849       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:00.685233    2404 command_runner.go:130] ! I0318 13:05:40.739939       1 main.go:227] handling current node
	I0318 13:11:00.685233    2404 command_runner.go:130] ! I0318 13:05:40.739953       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:00.685255    2404 command_runner.go:130] ! I0318 13:05:40.739960       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:00.685255    2404 command_runner.go:130] ! I0318 13:05:40.740081       1 main.go:223] Handling node with IPs: map[172.30.137.140:{}]
	I0318 13:11:00.685281    2404 command_runner.go:130] ! I0318 13:05:40.740151       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.3.0/24] 
	I0318 13:11:00.685281    2404 command_runner.go:130] ! I0318 13:05:50.748036       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:00.685281    2404 command_runner.go:130] ! I0318 13:05:50.748465       1 main.go:227] handling current node
	I0318 13:11:00.685281    2404 command_runner.go:130] ! I0318 13:05:50.748942       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:00.685281    2404 command_runner.go:130] ! I0318 13:05:50.749055       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:00.685281    2404 command_runner.go:130] ! I0318 13:05:50.749287       1 main.go:223] Handling node with IPs: map[172.30.137.140:{}]
	I0318 13:11:00.685281    2404 command_runner.go:130] ! I0318 13:05:50.749413       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.3.0/24] 
	I0318 13:11:00.685281    2404 command_runner.go:130] ! I0318 13:06:00.757350       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:00.685281    2404 command_runner.go:130] ! I0318 13:06:00.757434       1 main.go:227] handling current node
	I0318 13:11:00.685281    2404 command_runner.go:130] ! I0318 13:06:00.757452       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:00.685281    2404 command_runner.go:130] ! I0318 13:06:00.757460       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:00.685281    2404 command_runner.go:130] ! I0318 13:06:00.757853       1 main.go:223] Handling node with IPs: map[172.30.137.140:{}]
	I0318 13:11:00.685281    2404 command_runner.go:130] ! I0318 13:06:00.758194       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.3.0/24] 
	I0318 13:11:00.685281    2404 command_runner.go:130] ! I0318 13:06:10.766768       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:00.685281    2404 command_runner.go:130] ! I0318 13:06:10.766886       1 main.go:227] handling current node
	I0318 13:11:00.685281    2404 command_runner.go:130] ! I0318 13:06:10.766901       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:00.685281    2404 command_runner.go:130] ! I0318 13:06:10.766910       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:00.685281    2404 command_runner.go:130] ! I0318 13:06:10.767143       1 main.go:223] Handling node with IPs: map[172.30.137.140:{}]
	I0318 13:11:00.685281    2404 command_runner.go:130] ! I0318 13:06:10.767175       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.3.0/24] 
	I0318 13:11:00.685281    2404 command_runner.go:130] ! I0318 13:06:20.773530       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:00.685281    2404 command_runner.go:130] ! I0318 13:06:20.773656       1 main.go:227] handling current node
	I0318 13:11:00.685281    2404 command_runner.go:130] ! I0318 13:06:20.773729       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:00.685281    2404 command_runner.go:130] ! I0318 13:06:20.773741       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:00.685281    2404 command_runner.go:130] ! I0318 13:06:20.774155       1 main.go:223] Handling node with IPs: map[172.30.137.140:{}]
	I0318 13:11:00.685281    2404 command_runner.go:130] ! I0318 13:06:20.774478       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.3.0/24] 
	I0318 13:11:00.685281    2404 command_runner.go:130] ! I0318 13:06:30.792219       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:00.685281    2404 command_runner.go:130] ! I0318 13:06:30.792349       1 main.go:227] handling current node
	I0318 13:11:00.685281    2404 command_runner.go:130] ! I0318 13:06:30.792364       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:00.685281    2404 command_runner.go:130] ! I0318 13:06:30.792373       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:00.685281    2404 command_runner.go:130] ! I0318 13:06:30.792864       1 main.go:223] Handling node with IPs: map[172.30.137.140:{}]
	I0318 13:11:00.685281    2404 command_runner.go:130] ! I0318 13:06:30.792901       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.3.0/24] 
	I0318 13:11:00.685281    2404 command_runner.go:130] ! I0318 13:06:40.809219       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:00.685281    2404 command_runner.go:130] ! I0318 13:06:40.809451       1 main.go:227] handling current node
	I0318 13:11:00.685281    2404 command_runner.go:130] ! I0318 13:06:40.809484       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:00.685281    2404 command_runner.go:130] ! I0318 13:06:40.809508       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:00.685877    2404 command_runner.go:130] ! I0318 13:06:40.809841       1 main.go:223] Handling node with IPs: map[172.30.137.140:{}]
	I0318 13:11:00.685877    2404 command_runner.go:130] ! I0318 13:06:40.810075       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.3.0/24] 
	I0318 13:11:00.685877    2404 command_runner.go:130] ! I0318 13:06:50.822556       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:00.685977    2404 command_runner.go:130] ! I0318 13:06:50.822612       1 main.go:227] handling current node
	I0318 13:11:00.685977    2404 command_runner.go:130] ! I0318 13:06:50.822667       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:00.685977    2404 command_runner.go:130] ! I0318 13:06:50.822680       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:00.685977    2404 command_runner.go:130] ! I0318 13:06:50.822925       1 main.go:223] Handling node with IPs: map[172.30.137.140:{}]
	I0318 13:11:00.685977    2404 command_runner.go:130] ! I0318 13:06:50.823171       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.3.0/24] 
	I0318 13:11:00.685977    2404 command_runner.go:130] ! I0318 13:07:00.837923       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:00.685977    2404 command_runner.go:130] ! I0318 13:07:00.838008       1 main.go:227] handling current node
	I0318 13:11:00.685977    2404 command_runner.go:130] ! I0318 13:07:00.838022       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:00.685977    2404 command_runner.go:130] ! I0318 13:07:00.838030       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:00.685977    2404 command_runner.go:130] ! I0318 13:07:00.838429       1 main.go:223] Handling node with IPs: map[172.30.137.140:{}]
	I0318 13:11:00.685977    2404 command_runner.go:130] ! I0318 13:07:00.838666       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.3.0/24] 
	I0318 13:11:00.685977    2404 command_runner.go:130] ! I0318 13:07:10.854207       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:00.685977    2404 command_runner.go:130] ! I0318 13:07:10.854411       1 main.go:227] handling current node
	I0318 13:11:00.685977    2404 command_runner.go:130] ! I0318 13:07:10.854444       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:00.685977    2404 command_runner.go:130] ! I0318 13:07:10.854469       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:00.685977    2404 command_runner.go:130] ! I0318 13:07:10.854879       1 main.go:223] Handling node with IPs: map[172.30.137.140:{}]
	I0318 13:11:00.685977    2404 command_runner.go:130] ! I0318 13:07:10.855094       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.3.0/24] 
	I0318 13:11:00.685977    2404 command_runner.go:130] ! I0318 13:07:20.861534       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:00.685977    2404 command_runner.go:130] ! I0318 13:07:20.861671       1 main.go:227] handling current node
	I0318 13:11:00.685977    2404 command_runner.go:130] ! I0318 13:07:20.861685       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:00.685977    2404 command_runner.go:130] ! I0318 13:07:20.861692       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:00.685977    2404 command_runner.go:130] ! I0318 13:07:20.861818       1 main.go:223] Handling node with IPs: map[172.30.137.140:{}]
	I0318 13:11:00.685977    2404 command_runner.go:130] ! I0318 13:07:20.861845       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.3.0/24] 
	I0318 13:11:00.707088    2404 logs.go:123] Gathering logs for container status ...
	I0318 13:11:00.707088    2404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:11:00.800084    2404 command_runner.go:130] > CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	I0318 13:11:00.800257    2404 command_runner.go:130] > c5d2074be239f       8c811b4aec35f                                                                                         7 seconds ago        Running             busybox                   1                   e20878b8092c2       busybox-5b5d89c9d6-c2997
	I0318 13:11:00.800257    2404 command_runner.go:130] > 3c3bc988c74cd       ead0a4a53df89                                                                                         7 seconds ago        Running             coredns                   1                   97583cc14f115       coredns-5dd5756b68-456tm
	I0318 13:11:00.800257    2404 command_runner.go:130] > eadcf41dad509       6e38f40d628db                                                                                         25 seconds ago       Running             storage-provisioner       2                   41035eff3b7db       storage-provisioner
	I0318 13:11:00.800257    2404 command_runner.go:130] > c8e5ec25e910e       4950bb10b3f87                                                                                         About a minute ago   Running             kindnet-cni               1                   86d74dec812cf       kindnet-hhsxh
	I0318 13:11:00.800369    2404 command_runner.go:130] > 46c0cf90d385f       6e38f40d628db                                                                                         About a minute ago   Exited              storage-provisioner       1                   41035eff3b7db       storage-provisioner
	I0318 13:11:00.800369    2404 command_runner.go:130] > 163ccabc3882a       83f6cc407eed8                                                                                         About a minute ago   Running             kube-proxy                1                   a9f21749669fe       kube-proxy-mc5tv
	I0318 13:11:00.800369    2404 command_runner.go:130] > 5f0887d1e6913       73deb9a3f7025                                                                                         About a minute ago   Running             etcd                      0                   354f3c44a34fc       etcd-multinode-894400
	I0318 13:11:00.800369    2404 command_runner.go:130] > 66ee8be9fada7       e3db313c6dbc0                                                                                         About a minute ago   Running             kube-scheduler            1                   6fb3325d3c100       kube-scheduler-multinode-894400
	I0318 13:11:00.800369    2404 command_runner.go:130] > fc4430c7fa204       7fe0e6f37db33                                                                                         About a minute ago   Running             kube-apiserver            0                   bc7236a19957e       kube-apiserver-multinode-894400
	I0318 13:11:00.800369    2404 command_runner.go:130] > 4ad6784a187d6       d058aa5ab969c                                                                                         About a minute ago   Running             kube-controller-manager   1                   066206d4c52cb       kube-controller-manager-multinode-894400
	I0318 13:11:00.800637    2404 command_runner.go:130] > dd031b5cb1e85       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   19 minutes ago       Exited              busybox                   0                   a23c1189be7c3       busybox-5b5d89c9d6-c2997
	I0318 13:11:00.800637    2404 command_runner.go:130] > 693a64f7472fd       ead0a4a53df89                                                                                         23 minutes ago       Exited              coredns                   0                   d001e299e996b       coredns-5dd5756b68-456tm
	I0318 13:11:00.800637    2404 command_runner.go:130] > c4d7018ad23a7       kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988              23 minutes ago       Exited              kindnet-cni               0                   a47b1fb60692c       kindnet-hhsxh
	I0318 13:11:00.800720    2404 command_runner.go:130] > 9335855aab63d       83f6cc407eed8                                                                                         23 minutes ago       Exited              kube-proxy                0                   60e9cd749c8f6       kube-proxy-mc5tv
	I0318 13:11:00.800720    2404 command_runner.go:130] > e4d42739ce0e9       e3db313c6dbc0                                                                                         23 minutes ago       Exited              kube-scheduler            0                   82710777e700c       kube-scheduler-multinode-894400
	I0318 13:11:00.800720    2404 command_runner.go:130] > 7aa5cf4ec378e       d058aa5ab969c                                                                                         23 minutes ago       Exited              kube-controller-manager   0                   5485f509825d9       kube-controller-manager-multinode-894400
	I0318 13:11:00.803007    2404 logs.go:123] Gathering logs for dmesg ...
	I0318 13:11:00.803175    2404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:11:00.828169    2404 command_runner.go:130] > [Mar18 13:08] You have booted with nomodeset. This means your GPU drivers are DISABLED
	I0318 13:11:00.828169    2404 command_runner.go:130] > [  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	I0318 13:11:00.828169    2404 command_runner.go:130] > [  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	I0318 13:11:00.828169    2404 command_runner.go:130] > [  +0.127438] RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
	I0318 13:11:00.828169    2404 command_runner.go:130] > [  +0.022457] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	I0318 13:11:00.828169    2404 command_runner.go:130] > [  +0.000000] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	I0318 13:11:00.828169    2404 command_runner.go:130] > [  +0.000000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	I0318 13:11:00.828169    2404 command_runner.go:130] > [  +0.054196] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	I0318 13:11:00.828169    2404 command_runner.go:130] > [  +0.018424] * Found PM-Timer Bug on the chipset. Due to workarounds for a bug,
	I0318 13:11:00.828169    2404 command_runner.go:130] >               * this clock source is slow. Consider trying other clock sources
	I0318 13:11:00.828169    2404 command_runner.go:130] > [  +4.800453] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	I0318 13:11:00.828169    2404 command_runner.go:130] > [  +1.267636] psmouse serio1: trackpoint: failed to get extended button data, assuming 3 buttons
	I0318 13:11:00.829151    2404 command_runner.go:130] > [  +1.056053] systemd-fstab-generator[113]: Ignoring "noauto" option for root device
	I0318 13:11:00.829151    2404 command_runner.go:130] > [  +6.778211] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	I0318 13:11:00.829151    2404 command_runner.go:130] > [  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	I0318 13:11:00.829194    2404 command_runner.go:130] > [  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	I0318 13:11:00.829194    2404 command_runner.go:130] > [Mar18 13:09] systemd-fstab-generator[643]: Ignoring "noauto" option for root device
	I0318 13:11:00.829194    2404 command_runner.go:130] > [  +0.160643] systemd-fstab-generator[654]: Ignoring "noauto" option for root device
	I0318 13:11:00.829194    2404 command_runner.go:130] > [ +25.236158] systemd-fstab-generator[979]: Ignoring "noauto" option for root device
	I0318 13:11:00.829194    2404 command_runner.go:130] > [  +0.093711] kauditd_printk_skb: 73 callbacks suppressed
	I0318 13:11:00.829194    2404 command_runner.go:130] > [  +0.488652] systemd-fstab-generator[1018]: Ignoring "noauto" option for root device
	I0318 13:11:00.829194    2404 command_runner.go:130] > [  +0.198307] systemd-fstab-generator[1030]: Ignoring "noauto" option for root device
	I0318 13:11:00.829194    2404 command_runner.go:130] > [  +0.213157] systemd-fstab-generator[1044]: Ignoring "noauto" option for root device
	I0318 13:11:00.829194    2404 command_runner.go:130] > [  +2.866452] systemd-fstab-generator[1231]: Ignoring "noauto" option for root device
	I0318 13:11:00.829194    2404 command_runner.go:130] > [  +0.191537] systemd-fstab-generator[1243]: Ignoring "noauto" option for root device
	I0318 13:11:00.829194    2404 command_runner.go:130] > [  +0.163904] systemd-fstab-generator[1255]: Ignoring "noauto" option for root device
	I0318 13:11:00.829194    2404 command_runner.go:130] > [  +0.280650] systemd-fstab-generator[1270]: Ignoring "noauto" option for root device
	I0318 13:11:00.829194    2404 command_runner.go:130] > [  +0.822319] systemd-fstab-generator[1393]: Ignoring "noauto" option for root device
	I0318 13:11:00.829194    2404 command_runner.go:130] > [  +0.094744] kauditd_printk_skb: 205 callbacks suppressed
	I0318 13:11:00.829194    2404 command_runner.go:130] > [  +3.177820] systemd-fstab-generator[1525]: Ignoring "noauto" option for root device
	I0318 13:11:00.829194    2404 command_runner.go:130] > [  +1.898187] kauditd_printk_skb: 64 callbacks suppressed
	I0318 13:11:00.829194    2404 command_runner.go:130] > [  +5.227041] kauditd_printk_skb: 10 callbacks suppressed
	I0318 13:11:00.829194    2404 command_runner.go:130] > [  +4.065141] systemd-fstab-generator[3089]: Ignoring "noauto" option for root device
	I0318 13:11:00.829194    2404 command_runner.go:130] > [Mar18 13:10] kauditd_printk_skb: 70 callbacks suppressed
	I0318 13:11:00.830941    2404 logs.go:123] Gathering logs for kube-apiserver [fc4430c7fa20] ...
	I0318 13:11:00.830941    2404 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc4430c7fa20"
	I0318 13:11:00.857820    2404 command_runner.go:130] ! I0318 13:09:45.117348       1 options.go:220] external host was not specified, using 172.30.130.156
	I0318 13:11:00.858209    2404 command_runner.go:130] ! I0318 13:09:45.120803       1 server.go:148] Version: v1.28.4
	I0318 13:11:00.858209    2404 command_runner.go:130] ! I0318 13:09:45.120988       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 13:11:00.858209    2404 command_runner.go:130] ! I0318 13:09:45.770080       1 shared_informer.go:311] Waiting for caches to sync for node_authorizer
	I0318 13:11:00.858296    2404 command_runner.go:130] ! I0318 13:09:45.795010       1 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0318 13:11:00.858376    2404 command_runner.go:130] ! I0318 13:09:45.795318       1 plugins.go:161] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0318 13:11:00.858376    2404 command_runner.go:130] ! I0318 13:09:45.795878       1 instance.go:298] Using reconciler: lease
	I0318 13:11:00.858376    2404 command_runner.go:130] ! I0318 13:09:46.836486       1 handler.go:232] Adding GroupVersion apiextensions.k8s.io v1 to ResourceManager
	I0318 13:11:00.858451    2404 command_runner.go:130] ! W0318 13:09:46.836605       1 genericapiserver.go:744] Skipping API apiextensions.k8s.io/v1beta1 because it has no resources.
	I0318 13:11:00.858480    2404 command_runner.go:130] ! I0318 13:09:47.074638       1 handler.go:232] Adding GroupVersion  v1 to ResourceManager
	I0318 13:11:00.858510    2404 command_runner.go:130] ! I0318 13:09:47.074978       1 instance.go:709] API group "internal.apiserver.k8s.io" is not enabled, skipping.
	I0318 13:11:00.858573    2404 command_runner.go:130] ! I0318 13:09:47.452713       1 instance.go:709] API group "resource.k8s.io" is not enabled, skipping.
	I0318 13:11:00.858632    2404 command_runner.go:130] ! I0318 13:09:47.465860       1 handler.go:232] Adding GroupVersion authentication.k8s.io v1 to ResourceManager
	I0318 13:11:00.858657    2404 command_runner.go:130] ! W0318 13:09:47.465973       1 genericapiserver.go:744] Skipping API authentication.k8s.io/v1beta1 because it has no resources.
	I0318 13:11:00.858686    2404 command_runner.go:130] ! W0318 13:09:47.465981       1 genericapiserver.go:744] Skipping API authentication.k8s.io/v1alpha1 because it has no resources.
	I0318 13:11:00.858749    2404 command_runner.go:130] ! I0318 13:09:47.466706       1 handler.go:232] Adding GroupVersion authorization.k8s.io v1 to ResourceManager
	I0318 13:11:00.858749    2404 command_runner.go:130] ! W0318 13:09:47.466787       1 genericapiserver.go:744] Skipping API authorization.k8s.io/v1beta1 because it has no resources.
	I0318 13:11:00.858820    2404 command_runner.go:130] ! I0318 13:09:47.467862       1 handler.go:232] Adding GroupVersion autoscaling v2 to ResourceManager
	I0318 13:11:00.858820    2404 command_runner.go:130] ! I0318 13:09:47.468840       1 handler.go:232] Adding GroupVersion autoscaling v1 to ResourceManager
	I0318 13:11:00.858852    2404 command_runner.go:130] ! W0318 13:09:47.468926       1 genericapiserver.go:744] Skipping API autoscaling/v2beta1 because it has no resources.
	I0318 13:11:00.858880    2404 command_runner.go:130] ! W0318 13:09:47.468934       1 genericapiserver.go:744] Skipping API autoscaling/v2beta2 because it has no resources.
	I0318 13:11:00.858880    2404 command_runner.go:130] ! I0318 13:09:47.470928       1 handler.go:232] Adding GroupVersion batch v1 to ResourceManager
	I0318 13:11:00.858880    2404 command_runner.go:130] ! W0318 13:09:47.471074       1 genericapiserver.go:744] Skipping API batch/v1beta1 because it has no resources.
	I0318 13:11:00.858880    2404 command_runner.go:130] ! I0318 13:09:47.472121       1 handler.go:232] Adding GroupVersion certificates.k8s.io v1 to ResourceManager
	I0318 13:11:00.858880    2404 command_runner.go:130] ! W0318 13:09:47.472195       1 genericapiserver.go:744] Skipping API certificates.k8s.io/v1beta1 because it has no resources.
	I0318 13:11:00.858880    2404 command_runner.go:130] ! W0318 13:09:47.472202       1 genericapiserver.go:744] Skipping API certificates.k8s.io/v1alpha1 because it has no resources.
	I0318 13:11:00.858880    2404 command_runner.go:130] ! I0318 13:09:47.472773       1 handler.go:232] Adding GroupVersion coordination.k8s.io v1 to ResourceManager
	I0318 13:11:00.858880    2404 command_runner.go:130] ! W0318 13:09:47.472852       1 genericapiserver.go:744] Skipping API coordination.k8s.io/v1beta1 because it has no resources.
	I0318 13:11:00.858880    2404 command_runner.go:130] ! W0318 13:09:47.472898       1 genericapiserver.go:744] Skipping API discovery.k8s.io/v1beta1 because it has no resources.
	I0318 13:11:00.858880    2404 command_runner.go:130] ! I0318 13:09:47.473727       1 handler.go:232] Adding GroupVersion discovery.k8s.io v1 to ResourceManager
	I0318 13:11:00.858880    2404 command_runner.go:130] ! I0318 13:09:47.476475       1 handler.go:232] Adding GroupVersion networking.k8s.io v1 to ResourceManager
	I0318 13:11:00.858880    2404 command_runner.go:130] ! W0318 13:09:47.476612       1 genericapiserver.go:744] Skipping API networking.k8s.io/v1beta1 because it has no resources.
	I0318 13:11:00.858880    2404 command_runner.go:130] ! W0318 13:09:47.476620       1 genericapiserver.go:744] Skipping API networking.k8s.io/v1alpha1 because it has no resources.
	I0318 13:11:00.858880    2404 command_runner.go:130] ! I0318 13:09:47.477234       1 handler.go:232] Adding GroupVersion node.k8s.io v1 to ResourceManager
	I0318 13:11:00.858880    2404 command_runner.go:130] ! W0318 13:09:47.477314       1 genericapiserver.go:744] Skipping API node.k8s.io/v1beta1 because it has no resources.
	I0318 13:11:00.858880    2404 command_runner.go:130] ! W0318 13:09:47.477321       1 genericapiserver.go:744] Skipping API node.k8s.io/v1alpha1 because it has no resources.
	I0318 13:11:00.858880    2404 command_runner.go:130] ! I0318 13:09:47.478143       1 handler.go:232] Adding GroupVersion policy v1 to ResourceManager
	I0318 13:11:00.858880    2404 command_runner.go:130] ! W0318 13:09:47.478217       1 genericapiserver.go:744] Skipping API policy/v1beta1 because it has no resources.
	I0318 13:11:00.858880    2404 command_runner.go:130] ! I0318 13:09:47.480195       1 handler.go:232] Adding GroupVersion rbac.authorization.k8s.io v1 to ResourceManager
	I0318 13:11:00.858880    2404 command_runner.go:130] ! W0318 13:09:47.480271       1 genericapiserver.go:744] Skipping API rbac.authorization.k8s.io/v1beta1 because it has no resources.
	I0318 13:11:00.858880    2404 command_runner.go:130] ! W0318 13:09:47.480279       1 genericapiserver.go:744] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
	I0318 13:11:00.858880    2404 command_runner.go:130] ! I0318 13:09:47.480731       1 handler.go:232] Adding GroupVersion scheduling.k8s.io v1 to ResourceManager
	I0318 13:11:00.858880    2404 command_runner.go:130] ! W0318 13:09:47.480812       1 genericapiserver.go:744] Skipping API scheduling.k8s.io/v1beta1 because it has no resources.
	I0318 13:11:00.858880    2404 command_runner.go:130] ! W0318 13:09:47.480819       1 genericapiserver.go:744] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
	I0318 13:11:00.858880    2404 command_runner.go:130] ! I0318 13:09:47.493837       1 handler.go:232] Adding GroupVersion storage.k8s.io v1 to ResourceManager
	I0318 13:11:00.858880    2404 command_runner.go:130] ! W0318 13:09:47.494098       1 genericapiserver.go:744] Skipping API storage.k8s.io/v1beta1 because it has no resources.
	I0318 13:11:00.858880    2404 command_runner.go:130] ! W0318 13:09:47.494198       1 genericapiserver.go:744] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
	I0318 13:11:00.858880    2404 command_runner.go:130] ! I0318 13:09:47.499689       1 handler.go:232] Adding GroupVersion flowcontrol.apiserver.k8s.io v1beta3 to ResourceManager
	I0318 13:11:00.858880    2404 command_runner.go:130] ! I0318 13:09:47.506631       1 handler.go:232] Adding GroupVersion flowcontrol.apiserver.k8s.io v1beta2 to ResourceManager
	I0318 13:11:00.858880    2404 command_runner.go:130] ! W0318 13:09:47.506664       1 genericapiserver.go:744] Skipping API flowcontrol.apiserver.k8s.io/v1beta1 because it has no resources.
	I0318 13:11:00.858880    2404 command_runner.go:130] ! W0318 13:09:47.506671       1 genericapiserver.go:744] Skipping API flowcontrol.apiserver.k8s.io/v1alpha1 because it has no resources.
	I0318 13:11:00.858880    2404 command_runner.go:130] ! I0318 13:09:47.512288       1 handler.go:232] Adding GroupVersion apps v1 to ResourceManager
	I0318 13:11:00.858880    2404 command_runner.go:130] ! W0318 13:09:47.512371       1 genericapiserver.go:744] Skipping API apps/v1beta2 because it has no resources.
	I0318 13:11:00.858880    2404 command_runner.go:130] ! W0318 13:09:47.512378       1 genericapiserver.go:744] Skipping API apps/v1beta1 because it has no resources.
	I0318 13:11:00.858880    2404 command_runner.go:130] ! I0318 13:09:47.513443       1 handler.go:232] Adding GroupVersion admissionregistration.k8s.io v1 to ResourceManager
	I0318 13:11:00.858880    2404 command_runner.go:130] ! W0318 13:09:47.513547       1 genericapiserver.go:744] Skipping API admissionregistration.k8s.io/v1beta1 because it has no resources.
	I0318 13:11:00.858880    2404 command_runner.go:130] ! W0318 13:09:47.513557       1 genericapiserver.go:744] Skipping API admissionregistration.k8s.io/v1alpha1 because it has no resources.
	I0318 13:11:00.858880    2404 command_runner.go:130] ! I0318 13:09:47.514339       1 handler.go:232] Adding GroupVersion events.k8s.io v1 to ResourceManager
	I0318 13:11:00.858880    2404 command_runner.go:130] ! W0318 13:09:47.514435       1 genericapiserver.go:744] Skipping API events.k8s.io/v1beta1 because it has no resources.
	I0318 13:11:00.859506    2404 command_runner.go:130] ! I0318 13:09:47.536002       1 handler.go:232] Adding GroupVersion apiregistration.k8s.io v1 to ResourceManager
	I0318 13:11:00.859506    2404 command_runner.go:130] ! W0318 13:09:47.536061       1 genericapiserver.go:744] Skipping API apiregistration.k8s.io/v1beta1 because it has no resources.
	I0318 13:11:00.859506    2404 command_runner.go:130] ! I0318 13:09:48.221475       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0318 13:11:00.859506    2404 command_runner.go:130] ! I0318 13:09:48.221960       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0318 13:11:00.859506    2404 command_runner.go:130] ! I0318 13:09:48.222438       1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	I0318 13:11:00.859506    2404 command_runner.go:130] ! I0318 13:09:48.222942       1 secure_serving.go:213] Serving securely on [::]:8443
	I0318 13:11:00.859506    2404 command_runner.go:130] ! I0318 13:09:48.223022       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0318 13:11:00.859670    2404 command_runner.go:130] ! I0318 13:09:48.223440       1 controller.go:78] Starting OpenAPI AggregationController
	I0318 13:11:00.859670    2404 command_runner.go:130] ! I0318 13:09:48.224862       1 gc_controller.go:78] Starting apiserver lease garbage collector
	I0318 13:11:00.859670    2404 command_runner.go:130] ! I0318 13:09:48.225271       1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
	I0318 13:11:00.859670    2404 command_runner.go:130] ! I0318 13:09:48.225417       1 shared_informer.go:311] Waiting for caches to sync for cluster_authentication_trust_controller
	I0318 13:11:00.859670    2404 command_runner.go:130] ! I0318 13:09:48.225564       1 apf_controller.go:372] Starting API Priority and Fairness config controller
	I0318 13:11:00.859756    2404 command_runner.go:130] ! I0318 13:09:48.228940       1 gc_controller.go:78] Starting apiserver lease garbage collector
	I0318 13:11:00.859756    2404 command_runner.go:130] ! I0318 13:09:48.229462       1 controller.go:116] Starting legacy_token_tracking_controller
	I0318 13:11:00.859756    2404 command_runner.go:130] ! I0318 13:09:48.229644       1 shared_informer.go:311] Waiting for caches to sync for configmaps
	I0318 13:11:00.859756    2404 command_runner.go:130] ! I0318 13:09:48.230522       1 system_namespaces_controller.go:67] Starting system namespaces controller
	I0318 13:11:00.859756    2404 command_runner.go:130] ! I0318 13:09:48.230832       1 controller.go:80] Starting OpenAPI V3 AggregationController
	I0318 13:11:00.859836    2404 command_runner.go:130] ! I0318 13:09:48.231097       1 aggregator.go:164] waiting for initial CRD sync...
	I0318 13:11:00.859836    2404 command_runner.go:130] ! I0318 13:09:48.231395       1 customresource_discovery_controller.go:289] Starting DiscoveryController
	I0318 13:11:00.859836    2404 command_runner.go:130] ! I0318 13:09:48.231642       1 available_controller.go:423] Starting AvailableConditionController
	I0318 13:11:00.859902    2404 command_runner.go:130] ! I0318 13:09:48.231846       1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
	I0318 13:11:00.859902    2404 command_runner.go:130] ! I0318 13:09:48.232024       1 dynamic_serving_content.go:132] "Starting controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	I0318 13:11:00.859959    2404 command_runner.go:130] ! I0318 13:09:48.232223       1 apiservice_controller.go:97] Starting APIServiceRegistrationController
	I0318 13:11:00.859982    2404 command_runner.go:130] ! I0318 13:09:48.232638       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
	I0318 13:11:00.860068    2404 command_runner.go:130] ! I0318 13:09:48.233228       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0318 13:11:00.860068    2404 command_runner.go:130] ! I0318 13:09:48.233501       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0318 13:11:00.860131    2404 command_runner.go:130] ! I0318 13:09:48.242598       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0318 13:11:00.860154    2404 command_runner.go:130] ! I0318 13:09:48.242850       1 shared_informer.go:311] Waiting for caches to sync for crd-autoregister
	I0318 13:11:00.860181    2404 command_runner.go:130] ! I0318 13:09:48.243085       1 controller.go:134] Starting OpenAPI controller
	I0318 13:11:00.860237    2404 command_runner.go:130] ! I0318 13:09:48.243289       1 controller.go:85] Starting OpenAPI V3 controller
	I0318 13:11:00.860237    2404 command_runner.go:130] ! I0318 13:09:48.243558       1 naming_controller.go:291] Starting NamingConditionController
	I0318 13:11:00.860315    2404 command_runner.go:130] ! I0318 13:09:48.243852       1 establishing_controller.go:76] Starting EstablishingController
	I0318 13:11:00.860341    2404 command_runner.go:130] ! I0318 13:09:48.244899       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0318 13:11:00.860398    2404 command_runner.go:130] ! I0318 13:09:48.245178       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0318 13:11:00.860398    2404 command_runner.go:130] ! I0318 13:09:48.245796       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0318 13:11:00.860398    2404 command_runner.go:130] ! I0318 13:09:48.231958       1 handler_discovery.go:412] Starting ResourceDiscoveryManager
	I0318 13:11:00.860398    2404 command_runner.go:130] ! I0318 13:09:48.403749       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0318 13:11:00.860502    2404 command_runner.go:130] ! I0318 13:09:48.426183       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I0318 13:11:00.860502    2404 command_runner.go:130] ! I0318 13:09:48.426213       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I0318 13:11:00.860502    2404 command_runner.go:130] ! I0318 13:09:48.426382       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0318 13:11:00.860502    2404 command_runner.go:130] ! I0318 13:09:48.432175       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0318 13:11:00.860502    2404 command_runner.go:130] ! I0318 13:09:48.433073       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0318 13:11:00.860502    2404 command_runner.go:130] ! I0318 13:09:48.433297       1 shared_informer.go:318] Caches are synced for configmaps
	I0318 13:11:00.860502    2404 command_runner.go:130] ! I0318 13:09:48.444484       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0318 13:11:00.860502    2404 command_runner.go:130] ! I0318 13:09:48.444708       1 aggregator.go:166] initial CRD sync complete...
	I0318 13:11:00.860502    2404 command_runner.go:130] ! I0318 13:09:48.444961       1 autoregister_controller.go:141] Starting autoregister controller
	I0318 13:11:00.860502    2404 command_runner.go:130] ! I0318 13:09:48.445263       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0318 13:11:00.860502    2404 command_runner.go:130] ! I0318 13:09:48.446443       1 cache.go:39] Caches are synced for autoregister controller
	I0318 13:11:00.860502    2404 command_runner.go:130] ! I0318 13:09:48.471536       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0318 13:11:00.860502    2404 command_runner.go:130] ! I0318 13:09:49.257477       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0318 13:11:00.860502    2404 command_runner.go:130] ! W0318 13:09:49.806994       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [172.30.130.156]
	I0318 13:11:00.860502    2404 command_runner.go:130] ! I0318 13:09:49.809655       1 controller.go:624] quota admission added evaluator for: endpoints
	I0318 13:11:00.860502    2404 command_runner.go:130] ! I0318 13:09:49.821460       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0318 13:11:00.860502    2404 command_runner.go:130] ! I0318 13:09:51.622752       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0318 13:11:00.860502    2404 command_runner.go:130] ! I0318 13:09:51.799195       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0318 13:11:00.860502    2404 command_runner.go:130] ! I0318 13:09:51.812022       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0318 13:11:00.860502    2404 command_runner.go:130] ! I0318 13:09:51.930541       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0318 13:11:00.860502    2404 command_runner.go:130] ! I0318 13:09:51.942099       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0318 13:11:00.868261    2404 logs.go:123] Gathering logs for kube-scheduler [e4d42739ce0e] ...
	I0318 13:11:00.868261    2404 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4d42739ce0e"
	I0318 13:11:00.894583    2404 command_runner.go:130] ! I0318 12:47:23.427784       1 serving.go:348] Generated self-signed cert in-memory
	I0318 13:11:00.894583    2404 command_runner.go:130] ! W0318 12:47:24.381993       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	I0318 13:11:00.895787    2404 command_runner.go:130] ! W0318 12:47:24.382186       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	I0318 13:11:00.895866    2404 command_runner.go:130] ! W0318 12:47:24.382237       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	I0318 13:11:00.895905    2404 command_runner.go:130] ! W0318 12:47:24.382251       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0318 13:11:00.895905    2404 command_runner.go:130] ! I0318 12:47:24.461225       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.4"
	I0318 13:11:00.895905    2404 command_runner.go:130] ! I0318 12:47:24.461511       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 13:11:00.895954    2404 command_runner.go:130] ! I0318 12:47:24.465946       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0318 13:11:00.895994    2404 command_runner.go:130] ! I0318 12:47:24.466246       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0318 13:11:00.896313    2404 command_runner.go:130] ! I0318 12:47:24.466280       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0318 13:11:00.896536    2404 command_runner.go:130] ! I0318 12:47:24.473793       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0318 13:11:00.897482    2404 command_runner.go:130] ! W0318 12:47:24.487135       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0318 13:11:00.897511    2404 command_runner.go:130] ! E0318 12:47:24.487240       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0318 13:11:00.897511    2404 command_runner.go:130] ! W0318 12:47:24.519325       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0318 13:11:00.897511    2404 command_runner.go:130] ! E0318 12:47:24.519853       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0318 13:11:00.897511    2404 command_runner.go:130] ! W0318 12:47:24.520361       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0318 13:11:00.897511    2404 command_runner.go:130] ! E0318 12:47:24.520484       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0318 13:11:00.898059    2404 command_runner.go:130] ! W0318 12:47:24.520711       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0318 13:11:00.898059    2404 command_runner.go:130] ! E0318 12:47:24.522735       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0318 13:11:00.898107    2404 command_runner.go:130] ! W0318 12:47:24.523312       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0318 13:11:00.898107    2404 command_runner.go:130] ! E0318 12:47:24.523462       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0318 13:11:00.898107    2404 command_runner.go:130] ! W0318 12:47:24.523710       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0318 13:11:00.898107    2404 command_runner.go:130] ! E0318 12:47:24.523900       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0318 13:11:00.898107    2404 command_runner.go:130] ! W0318 12:47:24.524226       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0318 13:11:00.898107    2404 command_runner.go:130] ! E0318 12:47:24.524422       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0318 13:11:00.898107    2404 command_runner.go:130] ! W0318 12:47:24.524710       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0318 13:11:00.898107    2404 command_runner.go:130] ! E0318 12:47:24.525125       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0318 13:11:00.898107    2404 command_runner.go:130] ! W0318 12:47:24.525523       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0318 13:11:00.898107    2404 command_runner.go:130] ! E0318 12:47:24.525746       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0318 13:11:00.898107    2404 command_runner.go:130] ! W0318 12:47:24.526240       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0318 13:11:00.898107    2404 command_runner.go:130] ! E0318 12:47:24.526443       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0318 13:11:00.898107    2404 command_runner.go:130] ! W0318 12:47:24.526703       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0318 13:11:00.898685    2404 command_runner.go:130] ! E0318 12:47:24.526852       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0318 13:11:00.898797    2404 command_runner.go:130] ! W0318 12:47:24.527382       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0318 13:11:00.898919    2404 command_runner.go:130] ! E0318 12:47:24.527873       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0318 13:11:00.898919    2404 command_runner.go:130] ! W0318 12:47:24.528117       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0318 13:11:00.898919    2404 command_runner.go:130] ! E0318 12:47:24.528748       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0318 13:11:00.898978    2404 command_runner.go:130] ! W0318 12:47:24.529179       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0318 13:11:00.899120    2404 command_runner.go:130] ! E0318 12:47:24.529832       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0318 13:11:00.899120    2404 command_runner.go:130] ! W0318 12:47:24.530406       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0318 13:11:00.899187    2404 command_runner.go:130] ! E0318 12:47:24.532696       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0318 13:11:00.899242    2404 command_runner.go:130] ! W0318 12:47:25.371082       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0318 13:11:00.899262    2404 command_runner.go:130] ! E0318 12:47:25.371130       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0318 13:11:00.899262    2404 command_runner.go:130] ! W0318 12:47:25.374605       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0318 13:11:00.899345    2404 command_runner.go:130] ! E0318 12:47:25.374678       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0318 13:11:00.899426    2404 command_runner.go:130] ! W0318 12:47:25.400777       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0318 13:11:00.899469    2404 command_runner.go:130] ! E0318 12:47:25.400820       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0318 13:11:00.899519    2404 command_runner.go:130] ! W0318 12:47:25.434442       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0318 13:11:00.899519    2404 command_runner.go:130] ! E0318 12:47:25.434526       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0318 13:11:00.899519    2404 command_runner.go:130] ! W0318 12:47:25.456878       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0318 13:11:00.899519    2404 command_runner.go:130] ! E0318 12:47:25.457121       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0318 13:11:00.899519    2404 command_runner.go:130] ! W0318 12:47:25.744652       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0318 13:11:00.899519    2404 command_runner.go:130] ! E0318 12:47:25.744733       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0318 13:11:00.899519    2404 command_runner.go:130] ! W0318 12:47:25.777073       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0318 13:11:00.899519    2404 command_runner.go:130] ! E0318 12:47:25.777145       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0318 13:11:00.899519    2404 command_runner.go:130] ! W0318 12:47:25.850949       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0318 13:11:00.899519    2404 command_runner.go:130] ! E0318 12:47:25.850985       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0318 13:11:00.899519    2404 command_runner.go:130] ! W0318 12:47:25.876908       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0318 13:11:00.899519    2404 command_runner.go:130] ! E0318 12:47:25.877170       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0318 13:11:00.899519    2404 command_runner.go:130] ! W0318 12:47:25.892072       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0318 13:11:00.899519    2404 command_runner.go:130] ! E0318 12:47:25.892099       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0318 13:11:00.899519    2404 command_runner.go:130] ! W0318 12:47:25.988864       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0318 13:11:00.899519    2404 command_runner.go:130] ! E0318 12:47:25.988912       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0318 13:11:00.900051    2404 command_runner.go:130] ! W0318 12:47:26.044749       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0318 13:11:00.900051    2404 command_runner.go:130] ! E0318 12:47:26.044834       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0318 13:11:00.900127    2404 command_runner.go:130] ! W0318 12:47:26.067659       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0318 13:11:00.900127    2404 command_runner.go:130] ! E0318 12:47:26.068250       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0318 13:11:00.900127    2404 command_runner.go:130] ! I0318 12:47:28.178584       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0318 13:11:00.900127    2404 command_runner.go:130] ! I0318 13:07:24.107367       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0318 13:11:00.900215    2404 command_runner.go:130] ! I0318 13:07:24.107975       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0318 13:11:00.900239    2404 command_runner.go:130] ! E0318 13:07:24.108193       1 run.go:74] "command failed" err="finished without leader elect"
	I0318 13:11:00.910075    2404 logs.go:123] Gathering logs for kube-proxy [163ccabc3882] ...
	I0318 13:11:00.910075    2404 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 163ccabc3882"
	I0318 13:11:00.938523    2404 command_runner.go:130] ! I0318 13:09:50.786718       1 server_others.go:69] "Using iptables proxy"
	I0318 13:11:00.938570    2404 command_runner.go:130] ! I0318 13:09:50.833991       1 node.go:141] Successfully retrieved node IP: 172.30.130.156
	I0318 13:11:00.938570    2404 command_runner.go:130] ! I0318 13:09:50.913665       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0318 13:11:00.938570    2404 command_runner.go:130] ! I0318 13:09:50.913704       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0318 13:11:00.938570    2404 command_runner.go:130] ! I0318 13:09:50.924640       1 server_others.go:152] "Using iptables Proxier"
	I0318 13:11:00.938747    2404 command_runner.go:130] ! I0318 13:09:50.925588       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0318 13:11:00.939527    2404 command_runner.go:130] ! I0318 13:09:50.926722       1 server.go:846] "Version info" version="v1.28.4"
	I0318 13:11:00.939527    2404 command_runner.go:130] ! I0318 13:09:50.926981       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 13:11:00.939588    2404 command_runner.go:130] ! I0318 13:09:50.938764       1 config.go:188] "Starting service config controller"
	I0318 13:11:00.939588    2404 command_runner.go:130] ! I0318 13:09:50.949206       1 config.go:97] "Starting endpoint slice config controller"
	I0318 13:11:00.939588    2404 command_runner.go:130] ! I0318 13:09:50.949220       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0318 13:11:00.939647    2404 command_runner.go:130] ! I0318 13:09:50.953299       1 config.go:315] "Starting node config controller"
	I0318 13:11:00.939647    2404 command_runner.go:130] ! I0318 13:09:50.979020       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0318 13:11:00.939647    2404 command_runner.go:130] ! I0318 13:09:50.990249       1 shared_informer.go:318] Caches are synced for node config
	I0318 13:11:00.939647    2404 command_runner.go:130] ! I0318 13:09:50.958488       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0318 13:11:00.939707    2404 command_runner.go:130] ! I0318 13:09:50.996356       1 shared_informer.go:318] Caches are synced for service config
	I0318 13:11:00.939749    2404 command_runner.go:130] ! I0318 13:09:51.051947       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0318 13:11:00.942342    2404 logs.go:123] Gathering logs for kube-controller-manager [4ad6784a187d] ...
	I0318 13:11:00.942421    2404 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ad6784a187d"
	I0318 13:11:00.967283    2404 command_runner.go:130] ! I0318 13:09:46.053304       1 serving.go:348] Generated self-signed cert in-memory
	I0318 13:11:00.967591    2404 command_runner.go:130] ! I0318 13:09:46.598188       1 controllermanager.go:189] "Starting" version="v1.28.4"
	I0318 13:11:00.967591    2404 command_runner.go:130] ! I0318 13:09:46.598275       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 13:11:00.967591    2404 command_runner.go:130] ! I0318 13:09:46.600550       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0318 13:11:00.967591    2404 command_runner.go:130] ! I0318 13:09:46.600856       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0318 13:11:00.967758    2404 command_runner.go:130] ! I0318 13:09:46.601228       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0318 13:11:00.967758    2404 command_runner.go:130] ! I0318 13:09:46.601416       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0318 13:11:00.967758    2404 command_runner.go:130] ! I0318 13:09:50.365580       1 shared_informer.go:311] Waiting for caches to sync for tokens
	I0318 13:11:00.967758    2404 command_runner.go:130] ! I0318 13:09:50.380467       1 controllermanager.go:642] "Started controller" controller="certificatesigningrequest-approving-controller"
	I0318 13:11:00.967860    2404 command_runner.go:130] ! I0318 13:09:50.380609       1 certificate_controller.go:115] "Starting certificate controller" name="csrapproving"
	I0318 13:11:00.967860    2404 command_runner.go:130] ! I0318 13:09:50.380622       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrapproving
	I0318 13:11:00.967860    2404 command_runner.go:130] ! I0318 13:09:50.396606       1 controllermanager.go:642] "Started controller" controller="certificatesigningrequest-cleaner-controller"
	I0318 13:11:00.967976    2404 command_runner.go:130] ! I0318 13:09:50.396766       1 cleaner.go:83] "Starting CSR cleaner controller"
	I0318 13:11:00.967976    2404 command_runner.go:130] ! I0318 13:09:50.466364       1 shared_informer.go:318] Caches are synced for tokens
	I0318 13:11:00.968058    2404 command_runner.go:130] ! I0318 13:10:00.425018       1 range_allocator.go:111] "No Secondary Service CIDR provided. Skipping filtering out secondary service addresses"
	I0318 13:11:00.968058    2404 command_runner.go:130] ! I0318 13:10:00.425185       1 controllermanager.go:642] "Started controller" controller="node-ipam-controller"
	I0318 13:11:00.968058    2404 command_runner.go:130] ! I0318 13:10:00.425608       1 node_ipam_controller.go:162] "Starting ipam controller"
	I0318 13:11:00.968136    2404 command_runner.go:130] ! I0318 13:10:00.425649       1 shared_informer.go:311] Waiting for caches to sync for node
	I0318 13:11:00.968136    2404 command_runner.go:130] ! I0318 13:10:00.429368       1 controllermanager.go:642] "Started controller" controller="root-ca-certificate-publisher-controller"
	I0318 13:11:00.968214    2404 command_runner.go:130] ! I0318 13:10:00.429570       1 publisher.go:102] "Starting root CA cert publisher controller"
	I0318 13:11:00.968260    2404 command_runner.go:130] ! I0318 13:10:00.429653       1 shared_informer.go:311] Waiting for caches to sync for crt configmap
	I0318 13:11:00.968313    2404 command_runner.go:130] ! I0318 13:10:00.432615       1 controllermanager.go:642] "Started controller" controller="endpointslice-mirroring-controller"
	I0318 13:11:00.968313    2404 command_runner.go:130] ! I0318 13:10:00.435149       1 endpointslicemirroring_controller.go:223] "Starting EndpointSliceMirroring controller"
	I0318 13:11:00.968360    2404 command_runner.go:130] ! I0318 13:10:00.435476       1 shared_informer.go:311] Waiting for caches to sync for endpoint_slice_mirroring
	I0318 13:11:00.968400    2404 command_runner.go:130] ! I0318 13:10:00.435957       1 controllermanager.go:642] "Started controller" controller="ttl-controller"
	I0318 13:11:00.968400    2404 command_runner.go:130] ! I0318 13:10:00.436324       1 ttl_controller.go:124] "Starting TTL controller"
	I0318 13:11:00.968449    2404 command_runner.go:130] ! I0318 13:10:00.436534       1 shared_informer.go:311] Waiting for caches to sync for TTL
	I0318 13:11:00.968449    2404 command_runner.go:130] ! E0318 13:10:00.440226       1 core.go:92] "Failed to start service controller" err="WARNING: no cloud provider provided, services of type LoadBalancer will fail"
	I0318 13:11:00.968449    2404 command_runner.go:130] ! I0318 13:10:00.440586       1 controllermanager.go:620] "Warning: skipping controller" controller="service-lb-controller"
	I0318 13:11:00.968539    2404 command_runner.go:130] ! E0318 13:10:00.443615       1 core.go:213] "Failed to start cloud node lifecycle controller" err="no cloud provider provided"
	I0318 13:11:00.968539    2404 command_runner.go:130] ! I0318 13:10:00.443912       1 controllermanager.go:620] "Warning: skipping controller" controller="cloud-node-lifecycle-controller"
	I0318 13:11:00.968618    2404 command_runner.go:130] ! I0318 13:10:00.446716       1 controllermanager.go:642] "Started controller" controller="ttl-after-finished-controller"
	I0318 13:11:00.968695    2404 command_runner.go:130] ! I0318 13:10:00.446764       1 ttlafterfinished_controller.go:109] "Starting TTL after finished controller"
	I0318 13:11:00.968695    2404 command_runner.go:130] ! I0318 13:10:00.447388       1 shared_informer.go:311] Waiting for caches to sync for TTL after finished
	I0318 13:11:00.968695    2404 command_runner.go:130] ! I0318 13:10:00.450136       1 controllermanager.go:642] "Started controller" controller="replicationcontroller-controller"
	I0318 13:11:00.968782    2404 command_runner.go:130] ! I0318 13:10:00.450514       1 replica_set.go:214] "Starting controller" name="replicationcontroller"
	I0318 13:11:00.968782    2404 command_runner.go:130] ! I0318 13:10:00.450816       1 shared_informer.go:311] Waiting for caches to sync for ReplicationController
	I0318 13:11:00.968861    2404 command_runner.go:130] ! I0318 13:10:00.482128       1 controllermanager.go:642] "Started controller" controller="namespace-controller"
	I0318 13:11:00.968938    2404 command_runner.go:130] ! I0318 13:10:00.482431       1 namespace_controller.go:197] "Starting namespace controller"
	I0318 13:11:00.968938    2404 command_runner.go:130] ! I0318 13:10:00.482564       1 shared_informer.go:311] Waiting for caches to sync for namespace
	I0318 13:11:00.969017    2404 command_runner.go:130] ! I0318 13:10:00.485138       1 controllermanager.go:642] "Started controller" controller="token-cleaner-controller"
	I0318 13:11:00.969017    2404 command_runner.go:130] ! I0318 13:10:00.485477       1 tokencleaner.go:112] "Starting token cleaner controller"
	I0318 13:11:00.969017    2404 command_runner.go:130] ! I0318 13:10:00.485637       1 shared_informer.go:311] Waiting for caches to sync for token_cleaner
	I0318 13:11:00.969096    2404 command_runner.go:130] ! I0318 13:10:00.485765       1 shared_informer.go:318] Caches are synced for token_cleaner
	I0318 13:11:00.969096    2404 command_runner.go:130] ! I0318 13:10:00.487736       1 controllermanager.go:642] "Started controller" controller="pod-garbage-collector-controller"
	I0318 13:11:00.969173    2404 command_runner.go:130] ! I0318 13:10:00.488836       1 gc_controller.go:101] "Starting GC controller"
	I0318 13:11:00.969173    2404 command_runner.go:130] ! I0318 13:10:00.489018       1 shared_informer.go:311] Waiting for caches to sync for GC
	I0318 13:11:00.969173    2404 command_runner.go:130] ! I0318 13:10:00.490586       1 controllermanager.go:642] "Started controller" controller="cronjob-controller"
	I0318 13:11:00.969251    2404 command_runner.go:130] ! I0318 13:10:00.491164       1 cronjob_controllerv2.go:139] "Starting cronjob controller v2"
	I0318 13:11:00.969251    2404 command_runner.go:130] ! I0318 13:10:00.491311       1 shared_informer.go:311] Waiting for caches to sync for cronjob
	I0318 13:11:00.969329    2404 command_runner.go:130] ! I0318 13:10:00.494562       1 controllermanager.go:642] "Started controller" controller="persistentvolume-binder-controller"
	I0318 13:11:00.969329    2404 command_runner.go:130] ! I0318 13:10:00.495002       1 pv_controller_base.go:319] "Starting persistent volume controller"
	I0318 13:11:00.969329    2404 command_runner.go:130] ! I0318 13:10:00.495133       1 shared_informer.go:311] Waiting for caches to sync for persistent volume
	I0318 13:11:00.969406    2404 command_runner.go:130] ! I0318 13:10:00.497694       1 controllermanager.go:642] "Started controller" controller="persistentvolume-expander-controller"
	I0318 13:11:00.969406    2404 command_runner.go:130] ! I0318 13:10:00.497986       1 expand_controller.go:328] "Starting expand controller"
	I0318 13:11:00.969497    2404 command_runner.go:130] ! I0318 13:10:00.498025       1 shared_informer.go:311] Waiting for caches to sync for expand
	I0318 13:11:00.969527    2404 command_runner.go:130] ! I0318 13:10:00.500933       1 controllermanager.go:642] "Started controller" controller="clusterrole-aggregation-controller"
	I0318 13:11:00.969527    2404 command_runner.go:130] ! I0318 13:10:00.502880       1 clusterroleaggregation_controller.go:189] "Starting ClusterRoleAggregator controller"
	I0318 13:11:00.969605    2404 command_runner.go:130] ! I0318 13:10:00.503102       1 shared_informer.go:311] Waiting for caches to sync for ClusterRoleAggregator
	I0318 13:11:00.969605    2404 command_runner.go:130] ! I0318 13:10:00.506760       1 controllermanager.go:642] "Started controller" controller="disruption-controller"
	I0318 13:11:00.969683    2404 command_runner.go:130] ! I0318 13:10:00.507227       1 disruption.go:433] "Sending events to api server."
	I0318 13:11:00.969683    2404 command_runner.go:130] ! I0318 13:10:00.507302       1 disruption.go:444] "Starting disruption controller"
	I0318 13:11:00.969683    2404 command_runner.go:130] ! I0318 13:10:00.507366       1 shared_informer.go:311] Waiting for caches to sync for disruption
	I0318 13:11:00.969761    2404 command_runner.go:130] ! I0318 13:10:00.509815       1 controllermanager.go:642] "Started controller" controller="deployment-controller"
	I0318 13:11:00.969761    2404 command_runner.go:130] ! I0318 13:10:00.510402       1 deployment_controller.go:168] "Starting controller" controller="deployment"
	I0318 13:11:00.969838    2404 command_runner.go:130] ! I0318 13:10:00.510478       1 shared_informer.go:311] Waiting for caches to sync for deployment
	I0318 13:11:00.969838    2404 command_runner.go:130] ! I0318 13:10:00.514582       1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-kubelet-serving"
	I0318 13:11:00.969915    2404 command_runner.go:130] ! I0318 13:10:00.514842       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I0318 13:11:00.969993    2404 command_runner.go:130] ! I0318 13:10:00.514832       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0318 13:11:00.969993    2404 command_runner.go:130] ! I0318 13:10:00.517859       1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-kubelet-client"
	I0318 13:11:00.970072    2404 command_runner.go:130] ! I0318 13:10:00.518134       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I0318 13:11:00.970072    2404 command_runner.go:130] ! I0318 13:10:00.518434       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0318 13:11:00.970149    2404 command_runner.go:130] ! I0318 13:10:00.519400       1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-kube-apiserver-client"
	I0318 13:11:00.970149    2404 command_runner.go:130] ! I0318 13:10:00.519576       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I0318 13:11:00.970227    2404 command_runner.go:130] ! I0318 13:10:00.519729       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0318 13:11:00.970338    2404 command_runner.go:130] ! I0318 13:10:00.519883       1 controllermanager.go:642] "Started controller" controller="certificatesigningrequest-signing-controller"
	I0318 13:11:00.970338    2404 command_runner.go:130] ! I0318 13:10:00.519902       1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-legacy-unknown"
	I0318 13:11:00.970338    2404 command_runner.go:130] ! I0318 13:10:00.520909       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0318 13:11:00.970422    2404 command_runner.go:130] ! I0318 13:10:00.519914       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0318 13:11:00.970422    2404 command_runner.go:130] ! I0318 13:10:00.524690       1 controllermanager.go:642] "Started controller" controller="persistentvolume-attach-detach-controller"
	I0318 13:11:00.970422    2404 command_runner.go:130] ! I0318 13:10:00.524967       1 attach_detach_controller.go:337] "Starting attach detach controller"
	I0318 13:11:00.970501    2404 command_runner.go:130] ! I0318 13:10:00.525267       1 shared_informer.go:311] Waiting for caches to sync for attach detach
	I0318 13:11:00.970521    2404 command_runner.go:130] ! I0318 13:10:00.528248       1 controllermanager.go:642] "Started controller" controller="serviceaccount-controller"
	I0318 13:11:00.970521    2404 command_runner.go:130] ! I0318 13:10:00.528509       1 serviceaccounts_controller.go:111] "Starting service account controller"
	I0318 13:11:00.970521    2404 command_runner.go:130] ! I0318 13:10:00.528721       1 shared_informer.go:311] Waiting for caches to sync for service account
	I0318 13:11:00.970574    2404 command_runner.go:130] ! I0318 13:10:00.532254       1 controllermanager.go:642] "Started controller" controller="daemonset-controller"
	I0318 13:11:00.970592    2404 command_runner.go:130] ! I0318 13:10:00.532687       1 daemon_controller.go:291] "Starting daemon sets controller"
	I0318 13:11:00.970592    2404 command_runner.go:130] ! I0318 13:10:00.532717       1 shared_informer.go:311] Waiting for caches to sync for daemon sets
	I0318 13:11:00.970592    2404 command_runner.go:130] ! I0318 13:10:00.544900       1 controllermanager.go:642] "Started controller" controller="horizontal-pod-autoscaler-controller"
	I0318 13:11:00.970647    2404 command_runner.go:130] ! I0318 13:10:00.545135       1 horizontal.go:200] "Starting HPA controller"
	I0318 13:11:00.970647    2404 command_runner.go:130] ! I0318 13:10:00.545195       1 shared_informer.go:311] Waiting for caches to sync for HPA
	I0318 13:11:00.970671    2404 command_runner.go:130] ! I0318 13:10:00.547641       1 controllermanager.go:642] "Started controller" controller="bootstrap-signer-controller"
	I0318 13:11:00.970671    2404 command_runner.go:130] ! I0318 13:10:00.548078       1 shared_informer.go:311] Waiting for caches to sync for bootstrap_signer
	I0318 13:11:00.970713    2404 command_runner.go:130] ! I0318 13:10:00.550784       1 node_lifecycle_controller.go:431] "Controller will reconcile labels"
	I0318 13:11:00.970713    2404 command_runner.go:130] ! I0318 13:10:00.551368       1 controllermanager.go:642] "Started controller" controller="node-lifecycle-controller"
	I0318 13:11:00.970713    2404 command_runner.go:130] ! I0318 13:10:00.551557       1 core.go:228] "Warning: configure-cloud-routes is set, but no cloud provider specified. Will not configure cloud provider routes."
	I0318 13:11:00.970713    2404 command_runner.go:130] ! I0318 13:10:00.551931       1 controllermanager.go:620] "Warning: skipping controller" controller="node-route-controller"
	I0318 13:11:00.970713    2404 command_runner.go:130] ! I0318 13:10:00.551452       1 node_lifecycle_controller.go:465] "Sending events to api server"
	I0318 13:11:00.970713    2404 command_runner.go:130] ! I0318 13:10:00.553190       1 node_lifecycle_controller.go:476] "Starting node controller"
	I0318 13:11:00.970713    2404 command_runner.go:130] ! I0318 13:10:00.553856       1 shared_informer.go:311] Waiting for caches to sync for taint
	I0318 13:11:00.970713    2404 command_runner.go:130] ! I0318 13:10:00.554970       1 controllermanager.go:642] "Started controller" controller="persistentvolumeclaim-protection-controller"
	I0318 13:11:00.970713    2404 command_runner.go:130] ! I0318 13:10:00.555558       1 pvc_protection_controller.go:102] "Starting PVC protection controller"
	I0318 13:11:00.970713    2404 command_runner.go:130] ! I0318 13:10:00.555718       1 shared_informer.go:311] Waiting for caches to sync for PVC protection
	I0318 13:11:00.970713    2404 command_runner.go:130] ! I0318 13:10:00.558545       1 controllermanager.go:642] "Started controller" controller="endpointslice-controller"
	I0318 13:11:00.970713    2404 command_runner.go:130] ! I0318 13:10:00.558805       1 endpointslice_controller.go:264] "Starting endpoint slice controller"
	I0318 13:11:00.970713    2404 command_runner.go:130] ! I0318 13:10:00.558956       1 shared_informer.go:311] Waiting for caches to sync for endpoint_slice
	I0318 13:11:00.970713    2404 command_runner.go:130] ! W0318 13:10:00.765746       1 shared_informer.go:593] resyncPeriod 13h51m37.636447347s is smaller than resyncCheckPeriod 22h45m34.293132222s and the informer has already started. Changing it to 22h45m34.293132222s
	I0318 13:11:00.970713    2404 command_runner.go:130] ! I0318 13:10:00.765905       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="serviceaccounts"
	I0318 13:11:00.970713    2404 command_runner.go:130] ! I0318 13:10:00.766015       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="cronjobs.batch"
	I0318 13:11:00.970713    2404 command_runner.go:130] ! I0318 13:10:00.766141       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="daemonsets.apps"
	I0318 13:11:00.970713    2404 command_runner.go:130] ! I0318 13:10:00.766231       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="jobs.batch"
	I0318 13:11:00.970713    2404 command_runner.go:130] ! I0318 13:10:00.767946       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="networkpolicies.networking.k8s.io"
	I0318 13:11:00.970713    2404 command_runner.go:130] ! I0318 13:10:00.768138       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="poddisruptionbudgets.policy"
	I0318 13:11:00.970713    2404 command_runner.go:130] ! I0318 13:10:00.768175       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="csistoragecapacities.storage.k8s.io"
	I0318 13:11:00.970713    2404 command_runner.go:130] ! I0318 13:10:00.768271       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="limitranges"
	I0318 13:11:00.970713    2404 command_runner.go:130] ! I0318 13:10:00.768411       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="controllerrevisions.apps"
	I0318 13:11:00.970713    2404 command_runner.go:130] ! I0318 13:10:00.768529       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="horizontalpodautoscalers.autoscaling"
	I0318 13:11:00.970713    2404 command_runner.go:130] ! I0318 13:10:00.768565       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="ingresses.networking.k8s.io"
	I0318 13:11:00.970713    2404 command_runner.go:130] ! I0318 13:10:00.768633       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="endpointslices.discovery.k8s.io"
	I0318 13:11:00.970713    2404 command_runner.go:130] ! W0318 13:10:00.768841       1 shared_informer.go:593] resyncPeriod 17h39m7.901162259s is smaller than resyncCheckPeriod 22h45m34.293132222s and the informer has already started. Changing it to 22h45m34.293132222s
	I0318 13:11:00.970713    2404 command_runner.go:130] ! I0318 13:10:00.769020       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="podtemplates"
	I0318 13:11:00.970713    2404 command_runner.go:130] ! I0318 13:10:00.769077       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="replicasets.apps"
	I0318 13:11:00.970713    2404 command_runner.go:130] ! I0318 13:10:00.769115       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="deployments.apps"
	I0318 13:11:00.970713    2404 command_runner.go:130] ! I0318 13:10:00.769206       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="rolebindings.rbac.authorization.k8s.io"
	I0318 13:11:00.970713    2404 command_runner.go:130] ! I0318 13:10:00.769280       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="leases.coordination.k8s.io"
	I0318 13:11:00.971235    2404 command_runner.go:130] ! I0318 13:10:00.769427       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="endpoints"
	I0318 13:11:00.971235    2404 command_runner.go:130] ! I0318 13:10:00.769509       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="statefulsets.apps"
	I0318 13:11:00.971235    2404 command_runner.go:130] ! I0318 13:10:00.769668       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="roles.rbac.authorization.k8s.io"
	I0318 13:11:00.971235    2404 command_runner.go:130] ! I0318 13:10:00.769816       1 resource_quota_controller.go:294] "Starting resource quota controller"
	I0318 13:11:00.971235    2404 command_runner.go:130] ! I0318 13:10:00.769832       1 shared_informer.go:311] Waiting for caches to sync for resource quota
	I0318 13:11:00.971235    2404 command_runner.go:130] ! I0318 13:10:00.769855       1 resource_quota_monitor.go:305] "QuotaMonitor running"
	I0318 13:11:00.971235    2404 command_runner.go:130] ! I0318 13:10:00.769714       1 controllermanager.go:642] "Started controller" controller="resourcequota-controller"
	I0318 13:11:00.971235    2404 command_runner.go:130] ! I0318 13:10:00.906184       1 controllermanager.go:642] "Started controller" controller="garbage-collector-controller"
	I0318 13:11:00.971382    2404 command_runner.go:130] ! I0318 13:10:00.906404       1 garbagecollector.go:155] "Starting controller" controller="garbagecollector"
	I0318 13:11:00.971382    2404 command_runner.go:130] ! I0318 13:10:00.906702       1 shared_informer.go:311] Waiting for caches to sync for garbage collector
	I0318 13:11:00.971382    2404 command_runner.go:130] ! I0318 13:10:00.906740       1 graph_builder.go:294] "Running" component="GraphBuilder"
	I0318 13:11:00.971382    2404 command_runner.go:130] ! I0318 13:10:00.956245       1 controllermanager.go:642] "Started controller" controller="job-controller"
	I0318 13:11:00.971382    2404 command_runner.go:130] ! I0318 13:10:00.956457       1 job_controller.go:226] "Starting job controller"
	I0318 13:11:00.971461    2404 command_runner.go:130] ! I0318 13:10:00.956765       1 shared_informer.go:311] Waiting for caches to sync for job
	I0318 13:11:00.971485    2404 command_runner.go:130] ! I0318 13:10:01.056144       1 controllermanager.go:642] "Started controller" controller="persistentvolume-protection-controller"
	I0318 13:11:00.971512    2404 command_runner.go:130] ! I0318 13:10:01.056251       1 pv_protection_controller.go:78] "Starting PV protection controller"
	I0318 13:11:00.971607    2404 command_runner.go:130] ! I0318 13:10:01.056576       1 shared_informer.go:311] Waiting for caches to sync for PV protection
	I0318 13:11:00.971642    2404 command_runner.go:130] ! I0318 13:10:01.156303       1 controllermanager.go:642] "Started controller" controller="ephemeral-volume-controller"
	I0318 13:11:00.971642    2404 command_runner.go:130] ! I0318 13:10:01.156762       1 controller.go:169] "Starting ephemeral volume controller"
	I0318 13:11:00.971642    2404 command_runner.go:130] ! I0318 13:10:01.156852       1 shared_informer.go:311] Waiting for caches to sync for ephemeral
	I0318 13:11:00.971718    2404 command_runner.go:130] ! I0318 13:10:01.205282       1 controllermanager.go:642] "Started controller" controller="endpoints-controller"
	I0318 13:11:00.971718    2404 command_runner.go:130] ! I0318 13:10:01.205353       1 endpoints_controller.go:174] "Starting endpoint controller"
	I0318 13:11:00.971747    2404 command_runner.go:130] ! I0318 13:10:01.205368       1 shared_informer.go:311] Waiting for caches to sync for endpoint
	I0318 13:11:00.971747    2404 command_runner.go:130] ! I0318 13:10:01.256513       1 controllermanager.go:642] "Started controller" controller="statefulset-controller"
	I0318 13:11:00.971747    2404 command_runner.go:130] ! I0318 13:10:01.256828       1 stateful_set.go:161] "Starting stateful set controller"
	I0318 13:11:00.971747    2404 command_runner.go:130] ! I0318 13:10:01.256867       1 shared_informer.go:311] Waiting for caches to sync for stateful set
	I0318 13:11:00.971747    2404 command_runner.go:130] ! I0318 13:10:01.306581       1 controllermanager.go:642] "Started controller" controller="replicaset-controller"
	I0318 13:11:00.971817    2404 command_runner.go:130] ! I0318 13:10:01.306969       1 replica_set.go:214] "Starting controller" name="replicaset"
	I0318 13:11:00.971866    2404 command_runner.go:130] ! I0318 13:10:01.307156       1 shared_informer.go:311] Waiting for caches to sync for ReplicaSet
	I0318 13:11:00.971866    2404 command_runner.go:130] ! I0318 13:10:01.317298       1 shared_informer.go:311] Waiting for caches to sync for resource quota
	I0318 13:11:00.971866    2404 command_runner.go:130] ! I0318 13:10:01.349149       1 shared_informer.go:318] Caches are synced for TTL after finished
	I0318 13:11:00.971866    2404 command_runner.go:130] ! I0318 13:10:01.369957       1 shared_informer.go:311] Waiting for caches to sync for garbage collector
	I0318 13:11:00.971866    2404 command_runner.go:130] ! I0318 13:10:01.371629       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-894400\" does not exist"
	I0318 13:11:00.971866    2404 command_runner.go:130] ! I0318 13:10:01.371840       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-894400-m02\" does not exist"
	I0318 13:11:00.971866    2404 command_runner.go:130] ! I0318 13:10:01.372556       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-894400-m02"
	I0318 13:11:00.971866    2404 command_runner.go:130] ! I0318 13:10:01.372879       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-894400-m03\" does not exist"
	I0318 13:11:00.971866    2404 command_runner.go:130] ! I0318 13:10:01.373004       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-894400-m02"
	I0318 13:11:00.971866    2404 command_runner.go:130] ! I0318 13:10:01.380690       1 shared_informer.go:318] Caches are synced for certificate-csrapproving
	I0318 13:11:00.971866    2404 command_runner.go:130] ! I0318 13:10:01.383858       1 shared_informer.go:318] Caches are synced for namespace
	I0318 13:11:00.971866    2404 command_runner.go:130] ! I0318 13:10:01.390400       1 shared_informer.go:318] Caches are synced for GC
	I0318 13:11:00.971866    2404 command_runner.go:130] ! I0318 13:10:01.391669       1 shared_informer.go:318] Caches are synced for cronjob
	I0318 13:11:00.971866    2404 command_runner.go:130] ! I0318 13:10:01.398208       1 shared_informer.go:318] Caches are synced for expand
	I0318 13:11:00.971866    2404 command_runner.go:130] ! I0318 13:10:01.403691       1 shared_informer.go:318] Caches are synced for ClusterRoleAggregator
	I0318 13:11:00.971866    2404 command_runner.go:130] ! I0318 13:10:01.406154       1 shared_informer.go:318] Caches are synced for endpoint
	I0318 13:11:00.971866    2404 command_runner.go:130] ! I0318 13:10:01.407387       1 shared_informer.go:318] Caches are synced for ReplicaSet
	I0318 13:11:00.971866    2404 command_runner.go:130] ! I0318 13:10:01.407463       1 shared_informer.go:318] Caches are synced for disruption
	I0318 13:11:00.971866    2404 command_runner.go:130] ! I0318 13:10:01.411470       1 shared_informer.go:318] Caches are synced for deployment
	I0318 13:11:00.971866    2404 command_runner.go:130] ! I0318 13:10:01.415591       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-serving
	I0318 13:11:00.971866    2404 command_runner.go:130] ! I0318 13:10:01.419985       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0318 13:11:00.971866    2404 command_runner.go:130] ! I0318 13:10:01.420028       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-client
	I0318 13:11:00.971866    2404 command_runner.go:130] ! I0318 13:10:01.422567       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-legacy-unknown
	I0318 13:11:00.971866    2404 command_runner.go:130] ! I0318 13:10:01.426386       1 shared_informer.go:318] Caches are synced for node
	I0318 13:11:00.971866    2404 command_runner.go:130] ! I0318 13:10:01.426502       1 range_allocator.go:174] "Sending events to api server"
	I0318 13:11:00.971866    2404 command_runner.go:130] ! I0318 13:10:01.426637       1 range_allocator.go:178] "Starting range CIDR allocator"
	I0318 13:11:00.971866    2404 command_runner.go:130] ! I0318 13:10:01.426705       1 shared_informer.go:311] Waiting for caches to sync for cidrallocator
	I0318 13:11:00.971866    2404 command_runner.go:130] ! I0318 13:10:01.426892       1 shared_informer.go:318] Caches are synced for cidrallocator
	I0318 13:11:00.971866    2404 command_runner.go:130] ! I0318 13:10:01.426546       1 shared_informer.go:318] Caches are synced for attach detach
	I0318 13:11:00.971866    2404 command_runner.go:130] ! I0318 13:10:01.429986       1 shared_informer.go:318] Caches are synced for service account
	I0318 13:11:00.971866    2404 command_runner.go:130] ! I0318 13:10:01.430014       1 shared_informer.go:318] Caches are synced for crt configmap
	I0318 13:11:00.971866    2404 command_runner.go:130] ! I0318 13:10:01.433506       1 shared_informer.go:318] Caches are synced for daemon sets
	I0318 13:11:00.971866    2404 command_runner.go:130] ! I0318 13:10:01.437710       1 shared_informer.go:318] Caches are synced for TTL
	I0318 13:11:00.971866    2404 command_runner.go:130] ! I0318 13:10:01.445429       1 shared_informer.go:318] Caches are synced for HPA
	I0318 13:11:00.971866    2404 command_runner.go:130] ! I0318 13:10:01.448863       1 shared_informer.go:318] Caches are synced for bootstrap_signer
	I0318 13:11:00.972427    2404 command_runner.go:130] ! I0318 13:10:01.451599       1 shared_informer.go:318] Caches are synced for ReplicationController
	I0318 13:11:00.972427    2404 command_runner.go:130] ! I0318 13:10:01.454157       1 shared_informer.go:318] Caches are synced for taint
	I0318 13:11:00.972427    2404 command_runner.go:130] ! I0318 13:10:01.454304       1 node_lifecycle_controller.go:1225] "Initializing eviction metric for zone" zone=""
	I0318 13:11:00.972427    2404 command_runner.go:130] ! I0318 13:10:01.454496       1 taint_manager.go:205] "Starting NoExecuteTaintManager"
	I0318 13:11:00.972427    2404 command_runner.go:130] ! I0318 13:10:01.454532       1 taint_manager.go:210] "Sending events to api server"
	I0318 13:11:00.972427    2404 command_runner.go:130] ! I0318 13:10:01.455374       1 event.go:307] "Event occurred" object="multinode-894400" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-894400 event: Registered Node multinode-894400 in Controller"
	I0318 13:11:00.972427    2404 command_runner.go:130] ! I0318 13:10:01.455390       1 event.go:307] "Event occurred" object="multinode-894400-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-894400-m02 event: Registered Node multinode-894400-m02 in Controller"
	I0318 13:11:00.972427    2404 command_runner.go:130] ! I0318 13:10:01.455400       1 event.go:307] "Event occurred" object="multinode-894400-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-894400-m03 event: Registered Node multinode-894400-m03 in Controller"
	I0318 13:11:00.972576    2404 command_runner.go:130] ! I0318 13:10:01.456700       1 shared_informer.go:318] Caches are synced for PV protection
	I0318 13:11:00.972576    2404 command_runner.go:130] ! I0318 13:10:01.456719       1 shared_informer.go:318] Caches are synced for PVC protection
	I0318 13:11:00.972576    2404 command_runner.go:130] ! I0318 13:10:01.457835       1 shared_informer.go:318] Caches are synced for stateful set
	I0318 13:11:00.972576    2404 command_runner.go:130] ! I0318 13:10:01.457861       1 shared_informer.go:318] Caches are synced for job
	I0318 13:11:00.972576    2404 command_runner.go:130] ! I0318 13:10:01.458132       1 shared_informer.go:318] Caches are synced for ephemeral
	I0318 13:11:00.972576    2404 command_runner.go:130] ! I0318 13:10:01.499926       1 shared_informer.go:318] Caches are synced for persistent volume
	I0318 13:11:00.972701    2404 command_runner.go:130] ! I0318 13:10:01.502022       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-894400"
	I0318 13:11:00.972701    2404 command_runner.go:130] ! I0318 13:10:01.502582       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-894400-m02"
	I0318 13:11:00.972701    2404 command_runner.go:130] ! I0318 13:10:01.502665       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-894400-m03"
	I0318 13:11:00.972701    2404 command_runner.go:130] ! I0318 13:10:01.505439       1 node_lifecycle_controller.go:1071] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I0318 13:11:00.972701    2404 command_runner.go:130] ! I0318 13:10:01.518153       1 shared_informer.go:318] Caches are synced for resource quota
	I0318 13:11:00.972782    2404 command_runner.go:130] ! I0318 13:10:01.524442       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="116.887006ms"
	I0318 13:11:00.972782    2404 command_runner.go:130] ! I0318 13:10:01.526447       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="62.302µs"
	I0318 13:11:00.972782    2404 command_runner.go:130] ! I0318 13:10:01.532190       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="124.57225ms"
	I0318 13:11:00.972782    2404 command_runner.go:130] ! I0318 13:10:01.532535       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="61.501µs"
	I0318 13:11:00.972858    2404 command_runner.go:130] ! I0318 13:10:01.536870       1 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring
	I0318 13:11:00.972858    2404 command_runner.go:130] ! I0318 13:10:01.559571       1 shared_informer.go:318] Caches are synced for endpoint_slice
	I0318 13:11:00.972858    2404 command_runner.go:130] ! I0318 13:10:01.576497       1 shared_informer.go:318] Caches are synced for resource quota
	I0318 13:11:00.972858    2404 command_runner.go:130] ! I0318 13:10:01.970420       1 shared_informer.go:318] Caches are synced for garbage collector
	I0318 13:11:00.972923    2404 command_runner.go:130] ! I0318 13:10:02.008120       1 shared_informer.go:318] Caches are synced for garbage collector
	I0318 13:11:00.972923    2404 command_runner.go:130] ! I0318 13:10:02.008146       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0318 13:11:00.972923    2404 command_runner.go:130] ! I0318 13:10:23.798396       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-894400-m02"
	I0318 13:11:00.972979    2404 command_runner.go:130] ! I0318 13:10:26.538088       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68-456tm" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod kube-system/coredns-5dd5756b68-456tm"
	I0318 13:11:00.973018    2404 command_runner.go:130] ! I0318 13:10:26.538124       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6-c2997" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-5b5d89c9d6-c2997"
	I0318 13:11:00.973018    2404 command_runner.go:130] ! I0318 13:10:26.538134       1 event.go:307] "Event occurred" object="kube-system/storage-provisioner" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod kube-system/storage-provisioner"
	I0318 13:11:00.973018    2404 command_runner.go:130] ! I0318 13:10:41.556645       1 event.go:307] "Event occurred" object="multinode-894400-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-894400-m02 status is now: NodeNotReady"
	I0318 13:11:00.973018    2404 command_runner.go:130] ! I0318 13:10:41.569274       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6-8btgf" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0318 13:11:00.973123    2404 command_runner.go:130] ! I0318 13:10:41.592766       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="22.447202ms"
	I0318 13:11:00.973123    2404 command_runner.go:130] ! I0318 13:10:41.593427       1 event.go:307] "Event occurred" object="kube-system/kindnet-k5lpg" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0318 13:11:00.973123    2404 command_runner.go:130] ! I0318 13:10:41.595199       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="39.101µs"
	I0318 13:11:00.973123    2404 command_runner.go:130] ! I0318 13:10:41.617007       1 event.go:307] "Event occurred" object="kube-system/kube-proxy-8bdmn" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0318 13:11:00.973123    2404 command_runner.go:130] ! I0318 13:10:54.102255       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="18.438427ms"
	I0318 13:11:00.973227    2404 command_runner.go:130] ! I0318 13:10:54.102713       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="266.302µs"
	I0318 13:11:00.973227    2404 command_runner.go:130] ! I0318 13:10:54.115993       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="210.701µs"
	I0318 13:11:00.973227    2404 command_runner.go:130] ! I0318 13:10:55.131550       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="19.807636ms"
	I0318 13:11:00.973227    2404 command_runner.go:130] ! I0318 13:10:55.131763       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="44.301µs"
	I0318 13:11:00.985831    2404 logs.go:123] Gathering logs for kubelet ...
	I0318 13:11:00.985831    2404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:11:01.012456    2404 command_runner.go:130] > Mar 18 13:09:39 multinode-894400 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0318 13:11:01.012456    2404 command_runner.go:130] > Mar 18 13:09:39 multinode-894400 kubelet[1399]: I0318 13:09:39.912330    1399 server.go:467] "Kubelet version" kubeletVersion="v1.28.4"
	I0318 13:11:01.012456    2404 command_runner.go:130] > Mar 18 13:09:39 multinode-894400 kubelet[1399]: I0318 13:09:39.913472    1399 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 13:11:01.012456    2404 command_runner.go:130] > Mar 18 13:09:39 multinode-894400 kubelet[1399]: I0318 13:09:39.914280    1399 server.go:895] "Client rotation is on, will bootstrap in background"
	I0318 13:11:01.012568    2404 command_runner.go:130] > Mar 18 13:09:39 multinode-894400 kubelet[1399]: E0318 13:09:39.914469    1399 run.go:74] "command failed" err="failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory"
	I0318 13:11:01.012568    2404 command_runner.go:130] > Mar 18 13:09:39 multinode-894400 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	I0318 13:11:01.012568    2404 command_runner.go:130] > Mar 18 13:09:39 multinode-894400 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	I0318 13:11:01.012747    2404 command_runner.go:130] > Mar 18 13:09:40 multinode-894400 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
	I0318 13:11:01.012747    2404 command_runner.go:130] > Mar 18 13:09:40 multinode-894400 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I0318 13:11:01.012747    2404 command_runner.go:130] > Mar 18 13:09:40 multinode-894400 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0318 13:11:01.012813    2404 command_runner.go:130] > Mar 18 13:09:40 multinode-894400 kubelet[1455]: I0318 13:09:40.661100    1455 server.go:467] "Kubelet version" kubeletVersion="v1.28.4"
	I0318 13:11:01.012813    2404 command_runner.go:130] > Mar 18 13:09:40 multinode-894400 kubelet[1455]: I0318 13:09:40.661586    1455 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 13:11:01.012813    2404 command_runner.go:130] > Mar 18 13:09:40 multinode-894400 kubelet[1455]: I0318 13:09:40.662255    1455 server.go:895] "Client rotation is on, will bootstrap in background"
	I0318 13:11:01.012882    2404 command_runner.go:130] > Mar 18 13:09:40 multinode-894400 kubelet[1455]: E0318 13:09:40.662383    1455 run.go:74] "command failed" err="failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory"
	I0318 13:11:01.012882    2404 command_runner.go:130] > Mar 18 13:09:40 multinode-894400 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	I0318 13:11:01.012957    2404 command_runner.go:130] > Mar 18 13:09:40 multinode-894400 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	I0318 13:11:01.012957    2404 command_runner.go:130] > Mar 18 13:09:40 multinode-894400 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I0318 13:11:01.012957    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0318 13:11:01.013029    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: I0318 13:09:42.774439    1532 server.go:467] "Kubelet version" kubeletVersion="v1.28.4"
	I0318 13:11:01.013029    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: I0318 13:09:42.775083    1532 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 13:11:01.013029    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: I0318 13:09:42.775946    1532 server.go:895] "Client rotation is on, will bootstrap in background"
	I0318 13:11:01.013104    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: I0318 13:09:42.785429    1532 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	I0318 13:11:01.013104    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: I0318 13:09:42.801370    1532 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0318 13:11:01.013104    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: I0318 13:09:42.849790    1532 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
	I0318 13:11:01.013104    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: I0318 13:09:42.851652    1532 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
	I0318 13:11:01.013215    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: I0318 13:09:42.851916    1532 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
	I0318 13:11:01.013215    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: I0318 13:09:42.851957    1532 topology_manager.go:138] "Creating topology manager with none policy"
	I0318 13:11:01.013276    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: I0318 13:09:42.851967    1532 container_manager_linux.go:301] "Creating device plugin manager"
	I0318 13:11:01.013276    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: I0318 13:09:42.853347    1532 state_mem.go:36] "Initialized new in-memory state store"
	I0318 13:11:01.013276    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: I0318 13:09:42.855331    1532 kubelet.go:393] "Attempting to sync node with API server"
	I0318 13:11:01.013328    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: I0318 13:09:42.855456    1532 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests"
	I0318 13:11:01.013328    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: I0318 13:09:42.856520    1532 kubelet.go:309] "Adding apiserver pod source"
	I0318 13:11:01.013328    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: I0318 13:09:42.856554    1532 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
	I0318 13:11:01.013387    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: W0318 13:09:42.859153    1532 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-894400&limit=500&resourceVersion=0": dial tcp 172.30.130.156:8443: connect: connection refused
	I0318 13:11:01.013387    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: E0318 13:09:42.859647    1532 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-894400&limit=500&resourceVersion=0": dial tcp 172.30.130.156:8443: connect: connection refused
	I0318 13:11:01.013439    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: W0318 13:09:42.860993    1532 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.30.130.156:8443: connect: connection refused
	I0318 13:11:01.013497    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: E0318 13:09:42.861168    1532 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.30.130.156:8443: connect: connection refused
	I0318 13:11:01.013497    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: I0318 13:09:42.872782    1532 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="docker" version="25.0.4" apiVersion="v1"
	I0318 13:11:01.013547    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: W0318 13:09:42.875640    1532 probe.go:268] Flexvolume plugin directory at /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
	I0318 13:11:01.013547    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: I0318 13:09:42.876823    1532 server.go:1232] "Started kubelet"
	I0318 13:11:01.013547    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: I0318 13:09:42.878282    1532 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
	I0318 13:11:01.013619    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: I0318 13:09:42.879215    1532 server.go:462] "Adding debug handlers to kubelet server"
	I0318 13:11:01.013619    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: I0318 13:09:42.882881    1532 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10
	I0318 13:11:01.013668    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: I0318 13:09:42.883660    1532 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
	I0318 13:11:01.013668    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: I0318 13:09:42.878365    1532 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
	I0318 13:11:01.013774    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: E0318 13:09:42.886734    1532 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"multinode-894400.17bddddee5b23bca", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"multinode-894400", UID:"multinode-894400", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"multinode-894400"}, FirstTimestamp:time.Date(2024, time.March, 18, 13, 9, 42, 876797898, time.Local), LastTimestamp:time.Date(2024, time.March, 18, 13, 9, 42, 876797898, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"multinode-894400"}': 'Post "https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events": dial tcp 172.30.130.156:8443: connect: connection refused'(may retry after sleeping)
	I0318 13:11:01.013774    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: I0318 13:09:42.886969    1532 volume_manager.go:291] "Starting Kubelet Volume Manager"
	I0318 13:11:01.013829    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: I0318 13:09:42.887086    1532 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
	I0318 13:11:01.013829    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: W0318 13:09:42.907405    1532 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.30.130.156:8443: connect: connection refused
	I0318 13:11:01.013878    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: E0318 13:09:42.907883    1532 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.30.130.156:8443: connect: connection refused
	I0318 13:11:01.013878    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: E0318 13:09:42.910785    1532 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-894400?timeout=10s\": dial tcp 172.30.130.156:8443: connect: connection refused" interval="200ms"
	I0318 13:11:01.014059    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: I0318 13:09:42.959085    1532 reconciler_new.go:29] "Reconciler: start to sync state"
	I0318 13:11:01.014109    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: I0318 13:09:42.981490    1532 cpu_manager.go:214] "Starting CPU manager" policy="none"
	I0318 13:11:01.014165    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: I0318 13:09:42.981531    1532 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
	I0318 13:11:01.014165    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: I0318 13:09:42.981561    1532 state_mem.go:36] "Initialized new in-memory state store"
	I0318 13:11:01.014165    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: I0318 13:09:42.982644    1532 state_mem.go:88] "Updated default CPUSet" cpuSet=""
	I0318 13:11:01.014235    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: I0318 13:09:42.982700    1532 state_mem.go:96] "Updated CPUSet assignments" assignments={}
	I0318 13:11:01.014235    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: I0318 13:09:42.982728    1532 policy_none.go:49] "None policy: Start"
	I0318 13:11:01.014235    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: I0318 13:09:42.989705    1532 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
	I0318 13:11:01.014235    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: I0318 13:09:43.002857    1532 memory_manager.go:169] "Starting memorymanager" policy="None"
	I0318 13:11:01.014306    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: I0318 13:09:43.003620    1532 state_mem.go:35] "Initializing new in-memory state store"
	I0318 13:11:01.014306    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: I0318 13:09:43.004623    1532 state_mem.go:75] "Updated machine memory state"
	I0318 13:11:01.014306    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: I0318 13:09:43.006120    1532 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
	I0318 13:11:01.014358    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: I0318 13:09:43.007397    1532 status_manager.go:217] "Starting to sync pod status with apiserver"
	I0318 13:11:01.014358    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: I0318 13:09:43.008604    1532 kubelet.go:2303] "Starting kubelet main sync loop"
	I0318 13:11:01.014358    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: E0318 13:09:43.008971    1532 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
	I0318 13:11:01.014420    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: I0318 13:09:43.016115    1532 kubelet_node_status.go:70] "Attempting to register node" node="multinode-894400"
	I0318 13:11:01.014420    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: E0318 13:09:43.018685    1532 iptables.go:575] "Could not set up iptables canary" err=<
	I0318 13:11:01.014420    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	I0318 13:11:01.014483    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	I0318 13:11:01.014483    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]:         Perhaps ip6tables or your kernel needs to be upgraded.
	I0318 13:11:01.014483    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	I0318 13:11:01.014544    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: I0318 13:09:43.021241    1532 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
	I0318 13:11:01.014544    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: W0318 13:09:43.022840    1532 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.30.130.156:8443: connect: connection refused
	I0318 13:11:01.014607    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: E0318 13:09:43.022916    1532 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.30.130.156:8443: connect: connection refused
	I0318 13:11:01.014607    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: E0318 13:09:43.022979    1532 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.30.130.156:8443: connect: connection refused" node="multinode-894400"
	I0318 13:11:01.014674    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: I0318 13:09:43.023116    1532 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
	I0318 13:11:01.014674    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: E0318 13:09:43.041923    1532 eviction_manager.go:258] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"multinode-894400\" not found"
	I0318 13:11:01.014727    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: E0318 13:09:43.112352    1532 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-894400?timeout=10s\": dial tcp 172.30.130.156:8443: connect: connection refused" interval="400ms"
	I0318 13:11:01.014727    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: I0318 13:09:43.113553    1532 topology_manager.go:215] "Topology Admit Handler" podUID="1c745e9b917877b1ff3c90ed02e9a79a" podNamespace="kube-system" podName="kube-scheduler-multinode-894400"
	I0318 13:11:01.014787    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: I0318 13:09:43.126661    1532 topology_manager.go:215] "Topology Admit Handler" podUID="6096c2227c4230453f65f86ebdcd0d95" podNamespace="kube-system" podName="kube-apiserver-multinode-894400"
	I0318 13:11:01.014856    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: I0318 13:09:43.137838    1532 topology_manager.go:215] "Topology Admit Handler" podUID="d340aced56ba169ecac1e3ac58ad57fe" podNamespace="kube-system" podName="kube-controller-manager-multinode-894400"
	I0318 13:11:01.014856    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: I0318 13:09:43.154701    1532 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5485f509825d9272a84959cbcfbb4f0187be886867ba7bac76fa00a35e34bdd1"
	I0318 13:11:01.014930    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: I0318 13:09:43.154826    1532 topology_manager.go:215] "Topology Admit Handler" podUID="743a549b698f93b8586a236f83c90556" podNamespace="kube-system" podName="etcd-multinode-894400"
	I0318 13:11:01.014930    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: I0318 13:09:43.171660    1532 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d001e299e996b74f090c73f4e98605ef0d323a96826d23424e724ca8e7fe466a"
	I0318 13:11:01.014982    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: I0318 13:09:43.171681    1532 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="60e9cd749c8f67d0bc24596b26b654cf85a82055f89e14c4a14a4e9342f5fc9f"
	I0318 13:11:01.014982    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: I0318 13:09:43.171704    1532 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="acffce2e73842c3e46177a77ddd5a8d308b51daf062cac439cc487cc863c4226"
	I0318 13:11:01.015041    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: I0318 13:09:43.171714    1532 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="265b39e386cfa82eef9715aba314fbf8a9292776816cf86ed4099004698cb320"
	I0318 13:11:01.015041    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: I0318 13:09:43.171723    1532 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="220884cbf1f5b852987c5a28277a4914502f0623413c284054afa92791494c50"
	I0318 13:11:01.015095    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: I0318 13:09:43.171731    1532 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a47b1fb60692cee0c4ed89ecc511fa046c0873051f7daf026f1c5c6a3dfd7352"
	I0318 13:11:01.015095    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: I0318 13:09:43.172283    1532 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="82710777e700c4f2e71da911834959efc480f8ba2a526049f0f6c238947c5146"
	I0318 13:11:01.015154    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: I0318 13:09:43.186382    1532 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a23c1189be7c39f22c8a59ba41b198519894c9783ecad8dcda79fb8a76f05254"
	I0318 13:11:01.015154    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: I0318 13:09:43.231617    1532 kubelet_node_status.go:70] "Attempting to register node" node="multinode-894400"
	I0318 13:11:01.015207    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: E0318 13:09:43.233479    1532 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.30.130.156:8443: connect: connection refused" node="multinode-894400"
	I0318 13:11:01.015207    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: I0318 13:09:43.267903    1532 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1c745e9b917877b1ff3c90ed02e9a79a-kubeconfig\") pod \"kube-scheduler-multinode-894400\" (UID: \"1c745e9b917877b1ff3c90ed02e9a79a\") " pod="kube-system/kube-scheduler-multinode-894400"
	I0318 13:11:01.015280    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: I0318 13:09:43.268106    1532 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6096c2227c4230453f65f86ebdcd0d95-ca-certs\") pod \"kube-apiserver-multinode-894400\" (UID: \"6096c2227c4230453f65f86ebdcd0d95\") " pod="kube-system/kube-apiserver-multinode-894400"
	I0318 13:11:01.015329    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: I0318 13:09:43.268214    1532 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d340aced56ba169ecac1e3ac58ad57fe-ca-certs\") pod \"kube-controller-manager-multinode-894400\" (UID: \"d340aced56ba169ecac1e3ac58ad57fe\") " pod="kube-system/kube-controller-manager-multinode-894400"
	I0318 13:11:01.015386    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: I0318 13:09:43.268242    1532 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d340aced56ba169ecac1e3ac58ad57fe-kubeconfig\") pod \"kube-controller-manager-multinode-894400\" (UID: \"d340aced56ba169ecac1e3ac58ad57fe\") " pod="kube-system/kube-controller-manager-multinode-894400"
	I0318 13:11:01.015444    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: I0318 13:09:43.268269    1532 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d340aced56ba169ecac1e3ac58ad57fe-usr-share-ca-certificates\") pod \"kube-controller-manager-multinode-894400\" (UID: \"d340aced56ba169ecac1e3ac58ad57fe\") " pod="kube-system/kube-controller-manager-multinode-894400"
	I0318 13:11:01.015444    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: I0318 13:09:43.268295    1532 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/743a549b698f93b8586a236f83c90556-etcd-certs\") pod \"etcd-multinode-894400\" (UID: \"743a549b698f93b8586a236f83c90556\") " pod="kube-system/etcd-multinode-894400"
	I0318 13:11:01.015500    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: I0318 13:09:43.268330    1532 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/743a549b698f93b8586a236f83c90556-etcd-data\") pod \"etcd-multinode-894400\" (UID: \"743a549b698f93b8586a236f83c90556\") " pod="kube-system/etcd-multinode-894400"
	I0318 13:11:01.015551    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: I0318 13:09:43.268361    1532 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6096c2227c4230453f65f86ebdcd0d95-k8s-certs\") pod \"kube-apiserver-multinode-894400\" (UID: \"6096c2227c4230453f65f86ebdcd0d95\") " pod="kube-system/kube-apiserver-multinode-894400"
	I0318 13:11:01.015609    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: I0318 13:09:43.268423    1532 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6096c2227c4230453f65f86ebdcd0d95-usr-share-ca-certificates\") pod \"kube-apiserver-multinode-894400\" (UID: \"6096c2227c4230453f65f86ebdcd0d95\") " pod="kube-system/kube-apiserver-multinode-894400"
	I0318 13:11:01.015609    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: I0318 13:09:43.268445    1532 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d340aced56ba169ecac1e3ac58ad57fe-flexvolume-dir\") pod \"kube-controller-manager-multinode-894400\" (UID: \"d340aced56ba169ecac1e3ac58ad57fe\") " pod="kube-system/kube-controller-manager-multinode-894400"
	I0318 13:11:01.015668    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: I0318 13:09:43.268537    1532 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d340aced56ba169ecac1e3ac58ad57fe-k8s-certs\") pod \"kube-controller-manager-multinode-894400\" (UID: \"d340aced56ba169ecac1e3ac58ad57fe\") " pod="kube-system/kube-controller-manager-multinode-894400"
	I0318 13:11:01.015726    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: E0318 13:09:43.513563    1532 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-894400?timeout=10s\": dial tcp 172.30.130.156:8443: connect: connection refused" interval="800ms"
	I0318 13:11:01.015726    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: I0318 13:09:43.656950    1532 kubelet_node_status.go:70] "Attempting to register node" node="multinode-894400"
	I0318 13:11:01.015777    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: E0318 13:09:43.658595    1532 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.30.130.156:8443: connect: connection refused" node="multinode-894400"
	I0318 13:11:01.015777    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: W0318 13:09:43.917173    1532 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.30.130.156:8443: connect: connection refused
	I0318 13:11:01.015834    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: E0318 13:09:43.917511    1532 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.30.130.156:8443: connect: connection refused
	I0318 13:11:01.015834    2404 command_runner.go:130] > Mar 18 13:09:44 multinode-894400 kubelet[1532]: W0318 13:09:44.022640    1532 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.30.130.156:8443: connect: connection refused
	I0318 13:11:01.015892    2404 command_runner.go:130] > Mar 18 13:09:44 multinode-894400 kubelet[1532]: E0318 13:09:44.022973    1532 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.30.130.156:8443: connect: connection refused
	I0318 13:11:01.015947    2404 command_runner.go:130] > Mar 18 13:09:44 multinode-894400 kubelet[1532]: W0318 13:09:44.114653    1532 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-894400&limit=500&resourceVersion=0": dial tcp 172.30.130.156:8443: connect: connection refused
	I0318 13:11:01.015998    2404 command_runner.go:130] > Mar 18 13:09:44 multinode-894400 kubelet[1532]: E0318 13:09:44.114784    1532 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-894400&limit=500&resourceVersion=0": dial tcp 172.30.130.156:8443: connect: connection refused
	I0318 13:11:01.015998    2404 command_runner.go:130] > Mar 18 13:09:44 multinode-894400 kubelet[1532]: I0318 13:09:44.229821    1532 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="354f3c44a34fcbd9ca7826066f2b4c40b3a6551aa632389815c5d24e85e0472b"
	I0318 13:11:01.016054    2404 command_runner.go:130] > Mar 18 13:09:44 multinode-894400 kubelet[1532]: E0318 13:09:44.315351    1532 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-894400?timeout=10s\": dial tcp 172.30.130.156:8443: connect: connection refused" interval="1.6s"
	I0318 13:11:01.016104    2404 command_runner.go:130] > Mar 18 13:09:44 multinode-894400 kubelet[1532]: W0318 13:09:44.368370    1532 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.30.130.156:8443: connect: connection refused
	I0318 13:11:01.016104    2404 command_runner.go:130] > Mar 18 13:09:44 multinode-894400 kubelet[1532]: E0318 13:09:44.368575    1532 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.30.130.156:8443: connect: connection refused
	I0318 13:11:01.016161    2404 command_runner.go:130] > Mar 18 13:09:44 multinode-894400 kubelet[1532]: I0318 13:09:44.495686    1532 kubelet_node_status.go:70] "Attempting to register node" node="multinode-894400"
	I0318 13:11:01.016161    2404 command_runner.go:130] > Mar 18 13:09:44 multinode-894400 kubelet[1532]: E0318 13:09:44.496847    1532 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.30.130.156:8443: connect: connection refused" node="multinode-894400"
	I0318 13:11:01.016211    2404 command_runner.go:130] > Mar 18 13:09:46 multinode-894400 kubelet[1532]: I0318 13:09:46.112867    1532 kubelet_node_status.go:70] "Attempting to register node" node="multinode-894400"
	I0318 13:11:01.016211    2404 command_runner.go:130] > Mar 18 13:09:48 multinode-894400 kubelet[1532]: I0318 13:09:48.454296    1532 kubelet_node_status.go:108] "Node was previously registered" node="multinode-894400"
	I0318 13:11:01.016211    2404 command_runner.go:130] > Mar 18 13:09:48 multinode-894400 kubelet[1532]: I0318 13:09:48.454504    1532 kubelet_node_status.go:73] "Successfully registered node" node="multinode-894400"
	I0318 13:11:01.016267    2404 command_runner.go:130] > Mar 18 13:09:48 multinode-894400 kubelet[1532]: I0318 13:09:48.466215    1532 kuberuntime_manager.go:1528] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	I0318 13:11:01.016267    2404 command_runner.go:130] > Mar 18 13:09:48 multinode-894400 kubelet[1532]: I0318 13:09:48.467399    1532 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	I0318 13:11:01.016321    2404 command_runner.go:130] > Mar 18 13:09:48 multinode-894400 kubelet[1532]: I0318 13:09:48.481710    1532 setters.go:552] "Node became not ready" node="multinode-894400" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-03-18T13:09:48Z","lastTransitionTime":"2024-03-18T13:09:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"}
	I0318 13:11:01.016321    2404 command_runner.go:130] > Mar 18 13:09:48 multinode-894400 kubelet[1532]: I0318 13:09:48.865400    1532 apiserver.go:52] "Watching apiserver"
	I0318 13:11:01.016377    2404 command_runner.go:130] > Mar 18 13:09:48 multinode-894400 kubelet[1532]: I0318 13:09:48.872433    1532 topology_manager.go:215] "Topology Admit Handler" podUID="0afe25f8-cbd6-412b-8698-7b547d1d49ca" podNamespace="kube-system" podName="kube-proxy-mc5tv"
	I0318 13:11:01.016377    2404 command_runner.go:130] > Mar 18 13:09:48 multinode-894400 kubelet[1532]: I0318 13:09:48.872584    1532 topology_manager.go:215] "Topology Admit Handler" podUID="0161d239-2d85-4246-b2fa-6c7374f2ecd6" podNamespace="kube-system" podName="kindnet-hhsxh"
	I0318 13:11:01.016429    2404 command_runner.go:130] > Mar 18 13:09:48 multinode-894400 kubelet[1532]: I0318 13:09:48.872794    1532 topology_manager.go:215] "Topology Admit Handler" podUID="1a018c55-846b-4dc2-992c-dc8fd82a6c67" podNamespace="kube-system" podName="coredns-5dd5756b68-456tm"
	I0318 13:11:01.016429    2404 command_runner.go:130] > Mar 18 13:09:48 multinode-894400 kubelet[1532]: I0318 13:09:48.872862    1532 topology_manager.go:215] "Topology Admit Handler" podUID="219bafbc-d807-44cf-9927-e4957f36ad70" podNamespace="kube-system" podName="storage-provisioner"
	I0318 13:11:01.016485    2404 command_runner.go:130] > Mar 18 13:09:48 multinode-894400 kubelet[1532]: I0318 13:09:48.872944    1532 topology_manager.go:215] "Topology Admit Handler" podUID="171cbfa4-4415-4169-b25d-ff5905fd513f" podNamespace="default" podName="busybox-5b5d89c9d6-c2997"
	I0318 13:11:01.016485    2404 command_runner.go:130] > Mar 18 13:09:48 multinode-894400 kubelet[1532]: E0318 13:09:48.873248    1532 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-c2997" podUID="171cbfa4-4415-4169-b25d-ff5905fd513f"
	I0318 13:11:01.016536    2404 command_runner.go:130] > Mar 18 13:09:48 multinode-894400 kubelet[1532]: I0318 13:09:48.873593    1532 kubelet.go:1872] "Trying to delete pod" pod="kube-system/kube-apiserver-multinode-894400" podUID="62aca0ea-36b0-4841-9616-61448f45e04a"
	I0318 13:11:01.016592    2404 command_runner.go:130] > Mar 18 13:09:48 multinode-894400 kubelet[1532]: I0318 13:09:48.873861    1532 kubelet.go:1872] "Trying to delete pod" pod="kube-system/etcd-multinode-894400" podUID="672a85d9-7526-4870-a33a-eac509ef3c3f"
	I0318 13:11:01.016592    2404 command_runner.go:130] > Mar 18 13:09:48 multinode-894400 kubelet[1532]: E0318 13:09:48.876751    1532 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-456tm" podUID="1a018c55-846b-4dc2-992c-dc8fd82a6c67"
	I0318 13:11:01.016697    2404 command_runner.go:130] > Mar 18 13:09:48 multinode-894400 kubelet[1532]: I0318 13:09:48.889248    1532 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
	I0318 13:11:01.016697    2404 command_runner.go:130] > Mar 18 13:09:48 multinode-894400 kubelet[1532]: I0318 13:09:48.964782    1532 kubelet.go:1877] "Deleted mirror pod because it is outdated" pod="kube-system/kube-apiserver-multinode-894400"
	I0318 13:11:01.016747    2404 command_runner.go:130] > Mar 18 13:09:48 multinode-894400 kubelet[1532]: I0318 13:09:48.965861    1532 kubelet.go:1877] "Deleted mirror pod because it is outdated" pod="kube-system/etcd-multinode-894400"
	I0318 13:11:01.016747    2404 command_runner.go:130] > Mar 18 13:09:48 multinode-894400 kubelet[1532]: I0318 13:09:48.966709    1532 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0afe25f8-cbd6-412b-8698-7b547d1d49ca-lib-modules\") pod \"kube-proxy-mc5tv\" (UID: \"0afe25f8-cbd6-412b-8698-7b547d1d49ca\") " pod="kube-system/kube-proxy-mc5tv"
	I0318 13:11:01.016804    2404 command_runner.go:130] > Mar 18 13:09:48 multinode-894400 kubelet[1532]: I0318 13:09:48.966761    1532 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/219bafbc-d807-44cf-9927-e4957f36ad70-tmp\") pod \"storage-provisioner\" (UID: \"219bafbc-d807-44cf-9927-e4957f36ad70\") " pod="kube-system/storage-provisioner"
	I0318 13:11:01.016804    2404 command_runner.go:130] > Mar 18 13:09:48 multinode-894400 kubelet[1532]: I0318 13:09:48.966802    1532 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/0161d239-2d85-4246-b2fa-6c7374f2ecd6-cni-cfg\") pod \"kindnet-hhsxh\" (UID: \"0161d239-2d85-4246-b2fa-6c7374f2ecd6\") " pod="kube-system/kindnet-hhsxh"
	I0318 13:11:01.016856    2404 command_runner.go:130] > Mar 18 13:09:48 multinode-894400 kubelet[1532]: I0318 13:09:48.966847    1532 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0afe25f8-cbd6-412b-8698-7b547d1d49ca-xtables-lock\") pod \"kube-proxy-mc5tv\" (UID: \"0afe25f8-cbd6-412b-8698-7b547d1d49ca\") " pod="kube-system/kube-proxy-mc5tv"
	I0318 13:11:01.016912    2404 command_runner.go:130] > Mar 18 13:09:48 multinode-894400 kubelet[1532]: I0318 13:09:48.966908    1532 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0161d239-2d85-4246-b2fa-6c7374f2ecd6-xtables-lock\") pod \"kindnet-hhsxh\" (UID: \"0161d239-2d85-4246-b2fa-6c7374f2ecd6\") " pod="kube-system/kindnet-hhsxh"
	I0318 13:11:01.016912    2404 command_runner.go:130] > Mar 18 13:09:48 multinode-894400 kubelet[1532]: I0318 13:09:48.966943    1532 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0161d239-2d85-4246-b2fa-6c7374f2ecd6-lib-modules\") pod \"kindnet-hhsxh\" (UID: \"0161d239-2d85-4246-b2fa-6c7374f2ecd6\") " pod="kube-system/kindnet-hhsxh"
	I0318 13:11:01.016985    2404 command_runner.go:130] > Mar 18 13:09:48 multinode-894400 kubelet[1532]: E0318 13:09:48.968339    1532 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0318 13:11:01.017042    2404 command_runner.go:130] > Mar 18 13:09:48 multinode-894400 kubelet[1532]: E0318 13:09:48.968477    1532 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1a018c55-846b-4dc2-992c-dc8fd82a6c67-config-volume podName:1a018c55-846b-4dc2-992c-dc8fd82a6c67 nodeName:}" failed. No retries permitted until 2024-03-18 13:09:49.468437755 +0000 UTC m=+6.779274091 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/1a018c55-846b-4dc2-992c-dc8fd82a6c67-config-volume") pod "coredns-5dd5756b68-456tm" (UID: "1a018c55-846b-4dc2-992c-dc8fd82a6c67") : object "kube-system"/"coredns" not registered
	I0318 13:11:01.017042    2404 command_runner.go:130] > Mar 18 13:09:49 multinode-894400 kubelet[1532]: E0318 13:09:49.000742    1532 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0318 13:11:01.017094    2404 command_runner.go:130] > Mar 18 13:09:49 multinode-894400 kubelet[1532]: E0318 13:09:49.000961    1532 projected.go:198] Error preparing data for projected volume kube-api-access-jwqqg for pod default/busybox-5b5d89c9d6-c2997: object "default"/"kube-root-ca.crt" not registered
	I0318 13:11:01.017094    2404 command_runner.go:130] > Mar 18 13:09:49 multinode-894400 kubelet[1532]: E0318 13:09:49.001575    1532 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/171cbfa4-4415-4169-b25d-ff5905fd513f-kube-api-access-jwqqg podName:171cbfa4-4415-4169-b25d-ff5905fd513f nodeName:}" failed. No retries permitted until 2024-03-18 13:09:49.501554367 +0000 UTC m=+6.812390603 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-jwqqg" (UniqueName: "kubernetes.io/projected/171cbfa4-4415-4169-b25d-ff5905fd513f-kube-api-access-jwqqg") pod "busybox-5b5d89c9d6-c2997" (UID: "171cbfa4-4415-4169-b25d-ff5905fd513f") : object "default"/"kube-root-ca.crt" not registered
	I0318 13:11:01.017094    2404 command_runner.go:130] > Mar 18 13:09:49 multinode-894400 kubelet[1532]: I0318 13:09:49.048369    1532 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="c396fd459c503d2e9464c73cc841d3d8" path="/var/lib/kubelet/pods/c396fd459c503d2e9464c73cc841d3d8/volumes"
	I0318 13:11:01.017094    2404 command_runner.go:130] > Mar 18 13:09:49 multinode-894400 kubelet[1532]: I0318 13:09:49.051334    1532 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="decc1d942b4d81359bb79c0349ffe9bb" path="/var/lib/kubelet/pods/decc1d942b4d81359bb79c0349ffe9bb/volumes"
	I0318 13:11:01.017238    2404 command_runner.go:130] > Mar 18 13:09:49 multinode-894400 kubelet[1532]: I0318 13:09:49.248524    1532 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-multinode-894400" podStartSLOduration=0.2483832 podCreationTimestamp="2024-03-18 13:09:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-03-18 13:09:49.21292898 +0000 UTC m=+6.523765316" watchObservedRunningTime="2024-03-18 13:09:49.2483832 +0000 UTC m=+6.559219436"
	I0318 13:11:01.017317    2404 command_runner.go:130] > Mar 18 13:09:49 multinode-894400 kubelet[1532]: I0318 13:09:49.285710    1532 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/etcd-multinode-894400" podStartSLOduration=0.285684326 podCreationTimestamp="2024-03-18 13:09:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-03-18 13:09:49.252285313 +0000 UTC m=+6.563121649" watchObservedRunningTime="2024-03-18 13:09:49.285684326 +0000 UTC m=+6.596520662"
	I0318 13:11:01.017317    2404 command_runner.go:130] > Mar 18 13:09:49 multinode-894400 kubelet[1532]: E0318 13:09:49.471617    1532 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0318 13:11:01.017376    2404 command_runner.go:130] > Mar 18 13:09:49 multinode-894400 kubelet[1532]: E0318 13:09:49.472236    1532 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1a018c55-846b-4dc2-992c-dc8fd82a6c67-config-volume podName:1a018c55-846b-4dc2-992c-dc8fd82a6c67 nodeName:}" failed. No retries permitted until 2024-03-18 13:09:50.471713653 +0000 UTC m=+7.782549889 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/1a018c55-846b-4dc2-992c-dc8fd82a6c67-config-volume") pod "coredns-5dd5756b68-456tm" (UID: "1a018c55-846b-4dc2-992c-dc8fd82a6c67") : object "kube-system"/"coredns" not registered
	I0318 13:11:01.017417    2404 command_runner.go:130] > Mar 18 13:09:49 multinode-894400 kubelet[1532]: E0318 13:09:49.573240    1532 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0318 13:11:01.017493    2404 command_runner.go:130] > Mar 18 13:09:49 multinode-894400 kubelet[1532]: E0318 13:09:49.573347    1532 projected.go:198] Error preparing data for projected volume kube-api-access-jwqqg for pod default/busybox-5b5d89c9d6-c2997: object "default"/"kube-root-ca.crt" not registered
	I0318 13:11:01.017562    2404 command_runner.go:130] > Mar 18 13:09:49 multinode-894400 kubelet[1532]: E0318 13:09:49.573459    1532 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/171cbfa4-4415-4169-b25d-ff5905fd513f-kube-api-access-jwqqg podName:171cbfa4-4415-4169-b25d-ff5905fd513f nodeName:}" failed. No retries permitted until 2024-03-18 13:09:50.573441997 +0000 UTC m=+7.884278233 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-jwqqg" (UniqueName: "kubernetes.io/projected/171cbfa4-4415-4169-b25d-ff5905fd513f-kube-api-access-jwqqg") pod "busybox-5b5d89c9d6-c2997" (UID: "171cbfa4-4415-4169-b25d-ff5905fd513f") : object "default"/"kube-root-ca.crt" not registered
	I0318 13:11:01.017594    2404 command_runner.go:130] > Mar 18 13:09:49 multinode-894400 kubelet[1532]: I0318 13:09:49.813611    1532 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="41035eff3b7db97e56ba25be141bf644acea4ddcb9d92ba3b421e6fa855a09af"
	I0318 13:11:01.017625    2404 command_runner.go:130] > Mar 18 13:09:50 multinode-894400 kubelet[1532]: I0318 13:09:50.142572    1532 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="86d74dec812cfd4842de7473d45ff320bfb2283d09d2d05eee29e45cbde9f5e9"
	I0318 13:11:01.017646    2404 command_runner.go:130] > Mar 18 13:09:50 multinode-894400 kubelet[1532]: I0318 13:09:50.219092    1532 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a9f21749669fe37a2d5bee9e7f45bf84eca6ae55870de8ac18bc774db7f81643"
	I0318 13:11:01.017684    2404 command_runner.go:130] > Mar 18 13:09:50 multinode-894400 kubelet[1532]: E0318 13:09:50.481085    1532 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0318 13:11:01.017721    2404 command_runner.go:130] > Mar 18 13:09:50 multinode-894400 kubelet[1532]: E0318 13:09:50.481271    1532 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1a018c55-846b-4dc2-992c-dc8fd82a6c67-config-volume podName:1a018c55-846b-4dc2-992c-dc8fd82a6c67 nodeName:}" failed. No retries permitted until 2024-03-18 13:09:52.48125246 +0000 UTC m=+9.792088696 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/1a018c55-846b-4dc2-992c-dc8fd82a6c67-config-volume") pod "coredns-5dd5756b68-456tm" (UID: "1a018c55-846b-4dc2-992c-dc8fd82a6c67") : object "kube-system"/"coredns" not registered
	I0318 13:11:01.017765    2404 command_runner.go:130] > Mar 18 13:09:50 multinode-894400 kubelet[1532]: E0318 13:09:50.581790    1532 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0318 13:11:01.017765    2404 command_runner.go:130] > Mar 18 13:09:50 multinode-894400 kubelet[1532]: E0318 13:09:50.581835    1532 projected.go:198] Error preparing data for projected volume kube-api-access-jwqqg for pod default/busybox-5b5d89c9d6-c2997: object "default"/"kube-root-ca.crt" not registered
	I0318 13:11:01.017833    2404 command_runner.go:130] > Mar 18 13:09:50 multinode-894400 kubelet[1532]: E0318 13:09:50.581885    1532 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/171cbfa4-4415-4169-b25d-ff5905fd513f-kube-api-access-jwqqg podName:171cbfa4-4415-4169-b25d-ff5905fd513f nodeName:}" failed. No retries permitted until 2024-03-18 13:09:52.5818703 +0000 UTC m=+9.892706536 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-jwqqg" (UniqueName: "kubernetes.io/projected/171cbfa4-4415-4169-b25d-ff5905fd513f-kube-api-access-jwqqg") pod "busybox-5b5d89c9d6-c2997" (UID: "171cbfa4-4415-4169-b25d-ff5905fd513f") : object "default"/"kube-root-ca.crt" not registered
	I0318 13:11:01.017833    2404 command_runner.go:130] > Mar 18 13:09:51 multinode-894400 kubelet[1532]: E0318 13:09:51.011273    1532 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-456tm" podUID="1a018c55-846b-4dc2-992c-dc8fd82a6c67"
	I0318 13:11:01.017833    2404 command_runner.go:130] > Mar 18 13:09:51 multinode-894400 kubelet[1532]: E0318 13:09:51.012015    1532 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-c2997" podUID="171cbfa4-4415-4169-b25d-ff5905fd513f"
	I0318 13:11:01.017833    2404 command_runner.go:130] > Mar 18 13:09:52 multinode-894400 kubelet[1532]: E0318 13:09:52.499973    1532 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0318 13:11:01.017833    2404 command_runner.go:130] > Mar 18 13:09:52 multinode-894400 kubelet[1532]: E0318 13:09:52.500149    1532 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1a018c55-846b-4dc2-992c-dc8fd82a6c67-config-volume podName:1a018c55-846b-4dc2-992c-dc8fd82a6c67 nodeName:}" failed. No retries permitted until 2024-03-18 13:09:56.500131973 +0000 UTC m=+13.810968209 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/1a018c55-846b-4dc2-992c-dc8fd82a6c67-config-volume") pod "coredns-5dd5756b68-456tm" (UID: "1a018c55-846b-4dc2-992c-dc8fd82a6c67") : object "kube-system"/"coredns" not registered
	I0318 13:11:01.017833    2404 command_runner.go:130] > Mar 18 13:09:52 multinode-894400 kubelet[1532]: E0318 13:09:52.601982    1532 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0318 13:11:01.017833    2404 command_runner.go:130] > Mar 18 13:09:52 multinode-894400 kubelet[1532]: E0318 13:09:52.602006    1532 projected.go:198] Error preparing data for projected volume kube-api-access-jwqqg for pod default/busybox-5b5d89c9d6-c2997: object "default"/"kube-root-ca.crt" not registered
	I0318 13:11:01.017833    2404 command_runner.go:130] > Mar 18 13:09:52 multinode-894400 kubelet[1532]: E0318 13:09:52.602087    1532 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/171cbfa4-4415-4169-b25d-ff5905fd513f-kube-api-access-jwqqg podName:171cbfa4-4415-4169-b25d-ff5905fd513f nodeName:}" failed. No retries permitted until 2024-03-18 13:09:56.602073317 +0000 UTC m=+13.912909553 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-jwqqg" (UniqueName: "kubernetes.io/projected/171cbfa4-4415-4169-b25d-ff5905fd513f-kube-api-access-jwqqg") pod "busybox-5b5d89c9d6-c2997" (UID: "171cbfa4-4415-4169-b25d-ff5905fd513f") : object "default"/"kube-root-ca.crt" not registered
	I0318 13:11:01.017833    2404 command_runner.go:130] > Mar 18 13:09:53 multinode-894400 kubelet[1532]: E0318 13:09:53.009672    1532 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-456tm" podUID="1a018c55-846b-4dc2-992c-dc8fd82a6c67"
	I0318 13:11:01.017833    2404 command_runner.go:130] > Mar 18 13:09:53 multinode-894400 kubelet[1532]: E0318 13:09:53.010317    1532 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-c2997" podUID="171cbfa4-4415-4169-b25d-ff5905fd513f"
	I0318 13:11:01.017833    2404 command_runner.go:130] > Mar 18 13:09:55 multinode-894400 kubelet[1532]: E0318 13:09:55.010917    1532 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-456tm" podUID="1a018c55-846b-4dc2-992c-dc8fd82a6c67"
	I0318 13:11:01.017833    2404 command_runner.go:130] > Mar 18 13:09:55 multinode-894400 kubelet[1532]: E0318 13:09:55.011786    1532 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-c2997" podUID="171cbfa4-4415-4169-b25d-ff5905fd513f"
	I0318 13:11:01.017833    2404 command_runner.go:130] > Mar 18 13:09:56 multinode-894400 kubelet[1532]: E0318 13:09:56.539408    1532 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0318 13:11:01.018375    2404 command_runner.go:130] > Mar 18 13:09:56 multinode-894400 kubelet[1532]: E0318 13:09:56.539534    1532 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1a018c55-846b-4dc2-992c-dc8fd82a6c67-config-volume podName:1a018c55-846b-4dc2-992c-dc8fd82a6c67 nodeName:}" failed. No retries permitted until 2024-03-18 13:10:04.539515204 +0000 UTC m=+21.850351440 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/1a018c55-846b-4dc2-992c-dc8fd82a6c67-config-volume") pod "coredns-5dd5756b68-456tm" (UID: "1a018c55-846b-4dc2-992c-dc8fd82a6c67") : object "kube-system"/"coredns" not registered
	I0318 13:11:01.018447    2404 command_runner.go:130] > Mar 18 13:09:56 multinode-894400 kubelet[1532]: E0318 13:09:56.639919    1532 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0318 13:11:01.018447    2404 command_runner.go:130] > Mar 18 13:09:56 multinode-894400 kubelet[1532]: E0318 13:09:56.639948    1532 projected.go:198] Error preparing data for projected volume kube-api-access-jwqqg for pod default/busybox-5b5d89c9d6-c2997: object "default"/"kube-root-ca.crt" not registered
	I0318 13:11:01.018531    2404 command_runner.go:130] > Mar 18 13:09:56 multinode-894400 kubelet[1532]: E0318 13:09:56.639998    1532 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/171cbfa4-4415-4169-b25d-ff5905fd513f-kube-api-access-jwqqg podName:171cbfa4-4415-4169-b25d-ff5905fd513f nodeName:}" failed. No retries permitted until 2024-03-18 13:10:04.639981843 +0000 UTC m=+21.950818079 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-jwqqg" (UniqueName: "kubernetes.io/projected/171cbfa4-4415-4169-b25d-ff5905fd513f-kube-api-access-jwqqg") pod "busybox-5b5d89c9d6-c2997" (UID: "171cbfa4-4415-4169-b25d-ff5905fd513f") : object "default"/"kube-root-ca.crt" not registered
	I0318 13:11:01.018585    2404 command_runner.go:130] > Mar 18 13:09:57 multinode-894400 kubelet[1532]: E0318 13:09:57.009521    1532 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-456tm" podUID="1a018c55-846b-4dc2-992c-dc8fd82a6c67"
	I0318 13:11:01.018640    2404 command_runner.go:130] > Mar 18 13:09:57 multinode-894400 kubelet[1532]: E0318 13:09:57.010257    1532 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-c2997" podUID="171cbfa4-4415-4169-b25d-ff5905fd513f"
	I0318 13:11:01.018640    2404 command_runner.go:130] > Mar 18 13:09:59 multinode-894400 kubelet[1532]: E0318 13:09:59.011021    1532 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-456tm" podUID="1a018c55-846b-4dc2-992c-dc8fd82a6c67"
	I0318 13:11:01.018698    2404 command_runner.go:130] > Mar 18 13:09:59 multinode-894400 kubelet[1532]: E0318 13:09:59.011361    1532 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-c2997" podUID="171cbfa4-4415-4169-b25d-ff5905fd513f"
	I0318 13:11:01.018698    2404 command_runner.go:130] > Mar 18 13:10:01 multinode-894400 kubelet[1532]: E0318 13:10:01.009167    1532 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-456tm" podUID="1a018c55-846b-4dc2-992c-dc8fd82a6c67"
	I0318 13:11:01.018748    2404 command_runner.go:130] > Mar 18 13:10:01 multinode-894400 kubelet[1532]: E0318 13:10:01.009678    1532 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-c2997" podUID="171cbfa4-4415-4169-b25d-ff5905fd513f"
	I0318 13:11:01.018804    2404 command_runner.go:130] > Mar 18 13:10:03 multinode-894400 kubelet[1532]: E0318 13:10:03.010168    1532 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-456tm" podUID="1a018c55-846b-4dc2-992c-dc8fd82a6c67"
	I0318 13:11:01.018854    2404 command_runner.go:130] > Mar 18 13:10:03 multinode-894400 kubelet[1532]: E0318 13:10:03.011736    1532 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-c2997" podUID="171cbfa4-4415-4169-b25d-ff5905fd513f"
	I0318 13:11:01.018854    2404 command_runner.go:130] > Mar 18 13:10:04 multinode-894400 kubelet[1532]: E0318 13:10:04.603257    1532 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0318 13:11:01.018909    2404 command_runner.go:130] > Mar 18 13:10:04 multinode-894400 kubelet[1532]: E0318 13:10:04.603387    1532 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1a018c55-846b-4dc2-992c-dc8fd82a6c67-config-volume podName:1a018c55-846b-4dc2-992c-dc8fd82a6c67 nodeName:}" failed. No retries permitted until 2024-03-18 13:10:20.60337037 +0000 UTC m=+37.914206606 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/1a018c55-846b-4dc2-992c-dc8fd82a6c67-config-volume") pod "coredns-5dd5756b68-456tm" (UID: "1a018c55-846b-4dc2-992c-dc8fd82a6c67") : object "kube-system"/"coredns" not registered
	I0318 13:11:01.018960    2404 command_runner.go:130] > Mar 18 13:10:04 multinode-894400 kubelet[1532]: E0318 13:10:04.704132    1532 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0318 13:11:01.018960    2404 command_runner.go:130] > Mar 18 13:10:04 multinode-894400 kubelet[1532]: E0318 13:10:04.704169    1532 projected.go:198] Error preparing data for projected volume kube-api-access-jwqqg for pod default/busybox-5b5d89c9d6-c2997: object "default"/"kube-root-ca.crt" not registered
	I0318 13:11:01.019034    2404 command_runner.go:130] > Mar 18 13:10:04 multinode-894400 kubelet[1532]: E0318 13:10:04.704219    1532 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/171cbfa4-4415-4169-b25d-ff5905fd513f-kube-api-access-jwqqg podName:171cbfa4-4415-4169-b25d-ff5905fd513f nodeName:}" failed. No retries permitted until 2024-03-18 13:10:20.704204798 +0000 UTC m=+38.015041034 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-jwqqg" (UniqueName: "kubernetes.io/projected/171cbfa4-4415-4169-b25d-ff5905fd513f-kube-api-access-jwqqg") pod "busybox-5b5d89c9d6-c2997" (UID: "171cbfa4-4415-4169-b25d-ff5905fd513f") : object "default"/"kube-root-ca.crt" not registered
	I0318 13:11:01.019082    2404 command_runner.go:130] > Mar 18 13:10:05 multinode-894400 kubelet[1532]: E0318 13:10:05.009461    1532 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-456tm" podUID="1a018c55-846b-4dc2-992c-dc8fd82a6c67"
	I0318 13:11:01.019127    2404 command_runner.go:130] > Mar 18 13:10:05 multinode-894400 kubelet[1532]: E0318 13:10:05.010204    1532 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-c2997" podUID="171cbfa4-4415-4169-b25d-ff5905fd513f"
	I0318 13:11:01.019154    2404 command_runner.go:130] > Mar 18 13:10:07 multinode-894400 kubelet[1532]: E0318 13:10:07.009925    1532 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-456tm" podUID="1a018c55-846b-4dc2-992c-dc8fd82a6c67"
	I0318 13:11:01.019193    2404 command_runner.go:130] > Mar 18 13:10:07 multinode-894400 kubelet[1532]: E0318 13:10:07.010942    1532 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-c2997" podUID="171cbfa4-4415-4169-b25d-ff5905fd513f"
	I0318 13:11:01.019263    2404 command_runner.go:130] > Mar 18 13:10:09 multinode-894400 kubelet[1532]: E0318 13:10:09.010506    1532 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-456tm" podUID="1a018c55-846b-4dc2-992c-dc8fd82a6c67"
	I0318 13:11:01.019312    2404 command_runner.go:130] > Mar 18 13:10:09 multinode-894400 kubelet[1532]: E0318 13:10:09.011883    1532 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-c2997" podUID="171cbfa4-4415-4169-b25d-ff5905fd513f"
	I0318 13:11:01.019312    2404 command_runner.go:130] > Mar 18 13:10:11 multinode-894400 kubelet[1532]: E0318 13:10:11.009145    1532 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-c2997" podUID="171cbfa4-4415-4169-b25d-ff5905fd513f"
	I0318 13:11:01.019364    2404 command_runner.go:130] > Mar 18 13:10:11 multinode-894400 kubelet[1532]: E0318 13:10:11.011730    1532 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-456tm" podUID="1a018c55-846b-4dc2-992c-dc8fd82a6c67"
	I0318 13:11:01.019424    2404 command_runner.go:130] > Mar 18 13:10:13 multinode-894400 kubelet[1532]: E0318 13:10:13.010103    1532 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-456tm" podUID="1a018c55-846b-4dc2-992c-dc8fd82a6c67"
	I0318 13:11:01.019424    2404 command_runner.go:130] > Mar 18 13:10:13 multinode-894400 kubelet[1532]: E0318 13:10:13.010921    1532 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-c2997" podUID="171cbfa4-4415-4169-b25d-ff5905fd513f"
	I0318 13:11:01.019480    2404 command_runner.go:130] > Mar 18 13:10:15 multinode-894400 kubelet[1532]: E0318 13:10:15.009361    1532 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-456tm" podUID="1a018c55-846b-4dc2-992c-dc8fd82a6c67"
	I0318 13:11:01.019480    2404 command_runner.go:130] > Mar 18 13:10:15 multinode-894400 kubelet[1532]: E0318 13:10:15.010565    1532 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-c2997" podUID="171cbfa4-4415-4169-b25d-ff5905fd513f"
	I0318 13:11:01.019480    2404 command_runner.go:130] > Mar 18 13:10:17 multinode-894400 kubelet[1532]: E0318 13:10:17.009688    1532 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-456tm" podUID="1a018c55-846b-4dc2-992c-dc8fd82a6c67"
	I0318 13:11:01.019480    2404 command_runner.go:130] > Mar 18 13:10:17 multinode-894400 kubelet[1532]: E0318 13:10:17.010200    1532 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-c2997" podUID="171cbfa4-4415-4169-b25d-ff5905fd513f"
	I0318 13:11:01.019480    2404 command_runner.go:130] > Mar 18 13:10:19 multinode-894400 kubelet[1532]: E0318 13:10:19.010187    1532 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-c2997" podUID="171cbfa4-4415-4169-b25d-ff5905fd513f"
	I0318 13:11:01.019480    2404 command_runner.go:130] > Mar 18 13:10:19 multinode-894400 kubelet[1532]: E0318 13:10:19.010730    1532 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-456tm" podUID="1a018c55-846b-4dc2-992c-dc8fd82a6c67"
	I0318 13:11:01.019480    2404 command_runner.go:130] > Mar 18 13:10:20 multinode-894400 kubelet[1532]: E0318 13:10:20.639546    1532 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0318 13:11:01.019480    2404 command_runner.go:130] > Mar 18 13:10:20 multinode-894400 kubelet[1532]: E0318 13:10:20.639747    1532 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1a018c55-846b-4dc2-992c-dc8fd82a6c67-config-volume podName:1a018c55-846b-4dc2-992c-dc8fd82a6c67 nodeName:}" failed. No retries permitted until 2024-03-18 13:10:52.639723825 +0000 UTC m=+69.950560161 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/1a018c55-846b-4dc2-992c-dc8fd82a6c67-config-volume") pod "coredns-5dd5756b68-456tm" (UID: "1a018c55-846b-4dc2-992c-dc8fd82a6c67") : object "kube-system"/"coredns" not registered
	I0318 13:11:01.019480    2404 command_runner.go:130] > Mar 18 13:10:20 multinode-894400 kubelet[1532]: E0318 13:10:20.740353    1532 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0318 13:11:01.019480    2404 command_runner.go:130] > Mar 18 13:10:20 multinode-894400 kubelet[1532]: E0318 13:10:20.740517    1532 projected.go:198] Error preparing data for projected volume kube-api-access-jwqqg for pod default/busybox-5b5d89c9d6-c2997: object "default"/"kube-root-ca.crt" not registered
	I0318 13:11:01.019480    2404 command_runner.go:130] > Mar 18 13:10:20 multinode-894400 kubelet[1532]: E0318 13:10:20.740585    1532 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/171cbfa4-4415-4169-b25d-ff5905fd513f-kube-api-access-jwqqg podName:171cbfa4-4415-4169-b25d-ff5905fd513f nodeName:}" failed. No retries permitted until 2024-03-18 13:10:52.740566824 +0000 UTC m=+70.051403160 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-jwqqg" (UniqueName: "kubernetes.io/projected/171cbfa4-4415-4169-b25d-ff5905fd513f-kube-api-access-jwqqg") pod "busybox-5b5d89c9d6-c2997" (UID: "171cbfa4-4415-4169-b25d-ff5905fd513f") : object "default"/"kube-root-ca.crt" not registered
	I0318 13:11:01.019480    2404 command_runner.go:130] > Mar 18 13:10:21 multinode-894400 kubelet[1532]: E0318 13:10:21.010015    1532 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-456tm" podUID="1a018c55-846b-4dc2-992c-dc8fd82a6c67"
	I0318 13:11:01.019480    2404 command_runner.go:130] > Mar 18 13:10:21 multinode-894400 kubelet[1532]: E0318 13:10:21.011108    1532 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-c2997" podUID="171cbfa4-4415-4169-b25d-ff5905fd513f"
	I0318 13:11:01.019480    2404 command_runner.go:130] > Mar 18 13:10:21 multinode-894400 kubelet[1532]: I0318 13:10:21.647969    1532 scope.go:117] "RemoveContainer" containerID="a2c499223090cc38a7b425469621fb6c8dbc443ab7eb0d5841f1fdcea2922366"
	I0318 13:11:01.019480    2404 command_runner.go:130] > Mar 18 13:10:21 multinode-894400 kubelet[1532]: I0318 13:10:21.651387    1532 scope.go:117] "RemoveContainer" containerID="46c0cf90d385f0a29de6e795bc03f2d1837ea1819ef2389992a26aaf77e63264"
	I0318 13:11:01.019480    2404 command_runner.go:130] > Mar 18 13:10:21 multinode-894400 kubelet[1532]: E0318 13:10:21.652104    1532 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(219bafbc-d807-44cf-9927-e4957f36ad70)\"" pod="kube-system/storage-provisioner" podUID="219bafbc-d807-44cf-9927-e4957f36ad70"
	I0318 13:11:01.020133    2404 command_runner.go:130] > Mar 18 13:10:23 multinode-894400 kubelet[1532]: E0318 13:10:23.010116    1532 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-456tm" podUID="1a018c55-846b-4dc2-992c-dc8fd82a6c67"
	I0318 13:11:01.020237    2404 command_runner.go:130] > Mar 18 13:10:23 multinode-894400 kubelet[1532]: E0318 13:10:23.010816    1532 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-c2997" podUID="171cbfa4-4415-4169-b25d-ff5905fd513f"
	I0318 13:11:01.020237    2404 command_runner.go:130] > Mar 18 13:10:23 multinode-894400 kubelet[1532]: I0318 13:10:23.777913    1532 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	I0318 13:11:01.020237    2404 command_runner.go:130] > Mar 18 13:10:35 multinode-894400 kubelet[1532]: I0318 13:10:35.009532    1532 scope.go:117] "RemoveContainer" containerID="46c0cf90d385f0a29de6e795bc03f2d1837ea1819ef2389992a26aaf77e63264"
	I0318 13:11:01.020237    2404 command_runner.go:130] > Mar 18 13:10:43 multinode-894400 kubelet[1532]: I0318 13:10:43.012571    1532 scope.go:117] "RemoveContainer" containerID="56d1819beb10ed198593d8a369f601faf82bf81ff1aecdbffe7114cd1265351b"
	I0318 13:11:01.020237    2404 command_runner.go:130] > Mar 18 13:10:43 multinode-894400 kubelet[1532]: E0318 13:10:43.030354    1532 iptables.go:575] "Could not set up iptables canary" err=<
	I0318 13:11:01.020237    2404 command_runner.go:130] > Mar 18 13:10:43 multinode-894400 kubelet[1532]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	I0318 13:11:01.020237    2404 command_runner.go:130] > Mar 18 13:10:43 multinode-894400 kubelet[1532]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	I0318 13:11:01.020237    2404 command_runner.go:130] > Mar 18 13:10:43 multinode-894400 kubelet[1532]:         Perhaps ip6tables or your kernel needs to be upgraded.
	I0318 13:11:01.020237    2404 command_runner.go:130] > Mar 18 13:10:43 multinode-894400 kubelet[1532]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	I0318 13:11:01.020237    2404 command_runner.go:130] > Mar 18 13:10:43 multinode-894400 kubelet[1532]: I0318 13:10:43.056417    1532 scope.go:117] "RemoveContainer" containerID="c51f768a2f642fdffc6de67f101be5abd8bbaec83ef13011b47efab5aad27134"
	I0318 13:11:01.059892    2404 logs.go:123] Gathering logs for etcd [5f0887d1e691] ...
	I0318 13:11:01.059892    2404 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f0887d1e691"
	I0318 13:11:01.087769    2404 command_runner.go:130] ! {"level":"warn","ts":"2024-03-18T13:09:44.778754Z","caller":"embed/config.go:673","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I0318 13:11:01.088679    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:44.779618Z","caller":"etcdmain/etcd.go:73","msg":"Running: ","args":["etcd","--advertise-client-urls=https://172.30.130.156:2379","--cert-file=/var/lib/minikube/certs/etcd/server.crt","--client-cert-auth=true","--data-dir=/var/lib/minikube/etcd","--experimental-initial-corrupt-check=true","--experimental-watch-progress-notify-interval=5s","--initial-advertise-peer-urls=https://172.30.130.156:2380","--initial-cluster=multinode-894400=https://172.30.130.156:2380","--key-file=/var/lib/minikube/certs/etcd/server.key","--listen-client-urls=https://127.0.0.1:2379,https://172.30.130.156:2379","--listen-metrics-urls=http://127.0.0.1:2381","--listen-peer-urls=https://172.30.130.156:2380","--name=multinode-894400","--peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt","--peer-client-cert-auth=true","--peer-key-file=/var/lib/minikube/certs/etcd/peer.key","--peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--proxy-refresh-interval=70000","--snapshot-count=10000","--trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt"]}
	I0318 13:11:01.088679    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:44.780287Z","caller":"etcdmain/etcd.go:116","msg":"server has been already initialized","data-dir":"/var/lib/minikube/etcd","dir-type":"member"}
	I0318 13:11:01.088679    2404 command_runner.go:130] ! {"level":"warn","ts":"2024-03-18T13:09:44.780316Z","caller":"embed/config.go:673","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I0318 13:11:01.088679    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:44.780326Z","caller":"embed/etcd.go:127","msg":"configuring peer listeners","listen-peer-urls":["https://172.30.130.156:2380"]}
	I0318 13:11:01.088679    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:44.780518Z","caller":"embed/etcd.go:495","msg":"starting with peer TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I0318 13:11:01.088679    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:44.782775Z","caller":"embed/etcd.go:135","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://172.30.130.156:2379"]}
	I0318 13:11:01.088679    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:44.785511Z","caller":"embed/etcd.go:309","msg":"starting an etcd server","etcd-version":"3.5.9","git-sha":"bdbbde998","go-version":"go1.19.9","go-os":"linux","go-arch":"amd64","max-cpu-set":2,"max-cpu-available":2,"member-initialized":true,"name":"multinode-894400","data-dir":"/var/lib/minikube/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/minikube/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://172.30.130.156:2380"],"listen-peer-urls":["https://172.30.130.156:2380"],"advertise-client-urls":["https://172.30.130.156:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.30.130.156:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"cors":["*"],"host-whitelist":["*"],"initial-cluster":"","initial-cluster-state":"new","initial-cluster-token":"","quota-backend-bytes":2147483648,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"initial-corrupt-check":true,"corrupt-check-time-interval":"0s","compact-check-time-enabled":false,"compact-check-time-interval":"1m0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","downgrade-check-interval":"5s"}
	I0318 13:11:01.088679    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:44.809621Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"22.951578ms"}
	I0318 13:11:01.089225    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:44.849189Z","caller":"etcdserver/server.go:530","msg":"No snapshot found. Recovering WAL from scratch!"}
	I0318 13:11:01.089225    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:44.872854Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"2db881e830cc2153","local-member-id":"c2557cd98fa8d31a","commit-index":1981}
	I0318 13:11:01.089225    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:44.87358Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c2557cd98fa8d31a switched to configuration voters=()"}
	I0318 13:11:01.089225    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:44.873736Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c2557cd98fa8d31a became follower at term 2"}
	I0318 13:11:01.089225    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:44.873929Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft c2557cd98fa8d31a [peers: [], term: 2, commit: 1981, applied: 0, lastindex: 1981, lastterm: 2]"}
	I0318 13:11:01.089367    2404 command_runner.go:130] ! {"level":"warn","ts":"2024-03-18T13:09:44.887865Z","caller":"auth/store.go:1238","msg":"simple token is not cryptographically signed"}
	I0318 13:11:01.089367    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:44.892732Z","caller":"mvcc/kvstore.go:323","msg":"restored last compact revision","meta-bucket-name":"meta","meta-bucket-name-key":"finishedCompactRev","restored-compact-revision":1376}
	I0318 13:11:01.089367    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:44.89955Z","caller":"mvcc/kvstore.go:393","msg":"kvstore restored","current-rev":1715}
	I0318 13:11:01.089367    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:44.914592Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	I0318 13:11:01.089466    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:44.926835Z","caller":"etcdserver/corrupt.go:95","msg":"starting initial corruption check","local-member-id":"c2557cd98fa8d31a","timeout":"7s"}
	I0318 13:11:01.089466    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:44.928545Z","caller":"etcdserver/corrupt.go:165","msg":"initial corruption checking passed; no corruption","local-member-id":"c2557cd98fa8d31a"}
	I0318 13:11:01.089466    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:44.930225Z","caller":"etcdserver/server.go:854","msg":"starting etcd server","local-member-id":"c2557cd98fa8d31a","local-server-version":"3.5.9","cluster-version":"to_be_decided"}
	I0318 13:11:01.089466    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:44.930859Z","caller":"etcdserver/server.go:754","msg":"starting initial election tick advance","election-ticks":10}
	I0318 13:11:01.089556    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:44.931762Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c2557cd98fa8d31a switched to configuration voters=(14003235890238378778)"}
	I0318 13:11:01.089556    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:44.932128Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"2db881e830cc2153","local-member-id":"c2557cd98fa8d31a","added-peer-id":"c2557cd98fa8d31a","added-peer-peer-urls":["https://172.30.129.141:2380"]}
	I0318 13:11:01.089633    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:44.933388Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"2db881e830cc2153","local-member-id":"c2557cd98fa8d31a","cluster-version":"3.5"}
	I0318 13:11:01.089633    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:44.933717Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	I0318 13:11:01.089633    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:44.946226Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	I0318 13:11:01.089736    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:44.947818Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	I0318 13:11:01.089736    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:44.948803Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	I0318 13:11:01.089826    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:44.954567Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I0318 13:11:01.089826    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:44.954988Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"c2557cd98fa8d31a","initial-advertise-peer-urls":["https://172.30.130.156:2380"],"listen-peer-urls":["https://172.30.130.156:2380"],"advertise-client-urls":["https://172.30.130.156:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.30.130.156:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	I0318 13:11:01.089912    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:44.955173Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	I0318 13:11:01.089912    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:44.954599Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"172.30.130.156:2380"}
	I0318 13:11:01.089912    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:44.956126Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"172.30.130.156:2380"}
	I0318 13:11:01.089912    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:46.775466Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c2557cd98fa8d31a is starting a new election at term 2"}
	I0318 13:11:01.090001    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:46.775581Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c2557cd98fa8d31a became pre-candidate at term 2"}
	I0318 13:11:01.090001    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:46.775704Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c2557cd98fa8d31a received MsgPreVoteResp from c2557cd98fa8d31a at term 2"}
	I0318 13:11:01.090001    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:46.775731Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c2557cd98fa8d31a became candidate at term 3"}
	I0318 13:11:01.090001    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:46.77574Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c2557cd98fa8d31a received MsgVoteResp from c2557cd98fa8d31a at term 3"}
	I0318 13:11:01.090001    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:46.775752Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c2557cd98fa8d31a became leader at term 3"}
	I0318 13:11:01.090001    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:46.775764Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: c2557cd98fa8d31a elected leader c2557cd98fa8d31a at term 3"}
	I0318 13:11:01.090105    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:46.782683Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"c2557cd98fa8d31a","local-member-attributes":"{Name:multinode-894400 ClientURLs:[https://172.30.130.156:2379]}","request-path":"/0/members/c2557cd98fa8d31a/attributes","cluster-id":"2db881e830cc2153","publish-timeout":"7s"}
	I0318 13:11:01.090105    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:46.78269Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	I0318 13:11:01.090105    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:46.782706Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	I0318 13:11:01.090105    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:46.783976Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.30.130.156:2379"}
	I0318 13:11:01.090205    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:46.783993Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	I0318 13:11:01.090205    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:46.788664Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	I0318 13:11:01.090205    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:46.788817Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	I0318 13:11:01.097528    2404 logs.go:123] Gathering logs for kube-scheduler [66ee8be9fada] ...
	I0318 13:11:01.097588    2404 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66ee8be9fada"
	I0318 13:11:01.122600    2404 command_runner.go:130] ! I0318 13:09:45.699415       1 serving.go:348] Generated self-signed cert in-memory
	I0318 13:11:01.122600    2404 command_runner.go:130] ! W0318 13:09:48.342100       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	I0318 13:11:01.122914    2404 command_runner.go:130] ! W0318 13:09:48.342243       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	I0318 13:11:01.122914    2404 command_runner.go:130] ! W0318 13:09:48.342324       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	I0318 13:11:01.122987    2404 command_runner.go:130] ! W0318 13:09:48.342374       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0318 13:11:01.122987    2404 command_runner.go:130] ! I0318 13:09:48.402495       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.4"
	I0318 13:11:01.122987    2404 command_runner.go:130] ! I0318 13:09:48.402540       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 13:11:01.122987    2404 command_runner.go:130] ! I0318 13:09:48.407228       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0318 13:11:01.122987    2404 command_runner.go:130] ! I0318 13:09:48.409117       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0318 13:11:01.122987    2404 command_runner.go:130] ! I0318 13:09:48.410197       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0318 13:11:01.123058    2404 command_runner.go:130] ! I0318 13:09:48.410738       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0318 13:11:01.123058    2404 command_runner.go:130] ! I0318 13:09:48.510577       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0318 13:11:01.125506    2404 logs.go:123] Gathering logs for kube-proxy [9335855aab63] ...
	I0318 13:11:01.125577    2404 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9335855aab63"
	I0318 13:11:01.158945    2404 command_runner.go:130] ! I0318 12:47:42.888603       1 server_others.go:69] "Using iptables proxy"
	I0318 13:11:01.159256    2404 command_runner.go:130] ! I0318 12:47:42.909658       1 node.go:141] Successfully retrieved node IP: 172.30.129.141
	I0318 13:11:01.159256    2404 command_runner.go:130] ! I0318 12:47:42.965774       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0318 13:11:01.159256    2404 command_runner.go:130] ! I0318 12:47:42.965824       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0318 13:11:01.159256    2404 command_runner.go:130] ! I0318 12:47:42.983172       1 server_others.go:152] "Using iptables Proxier"
	I0318 13:11:01.159256    2404 command_runner.go:130] ! I0318 12:47:42.983221       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0318 13:11:01.159256    2404 command_runner.go:130] ! I0318 12:47:42.983471       1 server.go:846] "Version info" version="v1.28.4"
	I0318 13:11:01.159256    2404 command_runner.go:130] ! I0318 12:47:42.983484       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 13:11:01.159256    2404 command_runner.go:130] ! I0318 12:47:42.987719       1 config.go:188] "Starting service config controller"
	I0318 13:11:01.159256    2404 command_runner.go:130] ! I0318 12:47:42.987733       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0318 13:11:01.159256    2404 command_runner.go:130] ! I0318 12:47:42.987775       1 config.go:97] "Starting endpoint slice config controller"
	I0318 13:11:01.159256    2404 command_runner.go:130] ! I0318 12:47:42.987781       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0318 13:11:01.159833    2404 command_runner.go:130] ! I0318 12:47:42.988298       1 config.go:315] "Starting node config controller"
	I0318 13:11:01.159833    2404 command_runner.go:130] ! I0318 12:47:42.988306       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0318 13:11:01.159833    2404 command_runner.go:130] ! I0318 12:47:43.088485       1 shared_informer.go:318] Caches are synced for service config
	I0318 13:11:01.159833    2404 command_runner.go:130] ! I0318 12:47:43.088594       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0318 13:11:01.159833    2404 command_runner.go:130] ! I0318 12:47:43.088517       1 shared_informer.go:318] Caches are synced for node config
	I0318 13:11:01.161255    2404 logs.go:123] Gathering logs for coredns [3c3bc988c74c] ...
	I0318 13:11:01.161255    2404 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c3bc988c74c"
	I0318 13:11:01.188865    2404 command_runner.go:130] > .:53
	I0318 13:11:01.188865    2404 command_runner.go:130] > [INFO] plugin/reload: Running configuration SHA512 = fa2b120b3007aba3ef70c07d73473d14986404bfc5683cceeb89e8950d5ace894afca7d43365324975f99d1b2da3d6473c722fe82795d39a4ee8b4b969b7aa71
	I0318 13:11:01.188865    2404 command_runner.go:130] > CoreDNS-1.10.1
	I0318 13:11:01.188865    2404 command_runner.go:130] > linux/amd64, go1.20, 055b2c3
	I0318 13:11:01.188865    2404 command_runner.go:130] > [INFO] 127.0.0.1:47251 - 801 "HINFO IN 2968659138506762197.6766024496084331989. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.051583557s
	I0318 13:11:01.188865    2404 logs.go:123] Gathering logs for kube-controller-manager [7aa5cf4ec378] ...
	I0318 13:11:01.188865    2404 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7aa5cf4ec378"
	I0318 13:11:01.215116    2404 command_runner.go:130] ! I0318 12:47:22.447675       1 serving.go:348] Generated self-signed cert in-memory
	I0318 13:11:01.216007    2404 command_runner.go:130] ! I0318 12:47:22.964394       1 controllermanager.go:189] "Starting" version="v1.28.4"
	I0318 13:11:01.216007    2404 command_runner.go:130] ! I0318 12:47:22.964509       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 13:11:01.216007    2404 command_runner.go:130] ! I0318 12:47:22.966671       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0318 13:11:01.216007    2404 command_runner.go:130] ! I0318 12:47:22.967091       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0318 13:11:01.216007    2404 command_runner.go:130] ! I0318 12:47:22.968348       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0318 13:11:01.216132    2404 command_runner.go:130] ! I0318 12:47:22.969286       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0318 13:11:01.216132    2404 command_runner.go:130] ! I0318 12:47:27.391471       1 shared_informer.go:311] Waiting for caches to sync for tokens
	I0318 13:11:01.216159    2404 command_runner.go:130] ! I0318 12:47:27.423488       1 controllermanager.go:642] "Started controller" controller="garbage-collector-controller"
	I0318 13:11:01.216159    2404 command_runner.go:130] ! I0318 12:47:27.424256       1 garbagecollector.go:155] "Starting controller" controller="garbagecollector"
	I0318 13:11:01.216159    2404 command_runner.go:130] ! I0318 12:47:27.424289       1 shared_informer.go:311] Waiting for caches to sync for garbage collector
	I0318 13:11:01.216159    2404 command_runner.go:130] ! I0318 12:47:27.424374       1 graph_builder.go:294] "Running" component="GraphBuilder"
	I0318 13:11:01.216256    2404 command_runner.go:130] ! I0318 12:47:27.451725       1 controllermanager.go:642] "Started controller" controller="ephemeral-volume-controller"
	I0318 13:11:01.216256    2404 command_runner.go:130] ! I0318 12:47:27.451967       1 controller.go:169] "Starting ephemeral volume controller"
	I0318 13:11:01.216256    2404 command_runner.go:130] ! I0318 12:47:27.452425       1 shared_informer.go:311] Waiting for caches to sync for ephemeral
	I0318 13:11:01.216256    2404 command_runner.go:130] ! I0318 12:47:27.464873       1 controllermanager.go:642] "Started controller" controller="serviceaccount-controller"
	I0318 13:11:01.216256    2404 command_runner.go:130] ! I0318 12:47:27.465150       1 serviceaccounts_controller.go:111] "Starting service account controller"
	I0318 13:11:01.216318    2404 command_runner.go:130] ! I0318 12:47:27.465172       1 shared_informer.go:311] Waiting for caches to sync for service account
	I0318 13:11:01.216458    2404 command_runner.go:130] ! I0318 12:47:27.491949       1 shared_informer.go:318] Caches are synced for tokens
	I0318 13:11:01.216458    2404 command_runner.go:130] ! I0318 12:47:37.491900       1 range_allocator.go:111] "No Secondary Service CIDR provided. Skipping filtering out secondary service addresses"
	I0318 13:11:01.216458    2404 command_runner.go:130] ! I0318 12:47:37.492009       1 controllermanager.go:642] "Started controller" controller="node-ipam-controller"
	I0318 13:11:01.216458    2404 command_runner.go:130] ! I0318 12:47:37.492602       1 node_ipam_controller.go:162] "Starting ipam controller"
	I0318 13:11:01.216458    2404 command_runner.go:130] ! I0318 12:47:37.492659       1 shared_informer.go:311] Waiting for caches to sync for node
	I0318 13:11:01.216458    2404 command_runner.go:130] ! E0318 12:47:37.494780       1 core.go:213] "Failed to start cloud node lifecycle controller" err="no cloud provider provided"
	I0318 13:11:01.216558    2404 command_runner.go:130] ! I0318 12:47:37.494859       1 controllermanager.go:620] "Warning: skipping controller" controller="cloud-node-lifecycle-controller"
	I0318 13:11:01.216558    2404 command_runner.go:130] ! I0318 12:47:37.511992       1 controllermanager.go:642] "Started controller" controller="persistentvolume-binder-controller"
	I0318 13:11:01.216558    2404 command_runner.go:130] ! I0318 12:47:37.512162       1 pv_controller_base.go:319] "Starting persistent volume controller"
	I0318 13:11:01.216558    2404 command_runner.go:130] ! I0318 12:47:37.512576       1 shared_informer.go:311] Waiting for caches to sync for persistent volume
	I0318 13:11:01.216746    2404 command_runner.go:130] ! I0318 12:47:37.525022       1 controllermanager.go:642] "Started controller" controller="persistentvolume-protection-controller"
	I0318 13:11:01.216746    2404 command_runner.go:130] ! I0318 12:47:37.525273       1 pv_protection_controller.go:78] "Starting PV protection controller"
	I0318 13:11:01.216746    2404 command_runner.go:130] ! I0318 12:47:37.525287       1 shared_informer.go:311] Waiting for caches to sync for PV protection
	I0318 13:11:01.216746    2404 command_runner.go:130] ! I0318 12:47:37.540701       1 controllermanager.go:642] "Started controller" controller="endpointslice-controller"
	I0318 13:11:01.216746    2404 command_runner.go:130] ! I0318 12:47:37.540905       1 endpointslice_controller.go:264] "Starting endpoint slice controller"
	I0318 13:11:01.216746    2404 command_runner.go:130] ! I0318 12:47:37.540914       1 shared_informer.go:311] Waiting for caches to sync for endpoint_slice
	I0318 13:11:01.217297    2404 command_runner.go:130] ! I0318 12:47:37.562000       1 controllermanager.go:642] "Started controller" controller="replicaset-controller"
	I0318 13:11:01.217526    2404 command_runner.go:130] ! I0318 12:47:37.562256       1 replica_set.go:214] "Starting controller" name="replicaset"
	I0318 13:11:01.217526    2404 command_runner.go:130] ! I0318 12:47:37.562286       1 shared_informer.go:311] Waiting for caches to sync for ReplicaSet
	I0318 13:11:01.217611    2404 command_runner.go:130] ! I0318 12:47:37.574397       1 controllermanager.go:642] "Started controller" controller="persistentvolume-expander-controller"
	I0318 13:11:01.217815    2404 command_runner.go:130] ! I0318 12:47:37.574869       1 expand_controller.go:328] "Starting expand controller"
	I0318 13:11:01.218636    2404 command_runner.go:130] ! I0318 12:47:37.574937       1 shared_informer.go:311] Waiting for caches to sync for expand
	I0318 13:11:01.218737    2404 command_runner.go:130] ! I0318 12:47:37.587914       1 controllermanager.go:642] "Started controller" controller="clusterrole-aggregation-controller"
	I0318 13:11:01.218737    2404 command_runner.go:130] ! I0318 12:47:37.588166       1 clusterroleaggregation_controller.go:189] "Starting ClusterRoleAggregator controller"
	I0318 13:11:01.218737    2404 command_runner.go:130] ! I0318 12:47:37.588199       1 shared_informer.go:311] Waiting for caches to sync for ClusterRoleAggregator
	I0318 13:11:01.218737    2404 command_runner.go:130] ! I0318 12:47:37.609721       1 controllermanager.go:642] "Started controller" controller="daemonset-controller"
	I0318 13:11:01.218799    2404 command_runner.go:130] ! I0318 12:47:37.615354       1 daemon_controller.go:291] "Starting daemon sets controller"
	I0318 13:11:01.218799    2404 command_runner.go:130] ! I0318 12:47:37.615371       1 shared_informer.go:311] Waiting for caches to sync for daemon sets
	I0318 13:11:01.218824    2404 command_runner.go:130] ! I0318 12:47:37.624660       1 controllermanager.go:642] "Started controller" controller="ttl-controller"
	I0318 13:11:01.218824    2404 command_runner.go:130] ! I0318 12:47:37.624898       1 ttl_controller.go:124] "Starting TTL controller"
	I0318 13:11:01.218852    2404 command_runner.go:130] ! I0318 12:47:37.625063       1 shared_informer.go:311] Waiting for caches to sync for TTL
	I0318 13:11:01.218852    2404 command_runner.go:130] ! I0318 12:47:37.637461       1 controllermanager.go:642] "Started controller" controller="ttl-after-finished-controller"
	I0318 13:11:01.218852    2404 command_runner.go:130] ! I0318 12:47:37.637588       1 ttlafterfinished_controller.go:109] "Starting TTL after finished controller"
	I0318 13:11:01.218852    2404 command_runner.go:130] ! I0318 12:47:37.637699       1 shared_informer.go:311] Waiting for caches to sync for TTL after finished
	I0318 13:11:01.218852    2404 command_runner.go:130] ! I0318 12:47:37.649314       1 controllermanager.go:642] "Started controller" controller="deployment-controller"
	I0318 13:11:01.218852    2404 command_runner.go:130] ! I0318 12:47:37.650380       1 deployment_controller.go:168] "Starting controller" controller="deployment"
	I0318 13:11:01.218852    2404 command_runner.go:130] ! I0318 12:47:37.650462       1 shared_informer.go:311] Waiting for caches to sync for deployment
	I0318 13:11:01.218852    2404 command_runner.go:130] ! I0318 12:47:37.830447       1 controllermanager.go:642] "Started controller" controller="disruption-controller"
	I0318 13:11:01.218852    2404 command_runner.go:130] ! I0318 12:47:37.830565       1 disruption.go:433] "Sending events to api server."
	I0318 13:11:01.218852    2404 command_runner.go:130] ! I0318 12:47:37.830686       1 disruption.go:444] "Starting disruption controller"
	I0318 13:11:01.218852    2404 command_runner.go:130] ! I0318 12:47:37.830725       1 shared_informer.go:311] Waiting for caches to sync for disruption
	I0318 13:11:01.218852    2404 command_runner.go:130] ! I0318 12:47:37.985254       1 controllermanager.go:642] "Started controller" controller="endpoints-controller"
	I0318 13:11:01.218852    2404 command_runner.go:130] ! I0318 12:47:37.985453       1 endpoints_controller.go:174] "Starting endpoint controller"
	I0318 13:11:01.218852    2404 command_runner.go:130] ! I0318 12:47:37.985784       1 shared_informer.go:311] Waiting for caches to sync for endpoint
	I0318 13:11:01.218852    2404 command_runner.go:130] ! I0318 12:47:38.288543       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="endpoints"
	I0318 13:11:01.218852    2404 command_runner.go:130] ! I0318 12:47:38.289132       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="poddisruptionbudgets.policy"
	I0318 13:11:01.218852    2404 command_runner.go:130] ! I0318 12:47:38.289248       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="roles.rbac.authorization.k8s.io"
	I0318 13:11:01.218852    2404 command_runner.go:130] ! I0318 12:47:38.289520       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="statefulsets.apps"
	I0318 13:11:01.218852    2404 command_runner.go:130] ! I0318 12:47:38.289722       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="leases.coordination.k8s.io"
	I0318 13:11:01.218852    2404 command_runner.go:130] ! I0318 12:47:38.289927       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="limitranges"
	I0318 13:11:01.218852    2404 command_runner.go:130] ! I0318 12:47:38.290240       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="serviceaccounts"
	I0318 13:11:01.218852    2404 command_runner.go:130] ! I0318 12:47:38.290340       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="daemonsets.apps"
	I0318 13:11:01.218852    2404 command_runner.go:130] ! I0318 12:47:38.290418       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="jobs.batch"
	I0318 13:11:01.218852    2404 command_runner.go:130] ! I0318 12:47:38.290502       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="ingresses.networking.k8s.io"
	I0318 13:11:01.218852    2404 command_runner.go:130] ! I0318 12:47:38.290550       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="rolebindings.rbac.authorization.k8s.io"
	I0318 13:11:01.218852    2404 command_runner.go:130] ! I0318 12:47:38.290591       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="endpointslices.discovery.k8s.io"
	I0318 13:11:01.218852    2404 command_runner.go:130] ! I0318 12:47:38.290851       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="podtemplates"
	I0318 13:11:01.218852    2404 command_runner.go:130] ! I0318 12:47:38.291026       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="controllerrevisions.apps"
	I0318 13:11:01.218852    2404 command_runner.go:130] ! I0318 12:47:38.291117       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="replicasets.apps"
	I0318 13:11:01.218852    2404 command_runner.go:130] ! I0318 12:47:38.291149       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="cronjobs.batch"
	I0318 13:11:01.218852    2404 command_runner.go:130] ! I0318 12:47:38.291277       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="horizontalpodautoscalers.autoscaling"
	I0318 13:11:01.218852    2404 command_runner.go:130] ! I0318 12:47:38.291315       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="deployments.apps"
	I0318 13:11:01.218852    2404 command_runner.go:130] ! I0318 12:47:38.291392       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="csistoragecapacities.storage.k8s.io"
	I0318 13:11:01.218852    2404 command_runner.go:130] ! I0318 12:47:38.291423       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="networkpolicies.networking.k8s.io"
	I0318 13:11:01.219501    2404 command_runner.go:130] ! I0318 12:47:38.291465       1 controllermanager.go:642] "Started controller" controller="resourcequota-controller"
	I0318 13:11:01.219728    2404 command_runner.go:130] ! I0318 12:47:38.291591       1 resource_quota_controller.go:294] "Starting resource quota controller"
	I0318 13:11:01.219728    2404 command_runner.go:130] ! I0318 12:47:38.291607       1 shared_informer.go:311] Waiting for caches to sync for resource quota
	I0318 13:11:01.219728    2404 command_runner.go:130] ! I0318 12:47:38.291720       1 resource_quota_monitor.go:305] "QuotaMonitor running"
	I0318 13:11:01.219793    2404 command_runner.go:130] ! I0318 12:47:38.436018       1 controllermanager.go:642] "Started controller" controller="job-controller"
	I0318 13:11:01.219793    2404 command_runner.go:130] ! I0318 12:47:38.436093       1 job_controller.go:226] "Starting job controller"
	I0318 13:11:01.219793    2404 command_runner.go:130] ! I0318 12:47:38.436112       1 shared_informer.go:311] Waiting for caches to sync for job
	I0318 13:11:01.219793    2404 command_runner.go:130] ! I0318 12:47:38.731490       1 controllermanager.go:642] "Started controller" controller="horizontal-pod-autoscaler-controller"
	I0318 13:11:01.219912    2404 command_runner.go:130] ! I0318 12:47:38.731606       1 horizontal.go:200] "Starting HPA controller"
	I0318 13:11:01.219954    2404 command_runner.go:130] ! I0318 12:47:38.731671       1 shared_informer.go:311] Waiting for caches to sync for HPA
	I0318 13:11:01.219954    2404 command_runner.go:130] ! I0318 12:47:38.886224       1 controllermanager.go:642] "Started controller" controller="certificatesigningrequest-approving-controller"
	I0318 13:11:01.219954    2404 command_runner.go:130] ! I0318 12:47:38.886401       1 certificate_controller.go:115] "Starting certificate controller" name="csrapproving"
	I0318 13:11:01.220022    2404 command_runner.go:130] ! I0318 12:47:38.886705       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrapproving
	I0318 13:11:01.220022    2404 command_runner.go:130] ! I0318 12:47:38.930325       1 controllermanager.go:642] "Started controller" controller="certificatesigningrequest-cleaner-controller"
	I0318 13:11:01.220022    2404 command_runner.go:130] ! I0318 12:47:38.930354       1 core.go:228] "Warning: configure-cloud-routes is set, but no cloud provider specified. Will not configure cloud provider routes."
	I0318 13:11:01.220085    2404 command_runner.go:130] ! I0318 12:47:38.930362       1 controllermanager.go:620] "Warning: skipping controller" controller="node-route-controller"
	I0318 13:11:01.220085    2404 command_runner.go:130] ! I0318 12:47:38.930398       1 cleaner.go:83] "Starting CSR cleaner controller"
	I0318 13:11:01.220085    2404 command_runner.go:130] ! I0318 12:47:39.085782       1 controllermanager.go:642] "Started controller" controller="persistentvolume-attach-detach-controller"
	I0318 13:11:01.220085    2404 command_runner.go:130] ! I0318 12:47:39.085905       1 attach_detach_controller.go:337] "Starting attach detach controller"
	I0318 13:11:01.220144    2404 command_runner.go:130] ! I0318 12:47:39.085920       1 shared_informer.go:311] Waiting for caches to sync for attach detach
	I0318 13:11:01.220144    2404 command_runner.go:130] ! I0318 12:47:39.236755       1 controllermanager.go:642] "Started controller" controller="endpointslice-mirroring-controller"
	I0318 13:11:01.220144    2404 command_runner.go:130] ! I0318 12:47:39.237434       1 endpointslicemirroring_controller.go:223] "Starting EndpointSliceMirroring controller"
	I0318 13:11:01.220144    2404 command_runner.go:130] ! I0318 12:47:39.237522       1 shared_informer.go:311] Waiting for caches to sync for endpoint_slice_mirroring
	I0318 13:11:01.220207    2404 command_runner.go:130] ! I0318 12:47:39.390953       1 controllermanager.go:642] "Started controller" controller="replicationcontroller-controller"
	I0318 13:11:01.220207    2404 command_runner.go:130] ! I0318 12:47:39.391480       1 replica_set.go:214] "Starting controller" name="replicationcontroller"
	I0318 13:11:01.220207    2404 command_runner.go:130] ! I0318 12:47:39.391646       1 shared_informer.go:311] Waiting for caches to sync for ReplicationController
	I0318 13:11:01.220207    2404 command_runner.go:130] ! I0318 12:47:39.535570       1 controllermanager.go:642] "Started controller" controller="cronjob-controller"
	I0318 13:11:01.220265    2404 command_runner.go:130] ! I0318 12:47:39.536071       1 cronjob_controllerv2.go:139] "Starting cronjob controller v2"
	I0318 13:11:01.220265    2404 command_runner.go:130] ! I0318 12:47:39.536172       1 shared_informer.go:311] Waiting for caches to sync for cronjob
	I0318 13:11:01.220265    2404 command_runner.go:130] ! I0318 12:47:39.582776       1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-kubelet-serving"
	I0318 13:11:01.220265    2404 command_runner.go:130] ! I0318 12:47:39.582876       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I0318 13:11:01.220341    2404 command_runner.go:130] ! I0318 12:47:39.582912       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0318 13:11:01.220341    2404 command_runner.go:130] ! I0318 12:47:39.584602       1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-kubelet-client"
	I0318 13:11:01.220341    2404 command_runner.go:130] ! I0318 12:47:39.584677       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I0318 13:11:01.220341    2404 command_runner.go:130] ! I0318 12:47:39.584724       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0318 13:11:01.220396    2404 command_runner.go:130] ! I0318 12:47:39.585974       1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-kube-apiserver-client"
	I0318 13:11:01.220396    2404 command_runner.go:130] ! I0318 12:47:39.585990       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I0318 13:11:01.220396    2404 command_runner.go:130] ! I0318 12:47:39.586012       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0318 13:11:01.220454    2404 command_runner.go:130] ! I0318 12:47:39.586910       1 controllermanager.go:642] "Started controller" controller="certificatesigningrequest-signing-controller"
	I0318 13:11:01.220454    2404 command_runner.go:130] ! I0318 12:47:39.586968       1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-legacy-unknown"
	I0318 13:11:01.220454    2404 command_runner.go:130] ! I0318 12:47:39.586975       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0318 13:11:01.220454    2404 command_runner.go:130] ! I0318 12:47:39.587044       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0318 13:11:01.220507    2404 command_runner.go:130] ! I0318 12:47:39.735265       1 controllermanager.go:642] "Started controller" controller="token-cleaner-controller"
	I0318 13:11:01.220507    2404 command_runner.go:130] ! I0318 12:47:39.735467       1 tokencleaner.go:112] "Starting token cleaner controller"
	I0318 13:11:01.220507    2404 command_runner.go:130] ! I0318 12:47:39.735494       1 shared_informer.go:311] Waiting for caches to sync for token_cleaner
	I0318 13:11:01.220507    2404 command_runner.go:130] ! I0318 12:47:39.735502       1 shared_informer.go:318] Caches are synced for token_cleaner
	I0318 13:11:01.220507    2404 command_runner.go:130] ! I0318 12:47:39.783594       1 node_lifecycle_controller.go:431] "Controller will reconcile labels"
	I0318 13:11:01.220567    2404 command_runner.go:130] ! I0318 12:47:39.783722       1 controllermanager.go:642] "Started controller" controller="node-lifecycle-controller"
	I0318 13:11:01.220567    2404 command_runner.go:130] ! I0318 12:47:39.783841       1 node_lifecycle_controller.go:465] "Sending events to api server"
	I0318 13:11:01.220567    2404 command_runner.go:130] ! I0318 12:47:39.783860       1 node_lifecycle_controller.go:476] "Starting node controller"
	I0318 13:11:01.220567    2404 command_runner.go:130] ! I0318 12:47:39.784031       1 shared_informer.go:311] Waiting for caches to sync for taint
	I0318 13:11:01.220622    2404 command_runner.go:130] ! E0318 12:47:39.937206       1 core.go:92] "Failed to start service controller" err="WARNING: no cloud provider provided, services of type LoadBalancer will fail"
	I0318 13:11:01.220622    2404 command_runner.go:130] ! I0318 12:47:39.937229       1 controllermanager.go:620] "Warning: skipping controller" controller="service-lb-controller"
	I0318 13:11:01.220622    2404 command_runner.go:130] ! I0318 12:47:40.089508       1 controllermanager.go:642] "Started controller" controller="persistentvolumeclaim-protection-controller"
	I0318 13:11:01.220622    2404 command_runner.go:130] ! I0318 12:47:40.089701       1 pvc_protection_controller.go:102] "Starting PVC protection controller"
	I0318 13:11:01.220697    2404 command_runner.go:130] ! I0318 12:47:40.089793       1 shared_informer.go:311] Waiting for caches to sync for PVC protection
	I0318 13:11:01.220697    2404 command_runner.go:130] ! I0318 12:47:40.235860       1 controllermanager.go:642] "Started controller" controller="root-ca-certificate-publisher-controller"
	I0318 13:11:01.220697    2404 command_runner.go:130] ! I0318 12:47:40.235977       1 publisher.go:102] "Starting root CA cert publisher controller"
	I0318 13:11:01.220697    2404 command_runner.go:130] ! I0318 12:47:40.236063       1 shared_informer.go:311] Waiting for caches to sync for crt configmap
	I0318 13:11:01.220697    2404 command_runner.go:130] ! I0318 12:47:40.386545       1 controllermanager.go:642] "Started controller" controller="pod-garbage-collector-controller"
	I0318 13:11:01.220749    2404 command_runner.go:130] ! I0318 12:47:40.386692       1 gc_controller.go:101] "Starting GC controller"
	I0318 13:11:01.220749    2404 command_runner.go:130] ! I0318 12:47:40.386704       1 shared_informer.go:311] Waiting for caches to sync for GC
	I0318 13:11:01.220749    2404 command_runner.go:130] ! I0318 12:47:40.644175       1 controllermanager.go:642] "Started controller" controller="namespace-controller"
	I0318 13:11:01.220801    2404 command_runner.go:130] ! I0318 12:47:40.644284       1 namespace_controller.go:197] "Starting namespace controller"
	I0318 13:11:01.220801    2404 command_runner.go:130] ! I0318 12:47:40.644293       1 shared_informer.go:311] Waiting for caches to sync for namespace
	I0318 13:11:01.220801    2404 command_runner.go:130] ! I0318 12:47:40.784991       1 controllermanager.go:642] "Started controller" controller="statefulset-controller"
	I0318 13:11:01.220855    2404 command_runner.go:130] ! I0318 12:47:40.785464       1 stateful_set.go:161] "Starting stateful set controller"
	I0318 13:11:01.220855    2404 command_runner.go:130] ! I0318 12:47:40.785492       1 shared_informer.go:311] Waiting for caches to sync for stateful set
	I0318 13:11:01.220855    2404 command_runner.go:130] ! I0318 12:47:40.936785       1 controllermanager.go:642] "Started controller" controller="bootstrap-signer-controller"
	I0318 13:11:01.220855    2404 command_runner.go:130] ! I0318 12:47:40.939800       1 shared_informer.go:311] Waiting for caches to sync for bootstrap_signer
	I0318 13:11:01.220925    2404 command_runner.go:130] ! I0318 12:47:40.947184       1 shared_informer.go:311] Waiting for caches to sync for resource quota
	I0318 13:11:01.220925    2404 command_runner.go:130] ! I0318 12:47:40.968017       1 shared_informer.go:318] Caches are synced for service account
	I0318 13:11:01.220925    2404 command_runner.go:130] ! I0318 12:47:40.971773       1 shared_informer.go:311] Waiting for caches to sync for garbage collector
	I0318 13:11:01.220925    2404 command_runner.go:130] ! I0318 12:47:40.976691       1 shared_informer.go:318] Caches are synced for expand
	I0318 13:11:01.220925    2404 command_runner.go:130] ! I0318 12:47:40.986014       1 shared_informer.go:318] Caches are synced for endpoint
	I0318 13:11:01.220996    2404 command_runner.go:130] ! I0318 12:47:40.995675       1 shared_informer.go:318] Caches are synced for PVC protection
	I0318 13:11:01.220996    2404 command_runner.go:130] ! I0318 12:47:41.009015       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-894400\" does not exist"
	I0318 13:11:01.220996    2404 command_runner.go:130] ! I0318 12:47:41.012612       1 shared_informer.go:318] Caches are synced for persistent volume
	I0318 13:11:01.220996    2404 command_runner.go:130] ! I0318 12:47:41.016383       1 shared_informer.go:318] Caches are synced for daemon sets
	I0318 13:11:01.221057    2404 command_runner.go:130] ! I0318 12:47:41.025198       1 shared_informer.go:318] Caches are synced for TTL
	I0318 13:11:01.221057    2404 command_runner.go:130] ! I0318 12:47:41.025462       1 shared_informer.go:318] Caches are synced for PV protection
	I0318 13:11:01.221057    2404 command_runner.go:130] ! I0318 12:47:41.032086       1 shared_informer.go:318] Caches are synced for HPA
	I0318 13:11:01.221057    2404 command_runner.go:130] ! I0318 12:47:41.036463       1 shared_informer.go:318] Caches are synced for job
	I0318 13:11:01.221057    2404 command_runner.go:130] ! I0318 12:47:41.036622       1 shared_informer.go:318] Caches are synced for cronjob
	I0318 13:11:01.221057    2404 command_runner.go:130] ! I0318 12:47:41.036726       1 shared_informer.go:318] Caches are synced for crt configmap
	I0318 13:11:01.221116    2404 command_runner.go:130] ! I0318 12:47:41.037735       1 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring
	I0318 13:11:01.221116    2404 command_runner.go:130] ! I0318 12:47:41.037818       1 shared_informer.go:318] Caches are synced for TTL after finished
	I0318 13:11:01.221116    2404 command_runner.go:130] ! I0318 12:47:41.040360       1 shared_informer.go:318] Caches are synced for bootstrap_signer
	I0318 13:11:01.221116    2404 command_runner.go:130] ! I0318 12:47:41.041850       1 shared_informer.go:318] Caches are synced for endpoint_slice
	I0318 13:11:01.221173    2404 command_runner.go:130] ! I0318 12:47:41.045379       1 shared_informer.go:318] Caches are synced for namespace
	I0318 13:11:01.221173    2404 command_runner.go:130] ! I0318 12:47:41.051530       1 shared_informer.go:318] Caches are synced for deployment
	I0318 13:11:01.221173    2404 command_runner.go:130] ! I0318 12:47:41.053151       1 shared_informer.go:318] Caches are synced for ephemeral
	I0318 13:11:01.221173    2404 command_runner.go:130] ! I0318 12:47:41.063027       1 shared_informer.go:318] Caches are synced for ReplicaSet
	I0318 13:11:01.221227    2404 command_runner.go:130] ! I0318 12:47:41.084212       1 shared_informer.go:318] Caches are synced for taint
	I0318 13:11:01.221227    2404 command_runner.go:130] ! I0318 12:47:41.084612       1 node_lifecycle_controller.go:1225] "Initializing eviction metric for zone" zone=""
	I0318 13:11:01.221227    2404 command_runner.go:130] ! I0318 12:47:41.087983       1 taint_manager.go:205] "Starting NoExecuteTaintManager"
	I0318 13:11:01.221227    2404 command_runner.go:130] ! I0318 12:47:41.088464       1 taint_manager.go:210] "Sending events to api server"
	I0318 13:11:01.221227    2404 command_runner.go:130] ! I0318 12:47:41.089485       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-894400"
	I0318 13:11:01.221301    2404 command_runner.go:130] ! I0318 12:47:41.089526       1 node_lifecycle_controller.go:1029] "Controller detected that all Nodes are not-Ready. Entering master disruption mode"
	I0318 13:11:01.221301    2404 command_runner.go:130] ! I0318 12:47:41.089552       1 shared_informer.go:318] Caches are synced for attach detach
	I0318 13:11:01.221301    2404 command_runner.go:130] ! I0318 12:47:41.089942       1 shared_informer.go:318] Caches are synced for GC
	I0318 13:11:01.221301    2404 command_runner.go:130] ! I0318 12:47:41.090031       1 event.go:307] "Event occurred" object="multinode-894400" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-894400 event: Registered Node multinode-894400 in Controller"
	I0318 13:11:01.221354    2404 command_runner.go:130] ! I0318 12:47:41.090167       1 shared_informer.go:318] Caches are synced for ClusterRoleAggregator
	I0318 13:11:01.221354    2404 command_runner.go:130] ! I0318 12:47:41.090848       1 shared_informer.go:318] Caches are synced for stateful set
	I0318 13:11:01.221354    2404 command_runner.go:130] ! I0318 12:47:41.092093       1 shared_informer.go:318] Caches are synced for ReplicationController
	I0318 13:11:01.221354    2404 command_runner.go:130] ! I0318 12:47:41.092684       1 shared_informer.go:318] Caches are synced for node
	I0318 13:11:01.221354    2404 command_runner.go:130] ! I0318 12:47:41.093255       1 range_allocator.go:174] "Sending events to api server"
	I0318 13:11:01.221419    2404 command_runner.go:130] ! I0318 12:47:41.093537       1 range_allocator.go:178] "Starting range CIDR allocator"
	I0318 13:11:01.221419    2404 command_runner.go:130] ! I0318 12:47:41.093851       1 shared_informer.go:311] Waiting for caches to sync for cidrallocator
	I0318 13:11:01.221419    2404 command_runner.go:130] ! I0318 12:47:41.093958       1 shared_informer.go:318] Caches are synced for cidrallocator
	I0318 13:11:01.221419    2404 command_runner.go:130] ! I0318 12:47:41.119414       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-894400" podCIDRs=["10.244.0.0/24"]
	I0318 13:11:01.221419    2404 command_runner.go:130] ! I0318 12:47:41.148134       1 shared_informer.go:318] Caches are synced for resource quota
	I0318 13:11:01.221480    2404 command_runner.go:130] ! I0318 12:47:41.183853       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-serving
	I0318 13:11:01.221480    2404 command_runner.go:130] ! I0318 12:47:41.184949       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-client
	I0318 13:11:01.221480    2404 command_runner.go:130] ! I0318 12:47:41.186043       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0318 13:11:01.221480    2404 command_runner.go:130] ! I0318 12:47:41.187192       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-legacy-unknown
	I0318 13:11:01.221480    2404 command_runner.go:130] ! I0318 12:47:41.187229       1 shared_informer.go:318] Caches are synced for certificate-csrapproving
	I0318 13:11:01.221564    2404 command_runner.go:130] ! I0318 12:47:41.192066       1 shared_informer.go:318] Caches are synced for resource quota
	I0318 13:11:01.221564    2404 command_runner.go:130] ! I0318 12:47:41.233781       1 shared_informer.go:318] Caches are synced for disruption
	I0318 13:11:01.221564    2404 command_runner.go:130] ! I0318 12:47:41.572914       1 shared_informer.go:318] Caches are synced for garbage collector
	I0318 13:11:01.221564    2404 command_runner.go:130] ! I0318 12:47:41.612936       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-mc5tv"
	I0318 13:11:01.221623    2404 command_runner.go:130] ! I0318 12:47:41.615780       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-hhsxh"
	I0318 13:11:01.221623    2404 command_runner.go:130] ! I0318 12:47:41.625871       1 shared_informer.go:318] Caches are synced for garbage collector
	I0318 13:11:01.221623    2404 command_runner.go:130] ! I0318 12:47:41.626335       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0318 13:11:01.221680    2404 command_runner.go:130] ! I0318 12:47:41.893141       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5dd5756b68 to 2"
	I0318 13:11:01.221680    2404 command_runner.go:130] ! I0318 12:47:42.112244       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-vl6jr"
	I0318 13:11:01.221680    2404 command_runner.go:130] ! I0318 12:47:42.148022       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-456tm"
	I0318 13:11:01.221741    2404 command_runner.go:130] ! I0318 12:47:42.181940       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="289.6659ms"
	I0318 13:11:01.221741    2404 command_runner.go:130] ! I0318 12:47:42.245823       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="63.840303ms"
	I0318 13:11:01.221741    2404 command_runner.go:130] ! I0318 12:47:42.246151       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="107.996µs"
	I0318 13:11:01.221741    2404 command_runner.go:130] ! I0318 12:47:42.470958       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I0318 13:11:01.221797    2404 command_runner.go:130] ! I0318 12:47:42.530265       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-vl6jr"
	I0318 13:11:01.221797    2404 command_runner.go:130] ! I0318 12:47:42.551794       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="82.491503ms"
	I0318 13:11:01.221797    2404 command_runner.go:130] ! I0318 12:47:42.587026       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="35.184179ms"
	I0318 13:11:01.221857    2404 command_runner.go:130] ! I0318 12:47:42.587126       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="64.497µs"
	I0318 13:11:01.221857    2404 command_runner.go:130] ! I0318 12:47:52.958102       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="163.297µs"
	I0318 13:11:01.221857    2404 command_runner.go:130] ! I0318 12:47:52.991751       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="32.399µs"
	I0318 13:11:01.221857    2404 command_runner.go:130] ! I0318 12:47:54.194916       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="59.289µs"
	I0318 13:11:01.221933    2404 command_runner.go:130] ! I0318 12:47:55.238088       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="27.595936ms"
	I0318 13:11:01.221933    2404 command_runner.go:130] ! I0318 12:47:55.238222       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="45.592µs"
	I0318 13:11:01.221933    2404 command_runner.go:130] ! I0318 12:47:56.090728       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I0318 13:11:01.221988    2404 command_runner.go:130] ! I0318 12:50:34.419485       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-894400-m02\" does not exist"
	I0318 13:11:01.221988    2404 command_runner.go:130] ! I0318 12:50:34.437576       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-894400-m02" podCIDRs=["10.244.1.0/24"]
	I0318 13:11:01.221988    2404 command_runner.go:130] ! I0318 12:50:34.454919       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-8bdmn"
	I0318 13:11:01.221988    2404 command_runner.go:130] ! I0318 12:50:34.479103       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-k5lpg"
	I0318 13:11:01.222056    2404 command_runner.go:130] ! I0318 12:50:36.121925       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-894400-m02"
	I0318 13:11:01.222056    2404 command_runner.go:130] ! I0318 12:50:36.122368       1 event.go:307] "Event occurred" object="multinode-894400-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-894400-m02 event: Registered Node multinode-894400-m02 in Controller"
	I0318 13:11:01.222110    2404 command_runner.go:130] ! I0318 12:50:52.539955       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-894400-m02"
	I0318 13:11:01.222110    2404 command_runner.go:130] ! I0318 12:51:17.964827       1 event.go:307] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-5b5d89c9d6 to 2"
	I0318 13:11:01.222110    2404 command_runner.go:130] ! I0318 12:51:17.986964       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5b5d89c9d6-8btgf"
	I0318 13:11:01.222110    2404 command_runner.go:130] ! I0318 12:51:18.004592       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5b5d89c9d6-c2997"
	I0318 13:11:01.222110    2404 command_runner.go:130] ! I0318 12:51:18.026894       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="62.79508ms"
	I0318 13:11:01.222193    2404 command_runner.go:130] ! I0318 12:51:18.045074       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="17.513513ms"
	I0318 13:11:01.222193    2404 command_runner.go:130] ! I0318 12:51:18.046404       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="36.101µs"
	I0318 13:11:01.222219    2404 command_runner.go:130] ! I0318 12:51:18.054157       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="337.914µs"
	I0318 13:11:01.222219    2404 command_runner.go:130] ! I0318 12:51:18.060516       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="26.701µs"
	I0318 13:11:01.222260    2404 command_runner.go:130] ! I0318 12:51:20.804047       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="10.125602ms"
	I0318 13:11:01.222260    2404 command_runner.go:130] ! I0318 12:51:20.804333       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="159.502µs"
	I0318 13:11:01.222260    2404 command_runner.go:130] ! I0318 12:51:21.064706       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="11.788417ms"
	I0318 13:11:01.222260    2404 command_runner.go:130] ! I0318 12:51:21.065229       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="82.401µs"
	I0318 13:11:01.222317    2404 command_runner.go:130] ! I0318 12:55:05.793350       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-894400-m03\" does not exist"
	I0318 13:11:01.222317    2404 command_runner.go:130] ! I0318 12:55:05.797095       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-894400-m02"
	I0318 13:11:01.222317    2404 command_runner.go:130] ! I0318 12:55:05.823205       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-zv9tv"
	I0318 13:11:01.222371    2404 command_runner.go:130] ! I0318 12:55:05.835101       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-894400-m03" podCIDRs=["10.244.2.0/24"]
	I0318 13:11:01.222371    2404 command_runner.go:130] ! I0318 12:55:05.835149       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-745w9"
	I0318 13:11:01.222371    2404 command_runner.go:130] ! I0318 12:55:06.188986       1 event.go:307] "Event occurred" object="multinode-894400-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-894400-m03 event: Registered Node multinode-894400-m03 in Controller"
	I0318 13:11:01.222425    2404 command_runner.go:130] ! I0318 12:55:06.188988       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-894400-m03"
	I0318 13:11:01.222425    2404 command_runner.go:130] ! I0318 12:55:23.671742       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-894400-m02"
	I0318 13:11:01.223029    2404 command_runner.go:130] ! I0318 13:02:46.325539       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-894400-m02"
	I0318 13:11:01.223029    2404 command_runner.go:130] ! I0318 13:02:46.325935       1 event.go:307] "Event occurred" object="multinode-894400-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-894400-m03 status is now: NodeNotReady"
	I0318 13:11:01.223029    2404 command_runner.go:130] ! I0318 13:02:46.344510       1 event.go:307] "Event occurred" object="kube-system/kindnet-zv9tv" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0318 13:11:01.223625    2404 command_runner.go:130] ! I0318 13:02:46.368811       1 event.go:307] "Event occurred" object="kube-system/kube-proxy-745w9" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0318 13:11:01.223625    2404 command_runner.go:130] ! I0318 13:05:19.649225       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-894400-m02"
	I0318 13:11:01.223625    2404 command_runner.go:130] ! I0318 13:05:21.403124       1 event.go:307] "Event occurred" object="multinode-894400-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RemovingNode" message="Node multinode-894400-m03 event: Removing Node multinode-894400-m03 from Controller"
	I0318 13:11:01.223698    2404 command_runner.go:130] ! I0318 13:05:25.832056       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-894400-m03\" does not exist"
	I0318 13:11:01.223698    2404 command_runner.go:130] ! I0318 13:05:25.832348       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-894400-m02"
	I0318 13:11:01.223772    2404 command_runner.go:130] ! I0318 13:05:25.841443       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-894400-m03" podCIDRs=["10.244.3.0/24"]
	I0318 13:11:01.223772    2404 command_runner.go:130] ! I0318 13:05:26.404299       1 event.go:307] "Event occurred" object="multinode-894400-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-894400-m03 event: Registered Node multinode-894400-m03 in Controller"
	I0318 13:11:01.223772    2404 command_runner.go:130] ! I0318 13:05:34.080951       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-894400-m02"
	I0318 13:11:01.223899    2404 command_runner.go:130] ! I0318 13:07:11.961036       1 event.go:307] "Event occurred" object="multinode-894400-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-894400-m03 status is now: NodeNotReady"
	I0318 13:11:01.223937    2404 command_runner.go:130] ! I0318 13:07:11.961077       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-894400-m02"
	I0318 13:11:01.223987    2404 command_runner.go:130] ! I0318 13:07:12.051526       1 event.go:307] "Event occurred" object="kube-system/kindnet-zv9tv" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0318 13:11:01.223987    2404 command_runner.go:130] ! I0318 13:07:12.098168       1 event.go:307] "Event occurred" object="kube-system/kube-proxy-745w9" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0318 13:11:01.241587    2404 logs.go:123] Gathering logs for kindnet [c8e5ec25e910] ...
	I0318 13:11:01.241587    2404 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8e5ec25e910"
	I0318 13:11:01.267235    2404 command_runner.go:130] ! I0318 13:09:50.858529       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0318 13:11:01.267235    2404 command_runner.go:130] ! I0318 13:09:50.859271       1 main.go:107] hostIP = 172.30.130.156
	I0318 13:11:01.267235    2404 command_runner.go:130] ! podIP = 172.30.130.156
	I0318 13:11:01.267235    2404 command_runner.go:130] ! I0318 13:09:50.860380       1 main.go:116] setting mtu 1500 for CNI 
	I0318 13:11:01.267235    2404 command_runner.go:130] ! I0318 13:09:50.930132       1 main.go:146] kindnetd IP family: "ipv4"
	I0318 13:11:01.267235    2404 command_runner.go:130] ! I0318 13:09:50.933463       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0318 13:11:01.268209    2404 command_runner.go:130] ! I0318 13:10:21.283853       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: i/o timeout
	I0318 13:11:01.268209    2404 command_runner.go:130] ! I0318 13:10:21.335833       1 main.go:223] Handling node with IPs: map[172.30.130.156:{}]
	I0318 13:11:01.268209    2404 command_runner.go:130] ! I0318 13:10:21.335942       1 main.go:227] handling current node
	I0318 13:11:01.268209    2404 command_runner.go:130] ! I0318 13:10:21.336264       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:01.268265    2404 command_runner.go:130] ! I0318 13:10:21.336361       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:01.268293    2404 command_runner.go:130] ! I0318 13:10:21.336527       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 172.30.140.66 Flags: [] Table: 0} 
	I0318 13:11:01.268293    2404 command_runner.go:130] ! I0318 13:10:21.336670       1 main.go:223] Handling node with IPs: map[172.30.137.140:{}]
	I0318 13:11:01.268293    2404 command_runner.go:130] ! I0318 13:10:21.336680       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.3.0/24] 
	I0318 13:11:01.268335    2404 command_runner.go:130] ! I0318 13:10:21.336727       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 172.30.137.140 Flags: [] Table: 0} 
	I0318 13:11:01.268335    2404 command_runner.go:130] ! I0318 13:10:31.343996       1 main.go:223] Handling node with IPs: map[172.30.130.156:{}]
	I0318 13:11:01.268335    2404 command_runner.go:130] ! I0318 13:10:31.344324       1 main.go:227] handling current node
	I0318 13:11:01.268335    2404 command_runner.go:130] ! I0318 13:10:31.344341       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:01.268403    2404 command_runner.go:130] ! I0318 13:10:31.344682       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:01.268403    2404 command_runner.go:130] ! I0318 13:10:31.345062       1 main.go:223] Handling node with IPs: map[172.30.137.140:{}]
	I0318 13:11:01.268403    2404 command_runner.go:130] ! I0318 13:10:31.345087       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.3.0/24] 
	I0318 13:11:01.268403    2404 command_runner.go:130] ! I0318 13:10:41.357494       1 main.go:223] Handling node with IPs: map[172.30.130.156:{}]
	I0318 13:11:01.268454    2404 command_runner.go:130] ! I0318 13:10:41.357586       1 main.go:227] handling current node
	I0318 13:11:01.268495    2404 command_runner.go:130] ! I0318 13:10:41.357599       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:01.268538    2404 command_runner.go:130] ! I0318 13:10:41.357606       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:01.268538    2404 command_runner.go:130] ! I0318 13:10:41.357708       1 main.go:223] Handling node with IPs: map[172.30.137.140:{}]
	I0318 13:11:01.268538    2404 command_runner.go:130] ! I0318 13:10:41.357932       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.3.0/24] 
	I0318 13:11:01.268538    2404 command_runner.go:130] ! I0318 13:10:51.367560       1 main.go:223] Handling node with IPs: map[172.30.130.156:{}]
	I0318 13:11:01.268595    2404 command_runner.go:130] ! I0318 13:10:51.367661       1 main.go:227] handling current node
	I0318 13:11:01.268595    2404 command_runner.go:130] ! I0318 13:10:51.367675       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:01.268595    2404 command_runner.go:130] ! I0318 13:10:51.367684       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:01.268595    2404 command_runner.go:130] ! I0318 13:10:51.367956       1 main.go:223] Handling node with IPs: map[172.30.137.140:{}]
	I0318 13:11:01.268646    2404 command_runner.go:130] ! I0318 13:10:51.368281       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.3.0/24] 
	I0318 13:11:01.271026    2404 logs.go:123] Gathering logs for Docker ...
	I0318 13:11:01.271026    2404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 13:11:01.302972    2404 command_runner.go:130] > Mar 18 13:08:19 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0318 13:11:01.303102    2404 command_runner.go:130] > Mar 18 13:08:19 minikube cri-dockerd[222]: time="2024-03-18T13:08:19Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0318 13:11:01.303241    2404 command_runner.go:130] > Mar 18 13:08:19 minikube cri-dockerd[222]: time="2024-03-18T13:08:19Z" level=info msg="Start docker client with request timeout 0s"
	I0318 13:11:01.303429    2404 command_runner.go:130] > Mar 18 13:08:19 minikube cri-dockerd[222]: time="2024-03-18T13:08:19Z" level=fatal msg="failed to get docker version: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0318 13:11:01.303429    2404 command_runner.go:130] > Mar 18 13:08:19 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0318 13:11:01.303429    2404 command_runner.go:130] > Mar 18 13:08:19 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0318 13:11:01.303429    2404 command_runner.go:130] > Mar 18 13:08:19 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0318 13:11:01.303429    2404 command_runner.go:130] > Mar 18 13:08:22 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 1.
	I0318 13:11:01.303429    2404 command_runner.go:130] > Mar 18 13:08:22 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0318 13:11:01.303429    2404 command_runner.go:130] > Mar 18 13:08:22 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0318 13:11:01.303429    2404 command_runner.go:130] > Mar 18 13:08:22 minikube cri-dockerd[412]: time="2024-03-18T13:08:22Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0318 13:11:01.303429    2404 command_runner.go:130] > Mar 18 13:08:22 minikube cri-dockerd[412]: time="2024-03-18T13:08:22Z" level=info msg="Start docker client with request timeout 0s"
	I0318 13:11:01.303429    2404 command_runner.go:130] > Mar 18 13:08:22 minikube cri-dockerd[412]: time="2024-03-18T13:08:22Z" level=fatal msg="failed to get docker version: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0318 13:11:01.303429    2404 command_runner.go:130] > Mar 18 13:08:22 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0318 13:11:01.303429    2404 command_runner.go:130] > Mar 18 13:08:22 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0318 13:11:01.303429    2404 command_runner.go:130] > Mar 18 13:08:22 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0318 13:11:01.303429    2404 command_runner.go:130] > Mar 18 13:08:24 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 2.
	I0318 13:11:01.303429    2404 command_runner.go:130] > Mar 18 13:08:24 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0318 13:11:01.303429    2404 command_runner.go:130] > Mar 18 13:08:24 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0318 13:11:01.303429    2404 command_runner.go:130] > Mar 18 13:08:24 minikube cri-dockerd[434]: time="2024-03-18T13:08:24Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0318 13:11:01.303429    2404 command_runner.go:130] > Mar 18 13:08:24 minikube cri-dockerd[434]: time="2024-03-18T13:08:24Z" level=info msg="Start docker client with request timeout 0s"
	I0318 13:11:01.303429    2404 command_runner.go:130] > Mar 18 13:08:24 minikube cri-dockerd[434]: time="2024-03-18T13:08:24Z" level=fatal msg="failed to get docker version: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0318 13:11:01.303429    2404 command_runner.go:130] > Mar 18 13:08:24 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0318 13:11:01.303429    2404 command_runner.go:130] > Mar 18 13:08:24 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0318 13:11:01.303429    2404 command_runner.go:130] > Mar 18 13:08:24 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0318 13:11:01.303429    2404 command_runner.go:130] > Mar 18 13:08:26 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 3.
	I0318 13:11:01.303429    2404 command_runner.go:130] > Mar 18 13:08:26 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0318 13:11:01.303429    2404 command_runner.go:130] > Mar 18 13:08:26 minikube systemd[1]: cri-docker.service: Start request repeated too quickly.
	I0318 13:11:01.304003    2404 command_runner.go:130] > Mar 18 13:08:26 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0318 13:11:01.304205    2404 command_runner.go:130] > Mar 18 13:08:26 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0318 13:11:01.304205    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 systemd[1]: Starting Docker Application Container Engine...
	I0318 13:11:01.304205    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[662]: time="2024-03-18T13:09:08.926008208Z" level=info msg="Starting up"
	I0318 13:11:01.304205    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[662]: time="2024-03-18T13:09:08.927042019Z" level=info msg="containerd not running, starting managed containerd"
	I0318 13:11:01.304205    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[662]: time="2024-03-18T13:09:08.928263831Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=668
	I0318 13:11:01.304205    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.958180831Z" level=info msg="starting containerd" revision=dcf2847247e18caba8dce86522029642f60fe96b version=v1.7.14
	I0318 13:11:01.304205    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.981644866Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0318 13:11:01.304205    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.981729667Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0318 13:11:01.304205    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.981890169Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0318 13:11:01.304205    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.982007470Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0318 13:11:01.304205    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.982683977Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0318 13:11:01.304205    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.982866878Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0318 13:11:01.304205    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.983040880Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0318 13:11:01.304205    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.983180882Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0318 13:11:01.304205    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.983201082Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0318 13:11:01.304205    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.983210682Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0318 13:11:01.304205    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.983772288Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0318 13:11:01.304205    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.984603896Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0318 13:11:01.304205    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.987157222Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0318 13:11:01.304205    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.987245222Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0318 13:11:01.304820    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.987380024Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0318 13:11:01.304820    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.987459025Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0318 13:11:01.304820    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.988076231Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0318 13:11:01.304820    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.988215332Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0318 13:11:01.304820    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.988231932Z" level=info msg="metadata content store policy set" policy=shared
	I0318 13:11:01.304820    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.994386894Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0318 13:11:01.304820    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.994536096Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0318 13:11:01.304820    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.994574296Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0318 13:11:01.304820    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.994587696Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0318 13:11:01.304820    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.994605296Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0318 13:11:01.304820    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.994669597Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0318 13:11:01.304820    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.995239203Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0318 13:11:01.304820    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.995378304Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0318 13:11:01.304820    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.995441205Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0318 13:11:01.304820    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.995564406Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0318 13:11:01.304820    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.995751508Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0318 13:11:01.304820    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.995819808Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0318 13:11:01.304820    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.995841009Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0318 13:11:01.304820    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.995857509Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0318 13:11:01.304820    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.995870509Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0318 13:11:01.304820    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.995903509Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0318 13:11:01.304820    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.995925809Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0318 13:11:01.304820    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.995942710Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0318 13:11:01.304820    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.995963610Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0318 13:11:01.304820    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.995980410Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0318 13:11:01.304820    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.996091811Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0318 13:11:01.305379    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.996121511Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0318 13:11:01.305406    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.996134612Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0318 13:11:01.305406    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.996151212Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0318 13:11:01.305406    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.996165012Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0318 13:11:01.305406    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.996179412Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0318 13:11:01.305406    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.996194912Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0318 13:11:01.305406    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.996291913Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0318 13:11:01.305406    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.996404914Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0318 13:11:01.305406    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.996427114Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0318 13:11:01.305406    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.996445915Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0318 13:11:01.305406    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.996468515Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0318 13:11:01.305406    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.996497915Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0318 13:11:01.305406    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.996538416Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0318 13:11:01.305406    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.996560016Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0318 13:11:01.305406    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.997036721Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0318 13:11:01.305406    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.997287923Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	I0318 13:11:01.305406    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.997398924Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0318 13:11:01.305406    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.997518125Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	I0318 13:11:01.305406    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.998045931Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0318 13:11:01.305406    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.998612736Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0318 13:11:01.305406    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.998643637Z" level=info msg="NRI interface is disabled by configuration."
	I0318 13:11:01.306074    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.999395544Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0318 13:11:01.306156    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.999606346Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0318 13:11:01.306221    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.999683147Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0318 13:11:01.306221    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.999765648Z" level=info msg="containerd successfully booted in 0.044672s"
	I0318 13:11:01.306221    2404 command_runner.go:130] > Mar 18 13:09:09 multinode-894400 dockerd[662]: time="2024-03-18T13:09:09.982989696Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0318 13:11:01.306221    2404 command_runner.go:130] > Mar 18 13:09:10 multinode-894400 dockerd[662]: time="2024-03-18T13:09:10.138351976Z" level=info msg="Loading containers: start."
	I0318 13:11:01.306221    2404 command_runner.go:130] > Mar 18 13:09:10 multinode-894400 dockerd[662]: time="2024-03-18T13:09:10.545129368Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0318 13:11:01.306221    2404 command_runner.go:130] > Mar 18 13:09:10 multinode-894400 dockerd[662]: time="2024-03-18T13:09:10.626119356Z" level=info msg="Loading containers: done."
	I0318 13:11:01.306221    2404 command_runner.go:130] > Mar 18 13:09:10 multinode-894400 dockerd[662]: time="2024-03-18T13:09:10.653541890Z" level=info msg="Docker daemon" commit=061aa95 containerd-snapshotter=false storage-driver=overlay2 version=25.0.4
	I0318 13:11:01.306221    2404 command_runner.go:130] > Mar 18 13:09:10 multinode-894400 dockerd[662]: time="2024-03-18T13:09:10.654242899Z" level=info msg="Daemon has completed initialization"
	I0318 13:11:01.306221    2404 command_runner.go:130] > Mar 18 13:09:10 multinode-894400 dockerd[662]: time="2024-03-18T13:09:10.702026381Z" level=info msg="API listen on /var/run/docker.sock"
	I0318 13:11:01.306221    2404 command_runner.go:130] > Mar 18 13:09:10 multinode-894400 systemd[1]: Started Docker Application Container Engine.
	I0318 13:11:01.306221    2404 command_runner.go:130] > Mar 18 13:09:10 multinode-894400 dockerd[662]: time="2024-03-18T13:09:10.704980317Z" level=info msg="API listen on [::]:2376"
	I0318 13:11:01.306221    2404 command_runner.go:130] > Mar 18 13:09:35 multinode-894400 systemd[1]: Stopping Docker Application Container Engine...
	I0318 13:11:01.306221    2404 command_runner.go:130] > Mar 18 13:09:35 multinode-894400 dockerd[662]: time="2024-03-18T13:09:35.118112316Z" level=info msg="Processing signal 'terminated'"
	I0318 13:11:01.306221    2404 command_runner.go:130] > Mar 18 13:09:35 multinode-894400 dockerd[662]: time="2024-03-18T13:09:35.120561724Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0318 13:11:01.306221    2404 command_runner.go:130] > Mar 18 13:09:35 multinode-894400 dockerd[662]: time="2024-03-18T13:09:35.120708425Z" level=info msg="Daemon shutdown complete"
	I0318 13:11:01.306221    2404 command_runner.go:130] > Mar 18 13:09:35 multinode-894400 dockerd[662]: time="2024-03-18T13:09:35.120817525Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0318 13:11:01.306221    2404 command_runner.go:130] > Mar 18 13:09:35 multinode-894400 dockerd[662]: time="2024-03-18T13:09:35.120965826Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0318 13:11:01.306221    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 systemd[1]: docker.service: Deactivated successfully.
	I0318 13:11:01.306221    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 systemd[1]: Stopped Docker Application Container Engine.
	I0318 13:11:01.306221    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 systemd[1]: Starting Docker Application Container Engine...
	I0318 13:11:01.306221    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1052]: time="2024-03-18T13:09:36.188961030Z" level=info msg="Starting up"
	I0318 13:11:01.306221    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1052]: time="2024-03-18T13:09:36.190214934Z" level=info msg="containerd not running, starting managed containerd"
	I0318 13:11:01.306221    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1052]: time="2024-03-18T13:09:36.191301438Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1058
	I0318 13:11:01.306221    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.220111635Z" level=info msg="starting containerd" revision=dcf2847247e18caba8dce86522029642f60fe96b version=v1.7.14
	I0318 13:11:01.306221    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.244480717Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0318 13:11:01.306221    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.244510717Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0318 13:11:01.306221    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.244539917Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0318 13:11:01.306776    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.244552117Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0318 13:11:01.306776    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.244588817Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0318 13:11:01.306776    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.244601217Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0318 13:11:01.306914    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.244707818Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0318 13:11:01.306942    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.244791318Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0318 13:11:01.306942    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.244809418Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0318 13:11:01.306942    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.244818018Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0318 13:11:01.306942    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.244838218Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0318 13:11:01.306942    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.244975219Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0318 13:11:01.306942    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.248195830Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0318 13:11:01.306942    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.248302930Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0318 13:11:01.306942    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.248446530Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0318 13:11:01.306942    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.248548631Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0318 13:11:01.306942    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.248576331Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0318 13:11:01.306942    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.248593831Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0318 13:11:01.306942    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.248604331Z" level=info msg="metadata content store policy set" policy=shared
	I0318 13:11:01.306942    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.249888435Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0318 13:11:01.306942    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.249971436Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0318 13:11:01.306942    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.250624738Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0318 13:11:01.306942    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.250745538Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0318 13:11:01.306942    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.250859739Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0318 13:11:01.306942    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.251093339Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0318 13:11:01.306942    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.252590644Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0318 13:11:01.306942    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.252685145Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0318 13:11:01.306942    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.252703545Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0318 13:11:01.306942    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.252722945Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0318 13:11:01.306942    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.252736845Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0318 13:11:01.306942    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.252749745Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0318 13:11:01.306942    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.252793045Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0318 13:11:01.306942    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.252998846Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0318 13:11:01.307496    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.253020946Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0318 13:11:01.307496    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.253065546Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0318 13:11:01.307496    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.253080846Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0318 13:11:01.307669    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.253090746Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0318 13:11:01.307669    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.253177146Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0318 13:11:01.307669    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.253201547Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0318 13:11:01.307669    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.253215147Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0318 13:11:01.307669    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.253229847Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0318 13:11:01.307669    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.253243047Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0318 13:11:01.307669    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.253257847Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0318 13:11:01.307669    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.253270347Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0318 13:11:01.307669    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.253284147Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0318 13:11:01.307669    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.253297547Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0318 13:11:01.307669    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.253313047Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0318 13:11:01.307669    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.253331047Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0318 13:11:01.307669    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.253344647Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0318 13:11:01.307669    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.253357947Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0318 13:11:01.307669    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.253374747Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0318 13:11:01.307669    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.253395147Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0318 13:11:01.307669    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.253407847Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0318 13:11:01.307669    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.253420947Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0318 13:11:01.307669    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.253503448Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0318 13:11:01.307669    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.253519848Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	I0318 13:11:01.307669    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.253532848Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0318 13:11:01.307669    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.253542748Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	I0318 13:11:01.307669    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.253613548Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0318 13:11:01.307669    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.253652648Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0318 13:11:01.307669    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.253668048Z" level=info msg="NRI interface is disabled by configuration."
	I0318 13:11:01.307669    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.254026949Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0318 13:11:01.308262    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.254474051Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0318 13:11:01.308262    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.254684152Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0318 13:11:01.308262    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.254775452Z" level=info msg="containerd successfully booted in 0.035926s"
	I0318 13:11:01.308262    2404 command_runner.go:130] > Mar 18 13:09:37 multinode-894400 dockerd[1052]: time="2024-03-18T13:09:37.234846559Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0318 13:11:01.308262    2404 command_runner.go:130] > Mar 18 13:09:37 multinode-894400 dockerd[1052]: time="2024-03-18T13:09:37.265734263Z" level=info msg="Loading containers: start."
	I0318 13:11:01.308262    2404 command_runner.go:130] > Mar 18 13:09:37 multinode-894400 dockerd[1052]: time="2024-03-18T13:09:37.543045299Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0318 13:11:01.308262    2404 command_runner.go:130] > Mar 18 13:09:37 multinode-894400 dockerd[1052]: time="2024-03-18T13:09:37.620368360Z" level=info msg="Loading containers: done."
	I0318 13:11:01.308262    2404 command_runner.go:130] > Mar 18 13:09:37 multinode-894400 dockerd[1052]: time="2024-03-18T13:09:37.642056833Z" level=info msg="Docker daemon" commit=061aa95 containerd-snapshotter=false storage-driver=overlay2 version=25.0.4
	I0318 13:11:01.308262    2404 command_runner.go:130] > Mar 18 13:09:37 multinode-894400 dockerd[1052]: time="2024-03-18T13:09:37.642227734Z" level=info msg="Daemon has completed initialization"
	I0318 13:11:01.308262    2404 command_runner.go:130] > Mar 18 13:09:37 multinode-894400 dockerd[1052]: time="2024-03-18T13:09:37.686175082Z" level=info msg="API listen on /var/run/docker.sock"
	I0318 13:11:01.308262    2404 command_runner.go:130] > Mar 18 13:09:37 multinode-894400 systemd[1]: Started Docker Application Container Engine.
	I0318 13:11:01.308262    2404 command_runner.go:130] > Mar 18 13:09:37 multinode-894400 dockerd[1052]: time="2024-03-18T13:09:37.687135485Z" level=info msg="API listen on [::]:2376"
	I0318 13:11:01.308262    2404 command_runner.go:130] > Mar 18 13:09:38 multinode-894400 systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0318 13:11:01.308262    2404 command_runner.go:130] > Mar 18 13:09:38 multinode-894400 cri-dockerd[1278]: time="2024-03-18T13:09:38Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0318 13:11:01.308262    2404 command_runner.go:130] > Mar 18 13:09:38 multinode-894400 cri-dockerd[1278]: time="2024-03-18T13:09:38Z" level=info msg="Start docker client with request timeout 0s"
	I0318 13:11:01.308262    2404 command_runner.go:130] > Mar 18 13:09:38 multinode-894400 cri-dockerd[1278]: time="2024-03-18T13:09:38Z" level=info msg="Hairpin mode is set to hairpin-veth"
	I0318 13:11:01.308262    2404 command_runner.go:130] > Mar 18 13:09:38 multinode-894400 cri-dockerd[1278]: time="2024-03-18T13:09:38Z" level=info msg="Loaded network plugin cni"
	I0318 13:11:01.308262    2404 command_runner.go:130] > Mar 18 13:09:38 multinode-894400 cri-dockerd[1278]: time="2024-03-18T13:09:38Z" level=info msg="Docker cri networking managed by network plugin cni"
	I0318 13:11:01.308262    2404 command_runner.go:130] > Mar 18 13:09:38 multinode-894400 cri-dockerd[1278]: time="2024-03-18T13:09:38Z" level=info msg="Docker Info: &{ID:5695bce5-a75b-48a7-87b1-d9b6b787473a Containers:18 ContainersRunning:0 ContainersPaused:0 ContainersStopped:18 Images:10 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:[] Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:[] Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6tables:true Debug:false NFd:26 OomKillDisable:false NGoroutines:52 SystemTime:2024-03-18T13:09:38.671342607Z LoggingDriver:json-file CgroupDriver:cgroupfs CgroupVersion:2 NEventsListener:0 KernelVersion:5.10.207 OperatingSystem:Buildroot 2023.02.9 OSVersion:2023.02.9 OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:0xc00034fe30 NCPU:2 MemTotal:2216210432 GenericResources:[] DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:multinode-894400 Labels:[provider=hyperv] ExperimentalBuild:false ServerVersion:25.0.4 ClusterStore: ClusterAdvertise: Runtimes:map[io.containerd.runc.v2:{Path:runc Args:[] Shim:<nil>} runc:{Path:runc Args:[] Shim:<nil>}] DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:[] Nodes:0 Managers:0 Cluster:<nil> Warnings:[]} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dcf2847247e18caba8dce86522029642f60fe96b Expected:dcf2847247e18caba8dce86522029642f60fe96b} RuncCommit:{ID:51d5e94601ceffbbd85688df1c928ecccbfa4685 Expected:51d5e94601ceffbbd85688df1c928ecccbfa4685} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=builtin name=cgroupns] ProductLicense:Community Engine DefaultAddressPools:[] Warnings:[]}"
	I0318 13:11:01.308262    2404 command_runner.go:130] > Mar 18 13:09:38 multinode-894400 cri-dockerd[1278]: time="2024-03-18T13:09:38Z" level=info msg="Setting cgroupDriver cgroupfs"
	I0318 13:11:01.308262    2404 command_runner.go:130] > Mar 18 13:09:38 multinode-894400 cri-dockerd[1278]: time="2024-03-18T13:09:38Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	I0318 13:11:01.308262    2404 command_runner.go:130] > Mar 18 13:09:38 multinode-894400 cri-dockerd[1278]: time="2024-03-18T13:09:38Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	I0318 13:11:01.308262    2404 command_runner.go:130] > Mar 18 13:09:38 multinode-894400 cri-dockerd[1278]: time="2024-03-18T13:09:38Z" level=info msg="Start cri-dockerd grpc backend"
	I0318 13:11:01.308816    2404 command_runner.go:130] > Mar 18 13:09:38 multinode-894400 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	I0318 13:11:01.308816    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 cri-dockerd[1278]: time="2024-03-18T13:09:43Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-5dd5756b68-456tm_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"d001e299e996b74f090c73f4e98605ef0d323a96826d23424e724ca8e7fe466a\""
	I0318 13:11:01.308816    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 cri-dockerd[1278]: time="2024-03-18T13:09:43Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"busybox-5b5d89c9d6-c2997_default\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"a23c1189be7c39f22c8a59ba41b198519894c9783ecad8dcda79fb8a76f05254\""
	I0318 13:11:01.308816    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:43.791205184Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 13:11:01.308816    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:43.791356085Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 13:11:01.308816    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:43.791396985Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:11:01.308816    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:43.791577685Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:11:01.308816    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:43.838312843Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 13:11:01.308816    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:43.838494344Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 13:11:01.308816    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:43.838510044Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:11:01.308816    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:43.838727044Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:11:01.308816    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:43.951016023Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 13:11:01.308816    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:43.951141424Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 13:11:01.308816    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:43.951152624Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:11:01.308816    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:43.951369125Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:11:01.308816    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 cri-dockerd[1278]: time="2024-03-18T13:09:43Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/066206d4c52cb784fe7c2001b5e196c6e3521560c412808e8d9ddf742aa008e4/resolv.conf as [nameserver 172.30.128.1]"
	I0318 13:11:01.308816    2404 command_runner.go:130] > Mar 18 13:09:44 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:44.020194457Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 13:11:01.308816    2404 command_runner.go:130] > Mar 18 13:09:44 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:44.020684858Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 13:11:01.308816    2404 command_runner.go:130] > Mar 18 13:09:44 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:44.023241167Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:11:01.308816    2404 command_runner.go:130] > Mar 18 13:09:44 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:44.023675469Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:11:01.309332    2404 command_runner.go:130] > Mar 18 13:09:44 multinode-894400 cri-dockerd[1278]: time="2024-03-18T13:09:44Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/bc7236a19957e321c1961c944824f2b4624bd7a289ab4ecefe33a08d4af88e2b/resolv.conf as [nameserver 172.30.128.1]"
	I0318 13:11:01.309389    2404 command_runner.go:130] > Mar 18 13:09:44 multinode-894400 cri-dockerd[1278]: time="2024-03-18T13:09:44Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/6fb3325d3c1005ffbbbfe7b136924ed5ff0c71db51f79a50f7179c108c238d47/resolv.conf as [nameserver 172.30.128.1]"
	I0318 13:11:01.309389    2404 command_runner.go:130] > Mar 18 13:09:44 multinode-894400 cri-dockerd[1278]: time="2024-03-18T13:09:44Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/354f3c44a34fcbd9ca7826066f2b4c40b3a6551aa632389815c5d24e85e0472b/resolv.conf as [nameserver 172.30.128.1]"
	I0318 13:11:01.309389    2404 command_runner.go:130] > Mar 18 13:09:44 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:44.396374926Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 13:11:01.310115    2404 command_runner.go:130] > Mar 18 13:09:44 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:44.396436126Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 13:11:01.310432    2404 command_runner.go:130] > Mar 18 13:09:44 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:44.396447326Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:11:01.310624    2404 command_runner.go:130] > Mar 18 13:09:44 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:44.396626927Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:11:01.310686    2404 command_runner.go:130] > Mar 18 13:09:44 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:44.467642467Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 13:11:01.310686    2404 command_runner.go:130] > Mar 18 13:09:44 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:44.467879868Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 13:11:01.310686    2404 command_runner.go:130] > Mar 18 13:09:44 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:44.468180469Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:11:01.310686    2404 command_runner.go:130] > Mar 18 13:09:44 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:44.468559970Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:11:01.310686    2404 command_runner.go:130] > Mar 18 13:09:44 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:44.476573097Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 13:11:01.310686    2404 command_runner.go:130] > Mar 18 13:09:44 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:44.476618697Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 13:11:01.310686    2404 command_runner.go:130] > Mar 18 13:09:44 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:44.476631197Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:11:01.310686    2404 command_runner.go:130] > Mar 18 13:09:44 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:44.476702797Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:11:01.310686    2404 command_runner.go:130] > Mar 18 13:09:44 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:44.482324416Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 13:11:01.310686    2404 command_runner.go:130] > Mar 18 13:09:44 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:44.482501517Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 13:11:01.310686    2404 command_runner.go:130] > Mar 18 13:09:44 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:44.482648417Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:11:01.310686    2404 command_runner.go:130] > Mar 18 13:09:44 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:44.482918618Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:11:01.310686    2404 command_runner.go:130] > Mar 18 13:09:48 multinode-894400 cri-dockerd[1278]: time="2024-03-18T13:09:48Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	I0318 13:11:01.310686    2404 command_runner.go:130] > Mar 18 13:09:49 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:49.545677603Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 13:11:01.310686    2404 command_runner.go:130] > Mar 18 13:09:49 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:49.548609313Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 13:11:01.310686    2404 command_runner.go:130] > Mar 18 13:09:49 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:49.548646013Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:11:01.310686    2404 command_runner.go:130] > Mar 18 13:09:49 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:49.549168715Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:11:01.310686    2404 command_runner.go:130] > Mar 18 13:09:49 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:49.592129660Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 13:11:01.310686    2404 command_runner.go:130] > Mar 18 13:09:49 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:49.592185160Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 13:11:01.310686    2404 command_runner.go:130] > Mar 18 13:09:49 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:49.592195760Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:11:01.310686    2404 command_runner.go:130] > Mar 18 13:09:49 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:49.592280460Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:11:01.311206    2404 command_runner.go:130] > Mar 18 13:09:49 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:49.615117337Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 13:11:01.311206    2404 command_runner.go:130] > Mar 18 13:09:49 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:49.615393238Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 13:11:01.311206    2404 command_runner.go:130] > Mar 18 13:09:49 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:49.615610139Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:11:01.311206    2404 command_runner.go:130] > Mar 18 13:09:49 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:49.621669759Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:11:01.311206    2404 command_runner.go:130] > Mar 18 13:09:49 multinode-894400 cri-dockerd[1278]: time="2024-03-18T13:09:49Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a9f21749669fe37a2d5bee9e7f45bf84eca6ae55870de8ac18bc774db7f81643/resolv.conf as [nameserver 172.30.128.1]"
	I0318 13:11:01.311206    2404 command_runner.go:130] > Mar 18 13:09:49 multinode-894400 cri-dockerd[1278]: time="2024-03-18T13:09:49Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/41035eff3b7db97e56ba25be141bf644acea4ddcb9d92ba3b421e6fa855a09af/resolv.conf as [nameserver 172.30.128.1]"
	I0318 13:11:01.311717    2404 command_runner.go:130] > Mar 18 13:09:49 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:49.995795822Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 13:11:01.311822    2404 command_runner.go:130] > Mar 18 13:09:49 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:49.995895422Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 13:11:01.311899    2404 command_runner.go:130] > Mar 18 13:09:49 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:49.995916522Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:11:01.311899    2404 command_runner.go:130] > Mar 18 13:09:49 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:49.996021523Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:11:01.311899    2404 command_runner.go:130] > Mar 18 13:09:50 multinode-894400 cri-dockerd[1278]: time="2024-03-18T13:09:50Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/86d74dec812cfd4842de7473d45ff320bfb2283d09d2d05eee29e45cbde9f5e9/resolv.conf as [nameserver 172.30.128.1]"
	I0318 13:11:01.311899    2404 command_runner.go:130] > Mar 18 13:09:50 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:50.171141514Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 13:11:01.311899    2404 command_runner.go:130] > Mar 18 13:09:50 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:50.171335814Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 13:11:01.311899    2404 command_runner.go:130] > Mar 18 13:09:50 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:50.171461415Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:11:01.311899    2404 command_runner.go:130] > Mar 18 13:09:50 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:50.171764216Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:11:01.311899    2404 command_runner.go:130] > Mar 18 13:09:50 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:50.391481057Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 13:11:01.311899    2404 command_runner.go:130] > Mar 18 13:09:50 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:50.391826158Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 13:11:01.311899    2404 command_runner.go:130] > Mar 18 13:09:50 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:50.391990059Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:11:01.311899    2404 command_runner.go:130] > Mar 18 13:09:50 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:50.393600364Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:11:01.311899    2404 command_runner.go:130] > Mar 18 13:10:20 multinode-894400 dockerd[1052]: time="2024-03-18T13:10:20.550892922Z" level=info msg="ignoring event" container=46c0cf90d385f0a29de6e795bc03f2d1837ea1819ef2389992a26aaf77e63264 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0318 13:11:01.311899    2404 command_runner.go:130] > Mar 18 13:10:20 multinode-894400 dockerd[1058]: time="2024-03-18T13:10:20.551487227Z" level=info msg="shim disconnected" id=46c0cf90d385f0a29de6e795bc03f2d1837ea1819ef2389992a26aaf77e63264 namespace=moby
	I0318 13:11:01.311899    2404 command_runner.go:130] > Mar 18 13:10:20 multinode-894400 dockerd[1058]: time="2024-03-18T13:10:20.551627628Z" level=warning msg="cleaning up after shim disconnected" id=46c0cf90d385f0a29de6e795bc03f2d1837ea1819ef2389992a26aaf77e63264 namespace=moby
	I0318 13:11:01.311899    2404 command_runner.go:130] > Mar 18 13:10:20 multinode-894400 dockerd[1058]: time="2024-03-18T13:10:20.551639828Z" level=info msg="cleaning up dead shim" namespace=moby
	I0318 13:11:01.311899    2404 command_runner.go:130] > Mar 18 13:10:35 multinode-894400 dockerd[1058]: time="2024-03-18T13:10:35.200900512Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 13:11:01.311899    2404 command_runner.go:130] > Mar 18 13:10:35 multinode-894400 dockerd[1058]: time="2024-03-18T13:10:35.202882722Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 13:11:01.311899    2404 command_runner.go:130] > Mar 18 13:10:35 multinode-894400 dockerd[1058]: time="2024-03-18T13:10:35.203198024Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:11:01.311899    2404 command_runner.go:130] > Mar 18 13:10:35 multinode-894400 dockerd[1058]: time="2024-03-18T13:10:35.203763327Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:11:01.311899    2404 command_runner.go:130] > Mar 18 13:10:53 multinode-894400 dockerd[1058]: time="2024-03-18T13:10:53.250783392Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 13:11:01.311899    2404 command_runner.go:130] > Mar 18 13:10:53 multinode-894400 dockerd[1058]: time="2024-03-18T13:10:53.252016097Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 13:11:01.311899    2404 command_runner.go:130] > Mar 18 13:10:53 multinode-894400 dockerd[1058]: time="2024-03-18T13:10:53.252234698Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:11:01.312419    2404 command_runner.go:130] > Mar 18 13:10:53 multinode-894400 dockerd[1058]: time="2024-03-18T13:10:53.252566299Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:11:01.312419    2404 command_runner.go:130] > Mar 18 13:10:53 multinode-894400 dockerd[1058]: time="2024-03-18T13:10:53.259013124Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 13:11:01.312419    2404 command_runner.go:130] > Mar 18 13:10:53 multinode-894400 dockerd[1058]: time="2024-03-18T13:10:53.259187125Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 13:11:01.312591    2404 command_runner.go:130] > Mar 18 13:10:53 multinode-894400 dockerd[1058]: time="2024-03-18T13:10:53.259204725Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:11:01.312800    2404 command_runner.go:130] > Mar 18 13:10:53 multinode-894400 dockerd[1058]: time="2024-03-18T13:10:53.259319625Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:11:01.312940    2404 command_runner.go:130] > Mar 18 13:10:53 multinode-894400 cri-dockerd[1278]: time="2024-03-18T13:10:53Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/97583cc14f115cf8a4e90889b5f2beda90a81f97fd592e5e5acff8d35e305a59/resolv.conf as [nameserver 172.30.128.1]"
	I0318 13:11:01.312940    2404 command_runner.go:130] > Mar 18 13:10:53 multinode-894400 cri-dockerd[1278]: time="2024-03-18T13:10:53Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/e20878b8092c291820adeb66f1b491dcef85c0699c57800cced7d3530d2a07fb/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	I0318 13:11:01.312940    2404 command_runner.go:130] > Mar 18 13:10:53 multinode-894400 dockerd[1058]: time="2024-03-18T13:10:53.818847676Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 13:11:01.312940    2404 command_runner.go:130] > Mar 18 13:10:53 multinode-894400 dockerd[1058]: time="2024-03-18T13:10:53.818997976Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 13:11:01.312940    2404 command_runner.go:130] > Mar 18 13:10:53 multinode-894400 dockerd[1058]: time="2024-03-18T13:10:53.819021476Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:11:01.312940    2404 command_runner.go:130] > Mar 18 13:10:53 multinode-894400 dockerd[1058]: time="2024-03-18T13:10:53.819463578Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:11:01.312940    2404 command_runner.go:130] > Mar 18 13:10:53 multinode-894400 dockerd[1058]: time="2024-03-18T13:10:53.825706506Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 13:11:01.312940    2404 command_runner.go:130] > Mar 18 13:10:53 multinode-894400 dockerd[1058]: time="2024-03-18T13:10:53.825766006Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 13:11:01.312940    2404 command_runner.go:130] > Mar 18 13:10:53 multinode-894400 dockerd[1058]: time="2024-03-18T13:10:53.825780706Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:11:01.312940    2404 command_runner.go:130] > Mar 18 13:10:53 multinode-894400 dockerd[1058]: time="2024-03-18T13:10:53.825864707Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:11:01.312940    2404 command_runner.go:130] > Mar 18 13:10:56 multinode-894400 dockerd[1052]: 2024/03/18 13:10:56 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0318 13:11:01.312940    2404 command_runner.go:130] > Mar 18 13:10:56 multinode-894400 dockerd[1052]: 2024/03/18 13:10:56 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0318 13:11:01.312940    2404 command_runner.go:130] > Mar 18 13:10:57 multinode-894400 dockerd[1052]: 2024/03/18 13:10:57 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0318 13:11:01.312940    2404 command_runner.go:130] > Mar 18 13:10:57 multinode-894400 dockerd[1052]: 2024/03/18 13:10:57 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0318 13:11:01.312940    2404 command_runner.go:130] > Mar 18 13:10:57 multinode-894400 dockerd[1052]: 2024/03/18 13:10:57 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0318 13:11:01.312940    2404 command_runner.go:130] > Mar 18 13:10:57 multinode-894400 dockerd[1052]: 2024/03/18 13:10:57 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0318 13:11:01.312940    2404 command_runner.go:130] > Mar 18 13:10:57 multinode-894400 dockerd[1052]: 2024/03/18 13:10:57 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0318 13:11:01.312940    2404 command_runner.go:130] > Mar 18 13:10:57 multinode-894400 dockerd[1052]: 2024/03/18 13:10:57 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0318 13:11:01.313465    2404 command_runner.go:130] > Mar 18 13:10:57 multinode-894400 dockerd[1052]: 2024/03/18 13:10:57 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0318 13:11:01.313465    2404 command_runner.go:130] > Mar 18 13:10:57 multinode-894400 dockerd[1052]: 2024/03/18 13:10:57 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0318 13:11:01.313465    2404 command_runner.go:130] > Mar 18 13:10:57 multinode-894400 dockerd[1052]: 2024/03/18 13:10:57 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0318 13:11:01.313605    2404 command_runner.go:130] > Mar 18 13:10:57 multinode-894400 dockerd[1052]: 2024/03/18 13:10:57 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0318 13:11:01.313605    2404 command_runner.go:130] > Mar 18 13:11:00 multinode-894400 dockerd[1052]: 2024/03/18 13:11:00 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0318 13:11:01.313605    2404 command_runner.go:130] > Mar 18 13:11:00 multinode-894400 dockerd[1052]: 2024/03/18 13:11:00 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0318 13:11:01.313605    2404 command_runner.go:130] > Mar 18 13:11:00 multinode-894400 dockerd[1052]: 2024/03/18 13:11:00 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0318 13:11:01.313605    2404 command_runner.go:130] > Mar 18 13:11:00 multinode-894400 dockerd[1052]: 2024/03/18 13:11:00 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0318 13:11:01.313605    2404 command_runner.go:130] > Mar 18 13:11:00 multinode-894400 dockerd[1052]: 2024/03/18 13:11:00 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0318 13:11:01.313605    2404 command_runner.go:130] > Mar 18 13:11:01 multinode-894400 dockerd[1052]: 2024/03/18 13:11:01 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0318 13:11:01.313605    2404 command_runner.go:130] > Mar 18 13:11:01 multinode-894400 dockerd[1052]: 2024/03/18 13:11:01 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0318 13:11:01.313605    2404 command_runner.go:130] > Mar 18 13:11:01 multinode-894400 dockerd[1052]: 2024/03/18 13:11:01 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0318 13:11:01.313605    2404 command_runner.go:130] > Mar 18 13:11:01 multinode-894400 dockerd[1052]: 2024/03/18 13:11:01 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0318 13:11:01.313605    2404 command_runner.go:130] > Mar 18 13:11:01 multinode-894400 dockerd[1052]: 2024/03/18 13:11:01 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0318 13:11:01.313605    2404 command_runner.go:130] > Mar 18 13:11:01 multinode-894400 dockerd[1052]: 2024/03/18 13:11:01 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0318 13:11:01.344435    2404 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:11:01.344435    2404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 13:11:01.552318    2404 command_runner.go:130] > Name:               multinode-894400
	I0318 13:11:01.552442    2404 command_runner.go:130] > Roles:              control-plane
	I0318 13:11:01.552442    2404 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0318 13:11:01.552442    2404 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0318 13:11:01.552524    2404 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0318 13:11:01.552577    2404 command_runner.go:130] >                     kubernetes.io/hostname=multinode-894400
	I0318 13:11:01.552577    2404 command_runner.go:130] >                     kubernetes.io/os=linux
	I0318 13:11:01.552617    2404 command_runner.go:130] >                     minikube.k8s.io/commit=16bdcbec856cf730004e5bed78d1b7625f13388a
	I0318 13:11:01.552671    2404 command_runner.go:130] >                     minikube.k8s.io/name=multinode-894400
	I0318 13:11:01.552671    2404 command_runner.go:130] >                     minikube.k8s.io/primary=true
	I0318 13:11:01.552727    2404 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_03_18T12_47_29_0700
	I0318 13:11:01.552727    2404 command_runner.go:130] >                     minikube.k8s.io/version=v1.32.0
	I0318 13:11:01.552727    2404 command_runner.go:130] >                     node-role.kubernetes.io/control-plane=
	I0318 13:11:01.552788    2404 command_runner.go:130] >                     node.kubernetes.io/exclude-from-external-load-balancers=
	I0318 13:11:01.552788    2404 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0318 13:11:01.552788    2404 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0318 13:11:01.552855    2404 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0318 13:11:01.552855    2404 command_runner.go:130] > CreationTimestamp:  Mon, 18 Mar 2024 12:47:24 +0000
	I0318 13:11:01.552918    2404 command_runner.go:130] > Taints:             <none>
	I0318 13:11:01.552918    2404 command_runner.go:130] > Unschedulable:      false
	I0318 13:11:01.552918    2404 command_runner.go:130] > Lease:
	I0318 13:11:01.552981    2404 command_runner.go:130] >   HolderIdentity:  multinode-894400
	I0318 13:11:01.552981    2404 command_runner.go:130] >   AcquireTime:     <unset>
	I0318 13:11:01.552981    2404 command_runner.go:130] >   RenewTime:       Mon, 18 Mar 2024 13:11:00 +0000
	I0318 13:11:01.552981    2404 command_runner.go:130] > Conditions:
	I0318 13:11:01.553043    2404 command_runner.go:130] >   Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	I0318 13:11:01.553094    2404 command_runner.go:130] >   ----             ------  -----------------                 ------------------                ------                       -------
	I0318 13:11:01.553231    2404 command_runner.go:130] >   MemoryPressure   False   Mon, 18 Mar 2024 13:10:23 +0000   Mon, 18 Mar 2024 12:47:23 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	I0318 13:11:01.553231    2404 command_runner.go:130] >   DiskPressure     False   Mon, 18 Mar 2024 13:10:23 +0000   Mon, 18 Mar 2024 12:47:23 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	I0318 13:11:01.553287    2404 command_runner.go:130] >   PIDPressure      False   Mon, 18 Mar 2024 13:10:23 +0000   Mon, 18 Mar 2024 12:47:23 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	I0318 13:11:01.553338    2404 command_runner.go:130] >   Ready            True    Mon, 18 Mar 2024 13:10:23 +0000   Mon, 18 Mar 2024 13:10:23 +0000   KubeletReady                 kubelet is posting ready status
	I0318 13:11:01.553338    2404 command_runner.go:130] > Addresses:
	I0318 13:11:01.553415    2404 command_runner.go:130] >   InternalIP:  172.30.130.156
	I0318 13:11:01.553415    2404 command_runner.go:130] >   Hostname:    multinode-894400
	I0318 13:11:01.553466    2404 command_runner.go:130] > Capacity:
	I0318 13:11:01.553466    2404 command_runner.go:130] >   cpu:                2
	I0318 13:11:01.553466    2404 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0318 13:11:01.553466    2404 command_runner.go:130] >   hugepages-2Mi:      0
	I0318 13:11:01.553521    2404 command_runner.go:130] >   memory:             2164268Ki
	I0318 13:11:01.553571    2404 command_runner.go:130] >   pods:               110
	I0318 13:11:01.553571    2404 command_runner.go:130] > Allocatable:
	I0318 13:11:01.553571    2404 command_runner.go:130] >   cpu:                2
	I0318 13:11:01.553627    2404 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0318 13:11:01.553627    2404 command_runner.go:130] >   hugepages-2Mi:      0
	I0318 13:11:01.553627    2404 command_runner.go:130] >   memory:             2164268Ki
	I0318 13:11:01.553627    2404 command_runner.go:130] >   pods:               110
	I0318 13:11:01.553677    2404 command_runner.go:130] > System Info:
	I0318 13:11:01.553677    2404 command_runner.go:130] >   Machine ID:                 80e7b822d2e94d26a09acd4a1bac452b
	I0318 13:11:01.553732    2404 command_runner.go:130] >   System UUID:                5c78c013-e4e8-1041-99c8-95cd760ef34f
	I0318 13:11:01.553732    2404 command_runner.go:130] >   Boot ID:                    a334ae39-1c10-417c-93ad-d28546d7793f
	I0318 13:11:01.553782    2404 command_runner.go:130] >   Kernel Version:             5.10.207
	I0318 13:11:01.553782    2404 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0318 13:11:01.553782    2404 command_runner.go:130] >   Operating System:           linux
	I0318 13:11:01.553837    2404 command_runner.go:130] >   Architecture:               amd64
	I0318 13:11:01.553837    2404 command_runner.go:130] >   Container Runtime Version:  docker://25.0.4
	I0318 13:11:01.553887    2404 command_runner.go:130] >   Kubelet Version:            v1.28.4
	I0318 13:11:01.553887    2404 command_runner.go:130] >   Kube-Proxy Version:         v1.28.4
	I0318 13:11:01.553887    2404 command_runner.go:130] > PodCIDR:                      10.244.0.0/24
	I0318 13:11:01.553981    2404 command_runner.go:130] > PodCIDRs:                     10.244.0.0/24
	I0318 13:11:01.553981    2404 command_runner.go:130] > Non-terminated Pods:          (9 in total)
	I0318 13:11:01.554033    2404 command_runner.go:130] >   Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0318 13:11:01.554091    2404 command_runner.go:130] >   ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	I0318 13:11:01.554091    2404 command_runner.go:130] >   default                     busybox-5b5d89c9d6-c2997                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	I0318 13:11:01.554144    2404 command_runner.go:130] >   kube-system                 coredns-5dd5756b68-456tm                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     23m
	I0318 13:11:01.554144    2404 command_runner.go:130] >   kube-system                 etcd-multinode-894400                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         72s
	I0318 13:11:01.554203    2404 command_runner.go:130] >   kube-system                 kindnet-hhsxh                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      23m
	I0318 13:11:01.554255    2404 command_runner.go:130] >   kube-system                 kube-apiserver-multinode-894400             250m (12%)    0 (0%)      0 (0%)           0 (0%)         72s
	I0318 13:11:01.554315    2404 command_runner.go:130] >   kube-system                 kube-controller-manager-multinode-894400    200m (10%)    0 (0%)      0 (0%)           0 (0%)         23m
	I0318 13:11:01.554390    2404 command_runner.go:130] >   kube-system                 kube-proxy-mc5tv                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	I0318 13:11:01.554390    2404 command_runner.go:130] >   kube-system                 kube-scheduler-multinode-894400             100m (5%)     0 (0%)      0 (0%)           0 (0%)         23m
	I0318 13:11:01.554447    2404 command_runner.go:130] >   kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	I0318 13:11:01.554447    2404 command_runner.go:130] > Allocated resources:
	I0318 13:11:01.554447    2404 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0318 13:11:01.554524    2404 command_runner.go:130] >   Resource           Requests     Limits
	I0318 13:11:01.554524    2404 command_runner.go:130] >   --------           --------     ------
	I0318 13:11:01.554524    2404 command_runner.go:130] >   cpu                850m (42%)   100m (5%)
	I0318 13:11:01.554584    2404 command_runner.go:130] >   memory             220Mi (10%)  220Mi (10%)
	I0318 13:11:01.554584    2404 command_runner.go:130] >   ephemeral-storage  0 (0%)       0 (0%)
	I0318 13:11:01.554584    2404 command_runner.go:130] >   hugepages-2Mi      0 (0%)       0 (0%)
	I0318 13:11:01.554584    2404 command_runner.go:130] > Events:
	I0318 13:11:01.554679    2404 command_runner.go:130] >   Type    Reason                   Age                From             Message
	I0318 13:11:01.554679    2404 command_runner.go:130] >   ----    ------                   ----               ----             -------
	I0318 13:11:01.554679    2404 command_runner.go:130] >   Normal  Starting                 23m                kube-proxy       
	I0318 13:11:01.554729    2404 command_runner.go:130] >   Normal  Starting                 70s                kube-proxy       
	I0318 13:11:01.554729    2404 command_runner.go:130] >   Normal  NodeHasSufficientMemory  23m (x8 over 23m)  kubelet          Node multinode-894400 status is now: NodeHasSufficientMemory
	I0318 13:11:01.554779    2404 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    23m (x8 over 23m)  kubelet          Node multinode-894400 status is now: NodeHasNoDiskPressure
	I0318 13:11:01.554874    2404 command_runner.go:130] >   Normal  NodeHasSufficientPID     23m (x7 over 23m)  kubelet          Node multinode-894400 status is now: NodeHasSufficientPID
	I0318 13:11:01.554932    2404 command_runner.go:130] >   Normal  NodeAllocatableEnforced  23m                kubelet          Updated Node Allocatable limit across pods
	I0318 13:11:01.554932    2404 command_runner.go:130] >   Normal  NodeHasSufficientMemory  23m                kubelet          Node multinode-894400 status is now: NodeHasSufficientMemory
	I0318 13:11:01.554993    2404 command_runner.go:130] >   Normal  NodeAllocatableEnforced  23m                kubelet          Updated Node Allocatable limit across pods
	I0318 13:11:01.555052    2404 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    23m                kubelet          Node multinode-894400 status is now: NodeHasNoDiskPressure
	I0318 13:11:01.555104    2404 command_runner.go:130] >   Normal  NodeHasSufficientPID     23m                kubelet          Node multinode-894400 status is now: NodeHasSufficientPID
	I0318 13:11:01.555104    2404 command_runner.go:130] >   Normal  Starting                 23m                kubelet          Starting kubelet.
	I0318 13:11:01.555159    2404 command_runner.go:130] >   Normal  RegisteredNode           23m                node-controller  Node multinode-894400 event: Registered Node multinode-894400 in Controller
	I0318 13:11:01.555212    2404 command_runner.go:130] >   Normal  NodeReady                23m                kubelet          Node multinode-894400 status is now: NodeReady
	I0318 13:11:01.555212    2404 command_runner.go:130] >   Normal  Starting                 79s                kubelet          Starting kubelet.
	I0318 13:11:01.555287    2404 command_runner.go:130] >   Normal  NodeHasSufficientMemory  78s (x8 over 79s)  kubelet          Node multinode-894400 status is now: NodeHasSufficientMemory
	I0318 13:11:01.555322    2404 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    78s (x8 over 79s)  kubelet          Node multinode-894400 status is now: NodeHasNoDiskPressure
	I0318 13:11:01.555363    2404 command_runner.go:130] >   Normal  NodeHasSufficientPID     78s (x7 over 79s)  kubelet          Node multinode-894400 status is now: NodeHasSufficientPID
	I0318 13:11:01.555415    2404 command_runner.go:130] >   Normal  NodeAllocatableEnforced  78s                kubelet          Updated Node Allocatable limit across pods
	I0318 13:11:01.555471    2404 command_runner.go:130] >   Normal  RegisteredNode           60s                node-controller  Node multinode-894400 event: Registered Node multinode-894400 in Controller
	I0318 13:11:01.555471    2404 command_runner.go:130] > Name:               multinode-894400-m02
	I0318 13:11:01.555523    2404 command_runner.go:130] > Roles:              <none>
	I0318 13:11:01.555523    2404 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0318 13:11:01.555579    2404 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0318 13:11:01.555629    2404 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0318 13:11:01.555629    2404 command_runner.go:130] >                     kubernetes.io/hostname=multinode-894400-m02
	I0318 13:11:01.555691    2404 command_runner.go:130] >                     kubernetes.io/os=linux
	I0318 13:11:01.555691    2404 command_runner.go:130] >                     minikube.k8s.io/commit=16bdcbec856cf730004e5bed78d1b7625f13388a
	I0318 13:11:01.555691    2404 command_runner.go:130] >                     minikube.k8s.io/name=multinode-894400
	I0318 13:11:01.555753    2404 command_runner.go:130] >                     minikube.k8s.io/primary=false
	I0318 13:11:01.555753    2404 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_03_18T12_50_35_0700
	I0318 13:11:01.555808    2404 command_runner.go:130] >                     minikube.k8s.io/version=v1.32.0
	I0318 13:11:01.555808    2404 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0318 13:11:01.555861    2404 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0318 13:11:01.555917    2404 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0318 13:11:01.555980    2404 command_runner.go:130] > CreationTimestamp:  Mon, 18 Mar 2024 12:50:34 +0000
	I0318 13:11:01.555980    2404 command_runner.go:130] > Taints:             node.kubernetes.io/unreachable:NoExecute
	I0318 13:11:01.556034    2404 command_runner.go:130] >                     node.kubernetes.io/unreachable:NoSchedule
	I0318 13:11:01.556034    2404 command_runner.go:130] > Unschedulable:      false
	I0318 13:11:01.556085    2404 command_runner.go:130] > Lease:
	I0318 13:11:01.556085    2404 command_runner.go:130] >   HolderIdentity:  multinode-894400-m02
	I0318 13:11:01.556085    2404 command_runner.go:130] >   AcquireTime:     <unset>
	I0318 13:11:01.556142    2404 command_runner.go:130] >   RenewTime:       Mon, 18 Mar 2024 13:06:44 +0000
	I0318 13:11:01.556142    2404 command_runner.go:130] > Conditions:
	I0318 13:11:01.556195    2404 command_runner.go:130] >   Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	I0318 13:11:01.556250    2404 command_runner.go:130] >   ----             ------    -----------------                 ------------------                ------              -------
	I0318 13:11:01.556250    2404 command_runner.go:130] >   MemoryPressure   Unknown   Mon, 18 Mar 2024 13:01:49 +0000   Mon, 18 Mar 2024 13:10:41 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0318 13:11:01.556301    2404 command_runner.go:130] >   DiskPressure     Unknown   Mon, 18 Mar 2024 13:01:49 +0000   Mon, 18 Mar 2024 13:10:41 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0318 13:11:01.556357    2404 command_runner.go:130] >   PIDPressure      Unknown   Mon, 18 Mar 2024 13:01:49 +0000   Mon, 18 Mar 2024 13:10:41 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0318 13:11:01.556357    2404 command_runner.go:130] >   Ready            Unknown   Mon, 18 Mar 2024 13:01:49 +0000   Mon, 18 Mar 2024 13:10:41 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0318 13:11:01.556408    2404 command_runner.go:130] > Addresses:
	I0318 13:11:01.556408    2404 command_runner.go:130] >   InternalIP:  172.30.140.66
	I0318 13:11:01.556464    2404 command_runner.go:130] >   Hostname:    multinode-894400-m02
	I0318 13:11:01.556464    2404 command_runner.go:130] > Capacity:
	I0318 13:11:01.556464    2404 command_runner.go:130] >   cpu:                2
	I0318 13:11:01.556515    2404 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0318 13:11:01.556515    2404 command_runner.go:130] >   hugepages-2Mi:      0
	I0318 13:11:01.556574    2404 command_runner.go:130] >   memory:             2164268Ki
	I0318 13:11:01.556574    2404 command_runner.go:130] >   pods:               110
	I0318 13:11:01.556574    2404 command_runner.go:130] > Allocatable:
	I0318 13:11:01.556627    2404 command_runner.go:130] >   cpu:                2
	I0318 13:11:01.556627    2404 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0318 13:11:01.556627    2404 command_runner.go:130] >   hugepages-2Mi:      0
	I0318 13:11:01.556686    2404 command_runner.go:130] >   memory:             2164268Ki
	I0318 13:11:01.556686    2404 command_runner.go:130] >   pods:               110
	I0318 13:11:01.556686    2404 command_runner.go:130] > System Info:
	I0318 13:11:01.556738    2404 command_runner.go:130] >   Machine ID:                 209753fe156d43e08ee40e815598ed17
	I0318 13:11:01.556738    2404 command_runner.go:130] >   System UUID:                fa19d46a-a3a2-9249-8c21-1edbfcedff01
	I0318 13:11:01.556797    2404 command_runner.go:130] >   Boot ID:                    0e15b7cf-29d6-40f7-ad78-fb04b10bea99
	I0318 13:11:01.556797    2404 command_runner.go:130] >   Kernel Version:             5.10.207
	I0318 13:11:01.556853    2404 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0318 13:11:01.556853    2404 command_runner.go:130] >   Operating System:           linux
	I0318 13:11:01.556853    2404 command_runner.go:130] >   Architecture:               amd64
	I0318 13:11:01.556911    2404 command_runner.go:130] >   Container Runtime Version:  docker://25.0.4
	I0318 13:11:01.556911    2404 command_runner.go:130] >   Kubelet Version:            v1.28.4
	I0318 13:11:01.556961    2404 command_runner.go:130] >   Kube-Proxy Version:         v1.28.4
	I0318 13:11:01.556961    2404 command_runner.go:130] > PodCIDR:                      10.244.1.0/24
	I0318 13:11:01.557018    2404 command_runner.go:130] > PodCIDRs:                     10.244.1.0/24
	I0318 13:11:01.557018    2404 command_runner.go:130] > Non-terminated Pods:          (3 in total)
	I0318 13:11:01.557070    2404 command_runner.go:130] >   Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0318 13:11:01.557070    2404 command_runner.go:130] >   ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	I0318 13:11:01.557128    2404 command_runner.go:130] >   default                     busybox-5b5d89c9d6-8btgf    0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	I0318 13:11:01.557180    2404 command_runner.go:130] >   kube-system                 kindnet-k5lpg               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      20m
	I0318 13:11:01.557180    2404 command_runner.go:130] >   kube-system                 kube-proxy-8bdmn            0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	I0318 13:11:01.557237    2404 command_runner.go:130] > Allocated resources:
	I0318 13:11:01.557237    2404 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0318 13:11:01.557288    2404 command_runner.go:130] >   Resource           Requests   Limits
	I0318 13:11:01.557344    2404 command_runner.go:130] >   --------           --------   ------
	I0318 13:11:01.557344    2404 command_runner.go:130] >   cpu                100m (5%)  100m (5%)
	I0318 13:11:01.557397    2404 command_runner.go:130] >   memory             50Mi (2%)  50Mi (2%)
	I0318 13:11:01.557397    2404 command_runner.go:130] >   ephemeral-storage  0 (0%)     0 (0%)
	I0318 13:11:01.557397    2404 command_runner.go:130] >   hugepages-2Mi      0 (0%)     0 (0%)
	I0318 13:11:01.557397    2404 command_runner.go:130] > Events:
	I0318 13:11:01.557464    2404 command_runner.go:130] >   Type    Reason                   Age                From             Message
	I0318 13:11:01.557464    2404 command_runner.go:130] >   ----    ------                   ----               ----             -------
	I0318 13:11:01.557515    2404 command_runner.go:130] >   Normal  Starting                 20m                kube-proxy       
	I0318 13:11:01.557573    2404 command_runner.go:130] >   Normal  NodeHasSufficientMemory  20m (x5 over 20m)  kubelet          Node multinode-894400-m02 status is now: NodeHasSufficientMemory
	I0318 13:11:01.557573    2404 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    20m (x5 over 20m)  kubelet          Node multinode-894400-m02 status is now: NodeHasNoDiskPressure
	I0318 13:11:01.557625    2404 command_runner.go:130] >   Normal  NodeHasSufficientPID     20m (x5 over 20m)  kubelet          Node multinode-894400-m02 status is now: NodeHasSufficientPID
	I0318 13:11:01.557682    2404 command_runner.go:130] >   Normal  RegisteredNode           20m                node-controller  Node multinode-894400-m02 event: Registered Node multinode-894400-m02 in Controller
	I0318 13:11:01.557732    2404 command_runner.go:130] >   Normal  NodeReady                20m                kubelet          Node multinode-894400-m02 status is now: NodeReady
	I0318 13:11:01.557732    2404 command_runner.go:130] >   Normal  RegisteredNode           60s                node-controller  Node multinode-894400-m02 event: Registered Node multinode-894400-m02 in Controller
	I0318 13:11:01.557788    2404 command_runner.go:130] >   Normal  NodeNotReady             20s                node-controller  Node multinode-894400-m02 status is now: NodeNotReady
	I0318 13:11:01.557839    2404 command_runner.go:130] > Name:               multinode-894400-m03
	I0318 13:11:01.557895    2404 command_runner.go:130] > Roles:              <none>
	I0318 13:11:01.557895    2404 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0318 13:11:01.557963    2404 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0318 13:11:01.557963    2404 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0318 13:11:01.558023    2404 command_runner.go:130] >                     kubernetes.io/hostname=multinode-894400-m03
	I0318 13:11:01.558023    2404 command_runner.go:130] >                     kubernetes.io/os=linux
	I0318 13:11:01.558077    2404 command_runner.go:130] >                     minikube.k8s.io/commit=16bdcbec856cf730004e5bed78d1b7625f13388a
	I0318 13:11:01.558077    2404 command_runner.go:130] >                     minikube.k8s.io/name=multinode-894400
	I0318 13:11:01.558129    2404 command_runner.go:130] >                     minikube.k8s.io/primary=false
	I0318 13:11:01.558129    2404 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_03_18T13_05_26_0700
	I0318 13:11:01.558182    2404 command_runner.go:130] >                     minikube.k8s.io/version=v1.32.0
	I0318 13:11:01.558294    2404 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0318 13:11:01.558294    2404 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0318 13:11:01.558376    2404 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0318 13:11:01.558376    2404 command_runner.go:130] > CreationTimestamp:  Mon, 18 Mar 2024 13:05:25 +0000
	I0318 13:11:01.558434    2404 command_runner.go:130] > Taints:             node.kubernetes.io/unreachable:NoExecute
	I0318 13:11:01.558434    2404 command_runner.go:130] >                     node.kubernetes.io/unreachable:NoSchedule
	I0318 13:11:01.558434    2404 command_runner.go:130] > Unschedulable:      false
	I0318 13:11:01.558434    2404 command_runner.go:130] > Lease:
	I0318 13:11:01.558434    2404 command_runner.go:130] >   HolderIdentity:  multinode-894400-m03
	I0318 13:11:01.558434    2404 command_runner.go:130] >   AcquireTime:     <unset>
	I0318 13:11:01.558434    2404 command_runner.go:130] >   RenewTime:       Mon, 18 Mar 2024 13:06:27 +0000
	I0318 13:11:01.558434    2404 command_runner.go:130] > Conditions:
	I0318 13:11:01.558434    2404 command_runner.go:130] >   Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	I0318 13:11:01.558434    2404 command_runner.go:130] >   ----             ------    -----------------                 ------------------                ------              -------
	I0318 13:11:01.558434    2404 command_runner.go:130] >   MemoryPressure   Unknown   Mon, 18 Mar 2024 13:05:34 +0000   Mon, 18 Mar 2024 13:07:11 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0318 13:11:01.558434    2404 command_runner.go:130] >   DiskPressure     Unknown   Mon, 18 Mar 2024 13:05:34 +0000   Mon, 18 Mar 2024 13:07:11 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0318 13:11:01.558434    2404 command_runner.go:130] >   PIDPressure      Unknown   Mon, 18 Mar 2024 13:05:34 +0000   Mon, 18 Mar 2024 13:07:11 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0318 13:11:01.558434    2404 command_runner.go:130] >   Ready            Unknown   Mon, 18 Mar 2024 13:05:34 +0000   Mon, 18 Mar 2024 13:07:11 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0318 13:11:01.558434    2404 command_runner.go:130] > Addresses:
	I0318 13:11:01.558434    2404 command_runner.go:130] >   InternalIP:  172.30.137.140
	I0318 13:11:01.558434    2404 command_runner.go:130] >   Hostname:    multinode-894400-m03
	I0318 13:11:01.558434    2404 command_runner.go:130] > Capacity:
	I0318 13:11:01.558434    2404 command_runner.go:130] >   cpu:                2
	I0318 13:11:01.558434    2404 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0318 13:11:01.558434    2404 command_runner.go:130] >   hugepages-2Mi:      0
	I0318 13:11:01.558434    2404 command_runner.go:130] >   memory:             2164268Ki
	I0318 13:11:01.558434    2404 command_runner.go:130] >   pods:               110
	I0318 13:11:01.558434    2404 command_runner.go:130] > Allocatable:
	I0318 13:11:01.558434    2404 command_runner.go:130] >   cpu:                2
	I0318 13:11:01.558434    2404 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0318 13:11:01.558434    2404 command_runner.go:130] >   hugepages-2Mi:      0
	I0318 13:11:01.558434    2404 command_runner.go:130] >   memory:             2164268Ki
	I0318 13:11:01.558434    2404 command_runner.go:130] >   pods:               110
	I0318 13:11:01.558434    2404 command_runner.go:130] > System Info:
	I0318 13:11:01.558434    2404 command_runner.go:130] >   Machine ID:                 f96e7421441b46c0a5836e2d53b26708
	I0318 13:11:01.558434    2404 command_runner.go:130] >   System UUID:                7dae14c5-92ae-d842-8ce6-c446c0352eb2
	I0318 13:11:01.558434    2404 command_runner.go:130] >   Boot ID:                    7ef4b157-1893-48d2-9b87-d5f210c11477
	I0318 13:11:01.558434    2404 command_runner.go:130] >   Kernel Version:             5.10.207
	I0318 13:11:01.558434    2404 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0318 13:11:01.558434    2404 command_runner.go:130] >   Operating System:           linux
	I0318 13:11:01.558434    2404 command_runner.go:130] >   Architecture:               amd64
	I0318 13:11:01.558434    2404 command_runner.go:130] >   Container Runtime Version:  docker://25.0.4
	I0318 13:11:01.558434    2404 command_runner.go:130] >   Kubelet Version:            v1.28.4
	I0318 13:11:01.558434    2404 command_runner.go:130] >   Kube-Proxy Version:         v1.28.4
	I0318 13:11:01.558434    2404 command_runner.go:130] > PodCIDR:                      10.244.3.0/24
	I0318 13:11:01.558434    2404 command_runner.go:130] > PodCIDRs:                     10.244.3.0/24
	I0318 13:11:01.558434    2404 command_runner.go:130] > Non-terminated Pods:          (2 in total)
	I0318 13:11:01.558976    2404 command_runner.go:130] >   Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0318 13:11:01.559069    2404 command_runner.go:130] >   ---------                   ----                ------------  ----------  ---------------  -------------  ---
	I0318 13:11:01.559069    2404 command_runner.go:130] >   kube-system                 kindnet-zv9tv       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      15m
	I0318 13:11:01.559069    2404 command_runner.go:130] >   kube-system                 kube-proxy-745w9    0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	I0318 13:11:01.559157    2404 command_runner.go:130] > Allocated resources:
	I0318 13:11:01.559204    2404 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0318 13:11:01.559204    2404 command_runner.go:130] >   Resource           Requests   Limits
	I0318 13:11:01.559204    2404 command_runner.go:130] >   --------           --------   ------
	I0318 13:11:01.559204    2404 command_runner.go:130] >   cpu                100m (5%)  100m (5%)
	I0318 13:11:01.559204    2404 command_runner.go:130] >   memory             50Mi (2%)  50Mi (2%)
	I0318 13:11:01.559204    2404 command_runner.go:130] >   ephemeral-storage  0 (0%)     0 (0%)
	I0318 13:11:01.559204    2404 command_runner.go:130] >   hugepages-2Mi      0 (0%)     0 (0%)
	I0318 13:11:01.559204    2404 command_runner.go:130] > Events:
	I0318 13:11:01.559204    2404 command_runner.go:130] >   Type    Reason                   Age                    From             Message
	I0318 13:11:01.559204    2404 command_runner.go:130] >   ----    ------                   ----                   ----             -------
	I0318 13:11:01.559204    2404 command_runner.go:130] >   Normal  Starting                 15m                    kube-proxy       
	I0318 13:11:01.559204    2404 command_runner.go:130] >   Normal  Starting                 5m33s                  kube-proxy       
	I0318 13:11:01.559204    2404 command_runner.go:130] >   Normal  NodeHasSufficientMemory  15m (x5 over 15m)      kubelet          Node multinode-894400-m03 status is now: NodeHasSufficientMemory
	I0318 13:11:01.559204    2404 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    15m (x5 over 15m)      kubelet          Node multinode-894400-m03 status is now: NodeHasNoDiskPressure
	I0318 13:11:01.559204    2404 command_runner.go:130] >   Normal  NodeHasSufficientPID     15m (x5 over 15m)      kubelet          Node multinode-894400-m03 status is now: NodeHasSufficientPID
	I0318 13:11:01.559204    2404 command_runner.go:130] >   Normal  NodeReady                15m                    kubelet          Node multinode-894400-m03 status is now: NodeReady
	I0318 13:11:01.559204    2404 command_runner.go:130] >   Normal  Starting                 5m36s                  kubelet          Starting kubelet.
	I0318 13:11:01.559204    2404 command_runner.go:130] >   Normal  NodeHasSufficientMemory  5m36s (x2 over 5m36s)  kubelet          Node multinode-894400-m03 status is now: NodeHasSufficientMemory
	I0318 13:11:01.559204    2404 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    5m36s (x2 over 5m36s)  kubelet          Node multinode-894400-m03 status is now: NodeHasNoDiskPressure
	I0318 13:11:01.559204    2404 command_runner.go:130] >   Normal  NodeHasSufficientPID     5m36s (x2 over 5m36s)  kubelet          Node multinode-894400-m03 status is now: NodeHasSufficientPID
	I0318 13:11:01.559204    2404 command_runner.go:130] >   Normal  NodeAllocatableEnforced  5m36s                  kubelet          Updated Node Allocatable limit across pods
	I0318 13:11:01.559204    2404 command_runner.go:130] >   Normal  RegisteredNode           5m35s                  node-controller  Node multinode-894400-m03 event: Registered Node multinode-894400-m03 in Controller
	I0318 13:11:01.559204    2404 command_runner.go:130] >   Normal  NodeReady                5m27s                  kubelet          Node multinode-894400-m03 status is now: NodeReady
	I0318 13:11:01.559204    2404 command_runner.go:130] >   Normal  NodeNotReady             3m50s                  node-controller  Node multinode-894400-m03 status is now: NodeNotReady
	I0318 13:11:01.559204    2404 command_runner.go:130] >   Normal  RegisteredNode           60s                    node-controller  Node multinode-894400-m03 event: Registered Node multinode-894400-m03 in Controller
	I0318 13:11:01.569484    2404 logs.go:123] Gathering logs for coredns [693a64f7472f] ...
	I0318 13:11:01.569484    2404 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 693a64f7472f"
	I0318 13:11:01.596416    2404 command_runner.go:130] > .:53
	I0318 13:11:01.596416    2404 command_runner.go:130] > [INFO] plugin/reload: Running configuration SHA512 = fa2b120b3007aba3ef70c07d73473d14986404bfc5683cceeb89e8950d5ace894afca7d43365324975f99d1b2da3d6473c722fe82795d39a4ee8b4b969b7aa71
	I0318 13:11:01.596416    2404 command_runner.go:130] > CoreDNS-1.10.1
	I0318 13:11:01.596416    2404 command_runner.go:130] > linux/amd64, go1.20, 055b2c3
	I0318 13:11:01.596416    2404 command_runner.go:130] > [INFO] 127.0.0.1:33426 - 38858 "HINFO IN 7345450223813584863.4065419873971828575. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.030234917s
	I0318 13:11:01.596622    2404 command_runner.go:130] > [INFO] 10.244.1.2:56777 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000311303s
	I0318 13:11:01.596622    2404 command_runner.go:130] > [INFO] 10.244.1.2:58024 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.098073876s
	I0318 13:11:01.596659    2404 command_runner.go:130] > [INFO] 10.244.1.2:57941 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd 60 0.154978742s
	I0318 13:11:01.596659    2404 command_runner.go:130] > [INFO] 10.244.1.2:42576 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 1.156414777s
	I0318 13:11:01.596659    2404 command_runner.go:130] > [INFO] 10.244.0.3:43391 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000152802s
	I0318 13:11:01.596700    2404 command_runner.go:130] > [INFO] 10.244.0.3:52523 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 140 0.000121101s
	I0318 13:11:01.596700    2404 command_runner.go:130] > [INFO] 10.244.0.3:36187 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd 60 0.000058401s
	I0318 13:11:01.596755    2404 command_runner.go:130] > [INFO] 10.244.0.3:33451 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,aa,rd,ra 140 0.000055s
	I0318 13:11:01.596755    2404 command_runner.go:130] > [INFO] 10.244.1.2:42180 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000097901s
	I0318 13:11:01.596755    2404 command_runner.go:130] > [INFO] 10.244.1.2:60616 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.142731308s
	I0318 13:11:01.596807    2404 command_runner.go:130] > [INFO] 10.244.1.2:45190 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000152502s
	I0318 13:11:01.596807    2404 command_runner.go:130] > [INFO] 10.244.1.2:55984 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000150102s
	I0318 13:11:01.596807    2404 command_runner.go:130] > [INFO] 10.244.1.2:47725 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.037970075s
	I0318 13:11:01.596807    2404 command_runner.go:130] > [INFO] 10.244.1.2:55620 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000104901s
	I0318 13:11:01.596880    2404 command_runner.go:130] > [INFO] 10.244.1.2:60349 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000189802s
	I0318 13:11:01.596880    2404 command_runner.go:130] > [INFO] 10.244.1.2:44081 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000089501s
	I0318 13:11:01.596880    2404 command_runner.go:130] > [INFO] 10.244.0.3:52580 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000182502s
	I0318 13:11:01.596923    2404 command_runner.go:130] > [INFO] 10.244.0.3:60982 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.0000727s
	I0318 13:11:01.596923    2404 command_runner.go:130] > [INFO] 10.244.0.3:53685 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000081s
	I0318 13:11:01.596963    2404 command_runner.go:130] > [INFO] 10.244.0.3:38117 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000127701s
	I0318 13:11:01.596963    2404 command_runner.go:130] > [INFO] 10.244.0.3:38455 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000117101s
	I0318 13:11:01.596963    2404 command_runner.go:130] > [INFO] 10.244.0.3:50629 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000121702s
	I0318 13:11:01.596963    2404 command_runner.go:130] > [INFO] 10.244.0.3:33301 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0000487s
	I0318 13:11:01.597023    2404 command_runner.go:130] > [INFO] 10.244.0.3:38091 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000138402s
	I0318 13:11:01.597023    2404 command_runner.go:130] > [INFO] 10.244.1.2:43364 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000192902s
	I0318 13:11:01.597023    2404 command_runner.go:130] > [INFO] 10.244.1.2:42609 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000060701s
	I0318 13:11:01.597067    2404 command_runner.go:130] > [INFO] 10.244.1.2:36443 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000051301s
	I0318 13:11:01.597067    2404 command_runner.go:130] > [INFO] 10.244.1.2:56414 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0000526s
	I0318 13:11:01.597117    2404 command_runner.go:130] > [INFO] 10.244.0.3:50774 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000137201s
	I0318 13:11:01.597117    2404 command_runner.go:130] > [INFO] 10.244.0.3:43237 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000196902s
	I0318 13:11:01.597170    2404 command_runner.go:130] > [INFO] 10.244.0.3:38831 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000059901s
	I0318 13:11:01.597170    2404 command_runner.go:130] > [INFO] 10.244.0.3:56163 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000122801s
	I0318 13:11:01.597218    2404 command_runner.go:130] > [INFO] 10.244.1.2:58305 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000209602s
	I0318 13:11:01.597218    2404 command_runner.go:130] > [INFO] 10.244.1.2:58291 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000151202s
	I0318 13:11:01.597218    2404 command_runner.go:130] > [INFO] 10.244.1.2:33227 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000184302s
	I0318 13:11:01.597254    2404 command_runner.go:130] > [INFO] 10.244.1.2:58179 - 5 "PTR IN 1.128.30.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000152102s
	I0318 13:11:01.597254    2404 command_runner.go:130] > [INFO] 10.244.0.3:46943 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000104101s
	I0318 13:11:01.597301    2404 command_runner.go:130] > [INFO] 10.244.0.3:58018 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000107001s
	I0318 13:11:01.597301    2404 command_runner.go:130] > [INFO] 10.244.0.3:35353 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000119601s
	I0318 13:11:01.597339    2404 command_runner.go:130] > [INFO] 10.244.0.3:58763 - 5 "PTR IN 1.128.30.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000075701s
	I0318 13:11:01.597371    2404 command_runner.go:130] > [INFO] SIGTERM: Shutting down servers then terminating
	I0318 13:11:01.597371    2404 command_runner.go:130] > [INFO] plugin/health: Going into lameduck mode for 5s
	I0318 13:11:04.113958    2404 api_server.go:253] Checking apiserver healthz at https://172.30.130.156:8443/healthz ...
	I0318 13:11:04.121944    2404 api_server.go:279] https://172.30.130.156:8443/healthz returned 200:
	ok
	I0318 13:11:04.122877    2404 round_trippers.go:463] GET https://172.30.130.156:8443/version
	I0318 13:11:04.122877    2404 round_trippers.go:469] Request Headers:
	I0318 13:11:04.122877    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:11:04.122877    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:11:04.124708    2404 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0318 13:11:04.124708    2404 round_trippers.go:577] Response Headers:
	I0318 13:11:04.124708    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:11:04.124708    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:11:04.124708    2404 round_trippers.go:580]     Content-Length: 264
	I0318 13:11:04.124708    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:11:04 GMT
	I0318 13:11:04.124708    2404 round_trippers.go:580]     Audit-Id: 44b1e23c-1635-4a1d-9fb9-f0a092479146
	I0318 13:11:04.125116    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:11:04.125116    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:11:04.125260    2404 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.4",
	  "gitCommit": "bae2c62678db2b5053817bc97181fcc2e8388103",
	  "gitTreeState": "clean",
	  "buildDate": "2023-11-15T16:48:54Z",
	  "goVersion": "go1.20.11",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0318 13:11:04.125463    2404 api_server.go:141] control plane version: v1.28.4
	I0318 13:11:04.125463    2404 api_server.go:131] duration metric: took 3.717492s to wait for apiserver health ...
	I0318 13:11:04.125520    2404 system_pods.go:43] waiting for kube-system pods to appear ...
	I0318 13:11:04.135764    2404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 13:11:04.159208    2404 command_runner.go:130] > fc4430c7fa20
	I0318 13:11:04.160008    2404 logs.go:276] 1 containers: [fc4430c7fa20]
	I0318 13:11:04.168360    2404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 13:11:04.196932    2404 command_runner.go:130] > 5f0887d1e691
	I0318 13:11:04.197216    2404 logs.go:276] 1 containers: [5f0887d1e691]
	I0318 13:11:04.206633    2404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 13:11:04.233460    2404 command_runner.go:130] > 3c3bc988c74c
	I0318 13:11:04.233460    2404 command_runner.go:130] > 693a64f7472f
	I0318 13:11:04.233848    2404 logs.go:276] 2 containers: [3c3bc988c74c 693a64f7472f]
	I0318 13:11:04.243192    2404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 13:11:04.266224    2404 command_runner.go:130] > 66ee8be9fada
	I0318 13:11:04.266505    2404 command_runner.go:130] > e4d42739ce0e
	I0318 13:11:04.266505    2404 logs.go:276] 2 containers: [66ee8be9fada e4d42739ce0e]
	I0318 13:11:04.274798    2404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 13:11:04.294660    2404 command_runner.go:130] > 163ccabc3882
	I0318 13:11:04.294660    2404 command_runner.go:130] > 9335855aab63
	I0318 13:11:04.294660    2404 logs.go:276] 2 containers: [163ccabc3882 9335855aab63]
	I0318 13:11:04.304214    2404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 13:11:04.330742    2404 command_runner.go:130] > 4ad6784a187d
	I0318 13:11:04.330742    2404 command_runner.go:130] > 7aa5cf4ec378
	I0318 13:11:04.330742    2404 logs.go:276] 2 containers: [4ad6784a187d 7aa5cf4ec378]
	I0318 13:11:04.340803    2404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 13:11:04.365255    2404 command_runner.go:130] > c8e5ec25e910
	I0318 13:11:04.365255    2404 command_runner.go:130] > c4d7018ad23a
	I0318 13:11:04.365255    2404 logs.go:276] 2 containers: [c8e5ec25e910 c4d7018ad23a]
	I0318 13:11:04.365255    2404 logs.go:123] Gathering logs for kube-apiserver [fc4430c7fa20] ...
	I0318 13:11:04.365255    2404 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc4430c7fa20"
	I0318 13:11:04.389675    2404 command_runner.go:130] ! I0318 13:09:45.117348       1 options.go:220] external host was not specified, using 172.30.130.156
	I0318 13:11:04.389675    2404 command_runner.go:130] ! I0318 13:09:45.120803       1 server.go:148] Version: v1.28.4
	I0318 13:11:04.389675    2404 command_runner.go:130] ! I0318 13:09:45.120988       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 13:11:04.389756    2404 command_runner.go:130] ! I0318 13:09:45.770080       1 shared_informer.go:311] Waiting for caches to sync for node_authorizer
	I0318 13:11:04.389756    2404 command_runner.go:130] ! I0318 13:09:45.795010       1 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0318 13:11:04.389819    2404 command_runner.go:130] ! I0318 13:09:45.795318       1 plugins.go:161] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0318 13:11:04.389858    2404 command_runner.go:130] ! I0318 13:09:45.795878       1 instance.go:298] Using reconciler: lease
	I0318 13:11:04.389858    2404 command_runner.go:130] ! I0318 13:09:46.836486       1 handler.go:232] Adding GroupVersion apiextensions.k8s.io v1 to ResourceManager
	I0318 13:11:04.389858    2404 command_runner.go:130] ! W0318 13:09:46.836605       1 genericapiserver.go:744] Skipping API apiextensions.k8s.io/v1beta1 because it has no resources.
	I0318 13:11:04.389924    2404 command_runner.go:130] ! I0318 13:09:47.074638       1 handler.go:232] Adding GroupVersion  v1 to ResourceManager
	I0318 13:11:04.389924    2404 command_runner.go:130] ! I0318 13:09:47.074978       1 instance.go:709] API group "internal.apiserver.k8s.io" is not enabled, skipping.
	I0318 13:11:04.389924    2404 command_runner.go:130] ! I0318 13:09:47.452713       1 instance.go:709] API group "resource.k8s.io" is not enabled, skipping.
	I0318 13:11:04.389924    2404 command_runner.go:130] ! I0318 13:09:47.465860       1 handler.go:232] Adding GroupVersion authentication.k8s.io v1 to ResourceManager
	I0318 13:11:04.389924    2404 command_runner.go:130] ! W0318 13:09:47.465973       1 genericapiserver.go:744] Skipping API authentication.k8s.io/v1beta1 because it has no resources.
	I0318 13:11:04.390003    2404 command_runner.go:130] ! W0318 13:09:47.465981       1 genericapiserver.go:744] Skipping API authentication.k8s.io/v1alpha1 because it has no resources.
	I0318 13:11:04.390033    2404 command_runner.go:130] ! I0318 13:09:47.466706       1 handler.go:232] Adding GroupVersion authorization.k8s.io v1 to ResourceManager
	I0318 13:11:04.390033    2404 command_runner.go:130] ! W0318 13:09:47.466787       1 genericapiserver.go:744] Skipping API authorization.k8s.io/v1beta1 because it has no resources.
	I0318 13:11:04.390066    2404 command_runner.go:130] ! I0318 13:09:47.467862       1 handler.go:232] Adding GroupVersion autoscaling v2 to ResourceManager
	I0318 13:11:04.390066    2404 command_runner.go:130] ! I0318 13:09:47.468840       1 handler.go:232] Adding GroupVersion autoscaling v1 to ResourceManager
	I0318 13:11:04.390120    2404 command_runner.go:130] ! W0318 13:09:47.468926       1 genericapiserver.go:744] Skipping API autoscaling/v2beta1 because it has no resources.
	I0318 13:11:04.390148    2404 command_runner.go:130] ! W0318 13:09:47.468934       1 genericapiserver.go:744] Skipping API autoscaling/v2beta2 because it has no resources.
	I0318 13:11:04.390148    2404 command_runner.go:130] ! I0318 13:09:47.470928       1 handler.go:232] Adding GroupVersion batch v1 to ResourceManager
	I0318 13:11:04.390148    2404 command_runner.go:130] ! W0318 13:09:47.471074       1 genericapiserver.go:744] Skipping API batch/v1beta1 because it has no resources.
	I0318 13:11:04.390148    2404 command_runner.go:130] ! I0318 13:09:47.472121       1 handler.go:232] Adding GroupVersion certificates.k8s.io v1 to ResourceManager
	I0318 13:11:04.390235    2404 command_runner.go:130] ! W0318 13:09:47.472195       1 genericapiserver.go:744] Skipping API certificates.k8s.io/v1beta1 because it has no resources.
	I0318 13:11:04.390235    2404 command_runner.go:130] ! W0318 13:09:47.472202       1 genericapiserver.go:744] Skipping API certificates.k8s.io/v1alpha1 because it has no resources.
	I0318 13:11:04.390266    2404 command_runner.go:130] ! I0318 13:09:47.472773       1 handler.go:232] Adding GroupVersion coordination.k8s.io v1 to ResourceManager
	I0318 13:11:04.390266    2404 command_runner.go:130] ! W0318 13:09:47.472852       1 genericapiserver.go:744] Skipping API coordination.k8s.io/v1beta1 because it has no resources.
	I0318 13:11:04.390322    2404 command_runner.go:130] ! W0318 13:09:47.472898       1 genericapiserver.go:744] Skipping API discovery.k8s.io/v1beta1 because it has no resources.
	I0318 13:11:04.390349    2404 command_runner.go:130] ! I0318 13:09:47.473727       1 handler.go:232] Adding GroupVersion discovery.k8s.io v1 to ResourceManager
	I0318 13:11:04.390349    2404 command_runner.go:130] ! I0318 13:09:47.476475       1 handler.go:232] Adding GroupVersion networking.k8s.io v1 to ResourceManager
	I0318 13:11:04.390349    2404 command_runner.go:130] ! W0318 13:09:47.476612       1 genericapiserver.go:744] Skipping API networking.k8s.io/v1beta1 because it has no resources.
	I0318 13:11:04.390349    2404 command_runner.go:130] ! W0318 13:09:47.476620       1 genericapiserver.go:744] Skipping API networking.k8s.io/v1alpha1 because it has no resources.
	I0318 13:11:04.390433    2404 command_runner.go:130] ! I0318 13:09:47.477234       1 handler.go:232] Adding GroupVersion node.k8s.io v1 to ResourceManager
	I0318 13:11:04.390433    2404 command_runner.go:130] ! W0318 13:09:47.477314       1 genericapiserver.go:744] Skipping API node.k8s.io/v1beta1 because it has no resources.
	I0318 13:11:04.390464    2404 command_runner.go:130] ! W0318 13:09:47.477321       1 genericapiserver.go:744] Skipping API node.k8s.io/v1alpha1 because it has no resources.
	I0318 13:11:04.390464    2404 command_runner.go:130] ! I0318 13:09:47.478143       1 handler.go:232] Adding GroupVersion policy v1 to ResourceManager
	I0318 13:11:04.390499    2404 command_runner.go:130] ! W0318 13:09:47.478217       1 genericapiserver.go:744] Skipping API policy/v1beta1 because it has no resources.
	I0318 13:11:04.390499    2404 command_runner.go:130] ! I0318 13:09:47.480195       1 handler.go:232] Adding GroupVersion rbac.authorization.k8s.io v1 to ResourceManager
	I0318 13:11:04.390526    2404 command_runner.go:130] ! W0318 13:09:47.480271       1 genericapiserver.go:744] Skipping API rbac.authorization.k8s.io/v1beta1 because it has no resources.
	I0318 13:11:04.390526    2404 command_runner.go:130] ! W0318 13:09:47.480279       1 genericapiserver.go:744] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
	I0318 13:11:04.390582    2404 command_runner.go:130] ! I0318 13:09:47.480731       1 handler.go:232] Adding GroupVersion scheduling.k8s.io v1 to ResourceManager
	I0318 13:11:04.390582    2404 command_runner.go:130] ! W0318 13:09:47.480812       1 genericapiserver.go:744] Skipping API scheduling.k8s.io/v1beta1 because it has no resources.
	I0318 13:11:04.390611    2404 command_runner.go:130] ! W0318 13:09:47.480819       1 genericapiserver.go:744] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
	I0318 13:11:04.390611    2404 command_runner.go:130] ! I0318 13:09:47.493837       1 handler.go:232] Adding GroupVersion storage.k8s.io v1 to ResourceManager
	I0318 13:11:04.390642    2404 command_runner.go:130] ! W0318 13:09:47.494098       1 genericapiserver.go:744] Skipping API storage.k8s.io/v1beta1 because it has no resources.
	I0318 13:11:04.390642    2404 command_runner.go:130] ! W0318 13:09:47.494198       1 genericapiserver.go:744] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
	I0318 13:11:04.390678    2404 command_runner.go:130] ! I0318 13:09:47.499689       1 handler.go:232] Adding GroupVersion flowcontrol.apiserver.k8s.io v1beta3 to ResourceManager
	I0318 13:11:04.390678    2404 command_runner.go:130] ! I0318 13:09:47.506631       1 handler.go:232] Adding GroupVersion flowcontrol.apiserver.k8s.io v1beta2 to ResourceManager
	I0318 13:11:04.390705    2404 command_runner.go:130] ! W0318 13:09:47.506664       1 genericapiserver.go:744] Skipping API flowcontrol.apiserver.k8s.io/v1beta1 because it has no resources.
	I0318 13:11:04.390705    2404 command_runner.go:130] ! W0318 13:09:47.506671       1 genericapiserver.go:744] Skipping API flowcontrol.apiserver.k8s.io/v1alpha1 because it has no resources.
	I0318 13:11:04.390705    2404 command_runner.go:130] ! I0318 13:09:47.512288       1 handler.go:232] Adding GroupVersion apps v1 to ResourceManager
	I0318 13:11:04.390705    2404 command_runner.go:130] ! W0318 13:09:47.512371       1 genericapiserver.go:744] Skipping API apps/v1beta2 because it has no resources.
	I0318 13:11:04.390758    2404 command_runner.go:130] ! W0318 13:09:47.512378       1 genericapiserver.go:744] Skipping API apps/v1beta1 because it has no resources.
	I0318 13:11:04.390788    2404 command_runner.go:130] ! I0318 13:09:47.513443       1 handler.go:232] Adding GroupVersion admissionregistration.k8s.io v1 to ResourceManager
	I0318 13:11:04.390819    2404 command_runner.go:130] ! W0318 13:09:47.513547       1 genericapiserver.go:744] Skipping API admissionregistration.k8s.io/v1beta1 because it has no resources.
	I0318 13:11:04.390819    2404 command_runner.go:130] ! W0318 13:09:47.513557       1 genericapiserver.go:744] Skipping API admissionregistration.k8s.io/v1alpha1 because it has no resources.
	I0318 13:11:04.390819    2404 command_runner.go:130] ! I0318 13:09:47.514339       1 handler.go:232] Adding GroupVersion events.k8s.io v1 to ResourceManager
	I0318 13:11:04.390855    2404 command_runner.go:130] ! W0318 13:09:47.514435       1 genericapiserver.go:744] Skipping API events.k8s.io/v1beta1 because it has no resources.
	I0318 13:11:04.390881    2404 command_runner.go:130] ! I0318 13:09:47.536002       1 handler.go:232] Adding GroupVersion apiregistration.k8s.io v1 to ResourceManager
	I0318 13:11:04.390881    2404 command_runner.go:130] ! W0318 13:09:47.536061       1 genericapiserver.go:744] Skipping API apiregistration.k8s.io/v1beta1 because it has no resources.
	I0318 13:11:04.390881    2404 command_runner.go:130] ! I0318 13:09:48.221475       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0318 13:11:04.390881    2404 command_runner.go:130] ! I0318 13:09:48.221960       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0318 13:11:04.390985    2404 command_runner.go:130] ! I0318 13:09:48.222438       1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	I0318 13:11:04.390985    2404 command_runner.go:130] ! I0318 13:09:48.222942       1 secure_serving.go:213] Serving securely on [::]:8443
	I0318 13:11:04.391017    2404 command_runner.go:130] ! I0318 13:09:48.223022       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0318 13:11:04.391017    2404 command_runner.go:130] ! I0318 13:09:48.223440       1 controller.go:78] Starting OpenAPI AggregationController
	I0318 13:11:04.391055    2404 command_runner.go:130] ! I0318 13:09:48.224862       1 gc_controller.go:78] Starting apiserver lease garbage collector
	I0318 13:11:04.391083    2404 command_runner.go:130] ! I0318 13:09:48.225271       1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
	I0318 13:11:04.391083    2404 command_runner.go:130] ! I0318 13:09:48.225417       1 shared_informer.go:311] Waiting for caches to sync for cluster_authentication_trust_controller
	I0318 13:11:04.391083    2404 command_runner.go:130] ! I0318 13:09:48.225564       1 apf_controller.go:372] Starting API Priority and Fairness config controller
	I0318 13:11:04.391083    2404 command_runner.go:130] ! I0318 13:09:48.228940       1 gc_controller.go:78] Starting apiserver lease garbage collector
	I0318 13:11:04.391143    2404 command_runner.go:130] ! I0318 13:09:48.229462       1 controller.go:116] Starting legacy_token_tracking_controller
	I0318 13:11:04.391143    2404 command_runner.go:130] ! I0318 13:09:48.229644       1 shared_informer.go:311] Waiting for caches to sync for configmaps
	I0318 13:11:04.391172    2404 command_runner.go:130] ! I0318 13:09:48.230522       1 system_namespaces_controller.go:67] Starting system namespaces controller
	I0318 13:11:04.391172    2404 command_runner.go:130] ! I0318 13:09:48.230832       1 controller.go:80] Starting OpenAPI V3 AggregationController
	I0318 13:11:04.391202    2404 command_runner.go:130] ! I0318 13:09:48.231097       1 aggregator.go:164] waiting for initial CRD sync...
	I0318 13:11:04.391202    2404 command_runner.go:130] ! I0318 13:09:48.231395       1 customresource_discovery_controller.go:289] Starting DiscoveryController
	I0318 13:11:04.391239    2404 command_runner.go:130] ! I0318 13:09:48.231642       1 available_controller.go:423] Starting AvailableConditionController
	I0318 13:11:04.391267    2404 command_runner.go:130] ! I0318 13:09:48.231846       1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
	I0318 13:11:04.391267    2404 command_runner.go:130] ! I0318 13:09:48.232024       1 dynamic_serving_content.go:132] "Starting controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	I0318 13:11:04.391267    2404 command_runner.go:130] ! I0318 13:09:48.232223       1 apiservice_controller.go:97] Starting APIServiceRegistrationController
	I0318 13:11:04.391324    2404 command_runner.go:130] ! I0318 13:09:48.232638       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
	I0318 13:11:04.391324    2404 command_runner.go:130] ! I0318 13:09:48.233228       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0318 13:11:04.391353    2404 command_runner.go:130] ! I0318 13:09:48.233501       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0318 13:11:04.391383    2404 command_runner.go:130] ! I0318 13:09:48.242598       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0318 13:11:04.391383    2404 command_runner.go:130] ! I0318 13:09:48.242850       1 shared_informer.go:311] Waiting for caches to sync for crd-autoregister
	I0318 13:11:04.391421    2404 command_runner.go:130] ! I0318 13:09:48.243085       1 controller.go:134] Starting OpenAPI controller
	I0318 13:11:04.391421    2404 command_runner.go:130] ! I0318 13:09:48.243289       1 controller.go:85] Starting OpenAPI V3 controller
	I0318 13:11:04.391421    2404 command_runner.go:130] ! I0318 13:09:48.243558       1 naming_controller.go:291] Starting NamingConditionController
	I0318 13:11:04.391464    2404 command_runner.go:130] ! I0318 13:09:48.243852       1 establishing_controller.go:76] Starting EstablishingController
	I0318 13:11:04.391464    2404 command_runner.go:130] ! I0318 13:09:48.244899       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0318 13:11:04.391520    2404 command_runner.go:130] ! I0318 13:09:48.245178       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0318 13:11:04.391520    2404 command_runner.go:130] ! I0318 13:09:48.245796       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0318 13:11:04.391548    2404 command_runner.go:130] ! I0318 13:09:48.231958       1 handler_discovery.go:412] Starting ResourceDiscoveryManager
	I0318 13:11:04.391578    2404 command_runner.go:130] ! I0318 13:09:48.403749       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0318 13:11:04.391578    2404 command_runner.go:130] ! I0318 13:09:48.426183       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I0318 13:11:04.391613    2404 command_runner.go:130] ! I0318 13:09:48.426213       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I0318 13:11:04.391613    2404 command_runner.go:130] ! I0318 13:09:48.426382       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0318 13:11:04.391641    2404 command_runner.go:130] ! I0318 13:09:48.432175       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0318 13:11:04.391641    2404 command_runner.go:130] ! I0318 13:09:48.433073       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0318 13:11:04.391641    2404 command_runner.go:130] ! I0318 13:09:48.433297       1 shared_informer.go:318] Caches are synced for configmaps
	I0318 13:11:04.391641    2404 command_runner.go:130] ! I0318 13:09:48.444484       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0318 13:11:04.391699    2404 command_runner.go:130] ! I0318 13:09:48.444708       1 aggregator.go:166] initial CRD sync complete...
	I0318 13:11:04.391699    2404 command_runner.go:130] ! I0318 13:09:48.444961       1 autoregister_controller.go:141] Starting autoregister controller
	I0318 13:11:04.391699    2404 command_runner.go:130] ! I0318 13:09:48.445263       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0318 13:11:04.391699    2404 command_runner.go:130] ! I0318 13:09:48.446443       1 cache.go:39] Caches are synced for autoregister controller
	I0318 13:11:04.391772    2404 command_runner.go:130] ! I0318 13:09:48.471536       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0318 13:11:04.391772    2404 command_runner.go:130] ! I0318 13:09:49.257477       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0318 13:11:04.391819    2404 command_runner.go:130] ! W0318 13:09:49.806994       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [172.30.130.156]
	I0318 13:11:04.391819    2404 command_runner.go:130] ! I0318 13:09:49.809655       1 controller.go:624] quota admission added evaluator for: endpoints
	I0318 13:11:04.391819    2404 command_runner.go:130] ! I0318 13:09:49.821460       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0318 13:11:04.391819    2404 command_runner.go:130] ! I0318 13:09:51.622752       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0318 13:11:04.391897    2404 command_runner.go:130] ! I0318 13:09:51.799195       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0318 13:11:04.391897    2404 command_runner.go:130] ! I0318 13:09:51.812022       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0318 13:11:04.391953    2404 command_runner.go:130] ! I0318 13:09:51.930541       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0318 13:11:04.391953    2404 command_runner.go:130] ! I0318 13:09:51.942099       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0318 13:11:04.400075    2404 logs.go:123] Gathering logs for kube-scheduler [66ee8be9fada] ...
	I0318 13:11:04.400075    2404 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66ee8be9fada"
	I0318 13:11:04.427558    2404 command_runner.go:130] ! I0318 13:09:45.699415       1 serving.go:348] Generated self-signed cert in-memory
	I0318 13:11:04.427558    2404 command_runner.go:130] ! W0318 13:09:48.342100       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	I0318 13:11:04.427558    2404 command_runner.go:130] ! W0318 13:09:48.342243       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	I0318 13:11:04.427558    2404 command_runner.go:130] ! W0318 13:09:48.342324       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	I0318 13:11:04.427558    2404 command_runner.go:130] ! W0318 13:09:48.342374       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0318 13:11:04.427558    2404 command_runner.go:130] ! I0318 13:09:48.402495       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.4"
	I0318 13:11:04.427558    2404 command_runner.go:130] ! I0318 13:09:48.402540       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 13:11:04.427558    2404 command_runner.go:130] ! I0318 13:09:48.407228       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0318 13:11:04.427558    2404 command_runner.go:130] ! I0318 13:09:48.409117       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0318 13:11:04.427558    2404 command_runner.go:130] ! I0318 13:09:48.410197       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0318 13:11:04.427558    2404 command_runner.go:130] ! I0318 13:09:48.410738       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0318 13:11:04.427558    2404 command_runner.go:130] ! I0318 13:09:48.510577       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0318 13:11:04.429032    2404 logs.go:123] Gathering logs for Docker ...
	I0318 13:11:04.429032    2404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 13:11:04.462635    2404 command_runner.go:130] > Mar 18 13:08:19 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0318 13:11:04.463174    2404 command_runner.go:130] > Mar 18 13:08:19 minikube cri-dockerd[222]: time="2024-03-18T13:08:19Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0318 13:11:04.463174    2404 command_runner.go:130] > Mar 18 13:08:19 minikube cri-dockerd[222]: time="2024-03-18T13:08:19Z" level=info msg="Start docker client with request timeout 0s"
	I0318 13:11:04.463174    2404 command_runner.go:130] > Mar 18 13:08:19 minikube cri-dockerd[222]: time="2024-03-18T13:08:19Z" level=fatal msg="failed to get docker version: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0318 13:11:04.463174    2404 command_runner.go:130] > Mar 18 13:08:19 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0318 13:11:04.463174    2404 command_runner.go:130] > Mar 18 13:08:19 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0318 13:11:04.463174    2404 command_runner.go:130] > Mar 18 13:08:19 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0318 13:11:04.463174    2404 command_runner.go:130] > Mar 18 13:08:22 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 1.
	I0318 13:11:04.463299    2404 command_runner.go:130] > Mar 18 13:08:22 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0318 13:11:04.463299    2404 command_runner.go:130] > Mar 18 13:08:22 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0318 13:11:04.463299    2404 command_runner.go:130] > Mar 18 13:08:22 minikube cri-dockerd[412]: time="2024-03-18T13:08:22Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0318 13:11:04.463299    2404 command_runner.go:130] > Mar 18 13:08:22 minikube cri-dockerd[412]: time="2024-03-18T13:08:22Z" level=info msg="Start docker client with request timeout 0s"
	I0318 13:11:04.463299    2404 command_runner.go:130] > Mar 18 13:08:22 minikube cri-dockerd[412]: time="2024-03-18T13:08:22Z" level=fatal msg="failed to get docker version: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0318 13:11:04.463299    2404 command_runner.go:130] > Mar 18 13:08:22 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0318 13:11:04.463423    2404 command_runner.go:130] > Mar 18 13:08:22 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0318 13:11:04.463423    2404 command_runner.go:130] > Mar 18 13:08:22 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0318 13:11:04.463423    2404 command_runner.go:130] > Mar 18 13:08:24 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 2.
	I0318 13:11:04.463499    2404 command_runner.go:130] > Mar 18 13:08:24 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0318 13:11:04.463499    2404 command_runner.go:130] > Mar 18 13:08:24 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0318 13:11:04.463499    2404 command_runner.go:130] > Mar 18 13:08:24 minikube cri-dockerd[434]: time="2024-03-18T13:08:24Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0318 13:11:04.463499    2404 command_runner.go:130] > Mar 18 13:08:24 minikube cri-dockerd[434]: time="2024-03-18T13:08:24Z" level=info msg="Start docker client with request timeout 0s"
	I0318 13:11:04.463499    2404 command_runner.go:130] > Mar 18 13:08:24 minikube cri-dockerd[434]: time="2024-03-18T13:08:24Z" level=fatal msg="failed to get docker version: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0318 13:11:04.463499    2404 command_runner.go:130] > Mar 18 13:08:24 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0318 13:11:04.463499    2404 command_runner.go:130] > Mar 18 13:08:24 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0318 13:11:04.463641    2404 command_runner.go:130] > Mar 18 13:08:24 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0318 13:11:04.463641    2404 command_runner.go:130] > Mar 18 13:08:26 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 3.
	I0318 13:11:04.463641    2404 command_runner.go:130] > Mar 18 13:08:26 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0318 13:11:04.463641    2404 command_runner.go:130] > Mar 18 13:08:26 minikube systemd[1]: cri-docker.service: Start request repeated too quickly.
	I0318 13:11:04.463641    2404 command_runner.go:130] > Mar 18 13:08:26 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0318 13:11:04.463641    2404 command_runner.go:130] > Mar 18 13:08:26 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0318 13:11:04.463641    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 systemd[1]: Starting Docker Application Container Engine...
	I0318 13:11:04.463641    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[662]: time="2024-03-18T13:09:08.926008208Z" level=info msg="Starting up"
	I0318 13:11:04.463641    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[662]: time="2024-03-18T13:09:08.927042019Z" level=info msg="containerd not running, starting managed containerd"
	I0318 13:11:04.463641    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[662]: time="2024-03-18T13:09:08.928263831Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=668
	I0318 13:11:04.463641    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.958180831Z" level=info msg="starting containerd" revision=dcf2847247e18caba8dce86522029642f60fe96b version=v1.7.14
	I0318 13:11:04.463641    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.981644866Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0318 13:11:04.463641    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.981729667Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0318 13:11:04.463641    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.981890169Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0318 13:11:04.463641    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.982007470Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0318 13:11:04.463641    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.982683977Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0318 13:11:04.463641    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.982866878Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0318 13:11:04.463641    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.983040880Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0318 13:11:04.463641    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.983180882Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0318 13:11:04.464178    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.983201082Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0318 13:11:04.464178    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.983210682Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0318 13:11:04.464178    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.983772288Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0318 13:11:04.464178    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.984603896Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0318 13:11:04.464178    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.987157222Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0318 13:11:04.464178    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.987245222Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0318 13:11:04.464178    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.987380024Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0318 13:11:04.464419    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.987459025Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0318 13:11:04.464419    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.988076231Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0318 13:11:04.464419    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.988215332Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0318 13:11:04.464553    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.988231932Z" level=info msg="metadata content store policy set" policy=shared
	I0318 13:11:04.464553    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.994386894Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0318 13:11:04.464553    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.994536096Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0318 13:11:04.464553    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.994574296Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0318 13:11:04.464706    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.994587696Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0318 13:11:04.464706    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.994605296Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0318 13:11:04.464706    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.994669597Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0318 13:11:04.464706    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.995239203Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0318 13:11:04.464799    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.995378304Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0318 13:11:04.464799    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.995441205Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0318 13:11:04.464799    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.995564406Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0318 13:11:04.464799    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.995751508Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0318 13:11:04.464799    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.995819808Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0318 13:11:04.464973    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.995841009Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0318 13:11:04.464973    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.995857509Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0318 13:11:04.464973    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.995870509Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0318 13:11:04.464973    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.995903509Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0318 13:11:04.465099    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.995925809Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0318 13:11:04.465099    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.995942710Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0318 13:11:04.465129    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.995963610Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0318 13:11:04.465193    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.995980410Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0318 13:11:04.465193    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.996091811Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0318 13:11:04.465193    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.996121511Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0318 13:11:04.465193    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.996134612Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0318 13:11:04.465257    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.996151212Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0318 13:11:04.465297    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.996165012Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0318 13:11:04.465332    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.996179412Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0318 13:11:04.465377    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.996194912Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0318 13:11:04.465377    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.996291913Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0318 13:11:04.465415    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.996404914Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0318 13:11:04.465415    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.996427114Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0318 13:11:04.465415    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.996445915Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0318 13:11:04.465511    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.996468515Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0318 13:11:04.465511    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.996497915Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0318 13:11:04.465511    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.996538416Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0318 13:11:04.465562    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.996560016Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0318 13:11:04.465562    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.997036721Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0318 13:11:04.465610    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.997287923Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	I0318 13:11:04.465610    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.997398924Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0318 13:11:04.465610    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.997518125Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	I0318 13:11:04.465689    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.998045931Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0318 13:11:04.465725    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.998612736Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0318 13:11:04.465755    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.998643637Z" level=info msg="NRI interface is disabled by configuration."
	I0318 13:11:04.465755    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.999395544Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0318 13:11:04.465820    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.999606346Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0318 13:11:04.465861    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.999683147Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0318 13:11:04.465861    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.999765648Z" level=info msg="containerd successfully booted in 0.044672s"
	I0318 13:11:04.465861    2404 command_runner.go:130] > Mar 18 13:09:09 multinode-894400 dockerd[662]: time="2024-03-18T13:09:09.982989696Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0318 13:11:04.465897    2404 command_runner.go:130] > Mar 18 13:09:10 multinode-894400 dockerd[662]: time="2024-03-18T13:09:10.138351976Z" level=info msg="Loading containers: start."
	I0318 13:11:04.465944    2404 command_runner.go:130] > Mar 18 13:09:10 multinode-894400 dockerd[662]: time="2024-03-18T13:09:10.545129368Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0318 13:11:04.465944    2404 command_runner.go:130] > Mar 18 13:09:10 multinode-894400 dockerd[662]: time="2024-03-18T13:09:10.626119356Z" level=info msg="Loading containers: done."
	I0318 13:11:04.465944    2404 command_runner.go:130] > Mar 18 13:09:10 multinode-894400 dockerd[662]: time="2024-03-18T13:09:10.653541890Z" level=info msg="Docker daemon" commit=061aa95 containerd-snapshotter=false storage-driver=overlay2 version=25.0.4
	I0318 13:11:04.465944    2404 command_runner.go:130] > Mar 18 13:09:10 multinode-894400 dockerd[662]: time="2024-03-18T13:09:10.654242899Z" level=info msg="Daemon has completed initialization"
	I0318 13:11:04.465944    2404 command_runner.go:130] > Mar 18 13:09:10 multinode-894400 dockerd[662]: time="2024-03-18T13:09:10.702026381Z" level=info msg="API listen on /var/run/docker.sock"
	I0318 13:11:04.465944    2404 command_runner.go:130] > Mar 18 13:09:10 multinode-894400 systemd[1]: Started Docker Application Container Engine.
	I0318 13:11:04.465944    2404 command_runner.go:130] > Mar 18 13:09:10 multinode-894400 dockerd[662]: time="2024-03-18T13:09:10.704980317Z" level=info msg="API listen on [::]:2376"
	I0318 13:11:04.465944    2404 command_runner.go:130] > Mar 18 13:09:35 multinode-894400 systemd[1]: Stopping Docker Application Container Engine...
	I0318 13:11:04.465944    2404 command_runner.go:130] > Mar 18 13:09:35 multinode-894400 dockerd[662]: time="2024-03-18T13:09:35.118112316Z" level=info msg="Processing signal 'terminated'"
	I0318 13:11:04.465944    2404 command_runner.go:130] > Mar 18 13:09:35 multinode-894400 dockerd[662]: time="2024-03-18T13:09:35.120561724Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0318 13:11:04.465944    2404 command_runner.go:130] > Mar 18 13:09:35 multinode-894400 dockerd[662]: time="2024-03-18T13:09:35.120708425Z" level=info msg="Daemon shutdown complete"
	I0318 13:11:04.465944    2404 command_runner.go:130] > Mar 18 13:09:35 multinode-894400 dockerd[662]: time="2024-03-18T13:09:35.120817525Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0318 13:11:04.465944    2404 command_runner.go:130] > Mar 18 13:09:35 multinode-894400 dockerd[662]: time="2024-03-18T13:09:35.120965826Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0318 13:11:04.465944    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 systemd[1]: docker.service: Deactivated successfully.
	I0318 13:11:04.465944    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 systemd[1]: Stopped Docker Application Container Engine.
	I0318 13:11:04.465944    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 systemd[1]: Starting Docker Application Container Engine...
	I0318 13:11:04.465944    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1052]: time="2024-03-18T13:09:36.188961030Z" level=info msg="Starting up"
	I0318 13:11:04.465944    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1052]: time="2024-03-18T13:09:36.190214934Z" level=info msg="containerd not running, starting managed containerd"
	I0318 13:11:04.465944    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1052]: time="2024-03-18T13:09:36.191301438Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1058
	I0318 13:11:04.465944    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.220111635Z" level=info msg="starting containerd" revision=dcf2847247e18caba8dce86522029642f60fe96b version=v1.7.14
	I0318 13:11:04.465944    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.244480717Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0318 13:11:04.465944    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.244510717Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0318 13:11:04.465944    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.244539917Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0318 13:11:04.465944    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.244552117Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0318 13:11:04.465944    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.244588817Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0318 13:11:04.465944    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.244601217Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0318 13:11:04.466475    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.244707818Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0318 13:11:04.466475    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.244791318Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0318 13:11:04.466475    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.244809418Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0318 13:11:04.466475    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.244818018Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0318 13:11:04.466475    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.244838218Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0318 13:11:04.466475    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.244975219Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0318 13:11:04.466612    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.248195830Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0318 13:11:04.466612    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.248302930Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0318 13:11:04.466659    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.248446530Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0318 13:11:04.466699    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.248548631Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0318 13:11:04.466751    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.248576331Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0318 13:11:04.466751    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.248593831Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0318 13:11:04.466751    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.248604331Z" level=info msg="metadata content store policy set" policy=shared
	I0318 13:11:04.466751    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.249888435Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0318 13:11:04.466829    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.249971436Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0318 13:11:04.466829    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.250624738Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0318 13:11:04.466829    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.250745538Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0318 13:11:04.466829    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.250859739Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0318 13:11:04.466829    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.251093339Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0318 13:11:04.466829    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.252590644Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0318 13:11:04.466942    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.252685145Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0318 13:11:04.466942    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.252703545Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0318 13:11:04.466942    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.252722945Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0318 13:11:04.466942    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.252736845Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0318 13:11:04.467049    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.252749745Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0318 13:11:04.467049    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.252793045Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0318 13:11:04.467049    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.252998846Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0318 13:11:04.467049    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.253020946Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0318 13:11:04.467049    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.253065546Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0318 13:11:04.467151    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.253080846Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0318 13:11:04.467151    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.253090746Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0318 13:11:04.467151    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.253177146Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0318 13:11:04.467151    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.253201547Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0318 13:11:04.467246    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.253215147Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0318 13:11:04.467246    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.253229847Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0318 13:11:04.467246    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.253243047Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0318 13:11:04.467348    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.253257847Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0318 13:11:04.467348    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.253270347Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0318 13:11:04.467448    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.253284147Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0318 13:11:04.467448    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.253297547Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0318 13:11:04.467448    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.253313047Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0318 13:11:04.467448    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.253331047Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0318 13:11:04.467563    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.253344647Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0318 13:11:04.467563    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.253357947Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0318 13:11:04.467563    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.253374747Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0318 13:11:04.467563    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.253395147Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0318 13:11:04.467768    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.253407847Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0318 13:11:04.467768    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.253420947Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0318 13:11:04.467768    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.253503448Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0318 13:11:04.467881    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.253519848Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	I0318 13:11:04.467881    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.253532848Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0318 13:11:04.467881    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.253542748Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	I0318 13:11:04.467881    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.253613548Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0318 13:11:04.467881    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.253652648Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0318 13:11:04.467881    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.253668048Z" level=info msg="NRI interface is disabled by configuration."
	I0318 13:11:04.467881    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.254026949Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0318 13:11:04.468107    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.254474051Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0318 13:11:04.468107    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.254684152Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0318 13:11:04.468107    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.254775452Z" level=info msg="containerd successfully booted in 0.035926s"
	I0318 13:11:04.468107    2404 command_runner.go:130] > Mar 18 13:09:37 multinode-894400 dockerd[1052]: time="2024-03-18T13:09:37.234846559Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0318 13:11:04.468107    2404 command_runner.go:130] > Mar 18 13:09:37 multinode-894400 dockerd[1052]: time="2024-03-18T13:09:37.265734263Z" level=info msg="Loading containers: start."
	I0318 13:11:04.468107    2404 command_runner.go:130] > Mar 18 13:09:37 multinode-894400 dockerd[1052]: time="2024-03-18T13:09:37.543045299Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0318 13:11:04.468107    2404 command_runner.go:130] > Mar 18 13:09:37 multinode-894400 dockerd[1052]: time="2024-03-18T13:09:37.620368360Z" level=info msg="Loading containers: done."
	I0318 13:11:04.468107    2404 command_runner.go:130] > Mar 18 13:09:37 multinode-894400 dockerd[1052]: time="2024-03-18T13:09:37.642056833Z" level=info msg="Docker daemon" commit=061aa95 containerd-snapshotter=false storage-driver=overlay2 version=25.0.4
	I0318 13:11:04.468107    2404 command_runner.go:130] > Mar 18 13:09:37 multinode-894400 dockerd[1052]: time="2024-03-18T13:09:37.642227734Z" level=info msg="Daemon has completed initialization"
	I0318 13:11:04.468107    2404 command_runner.go:130] > Mar 18 13:09:37 multinode-894400 dockerd[1052]: time="2024-03-18T13:09:37.686175082Z" level=info msg="API listen on /var/run/docker.sock"
	I0318 13:11:04.468107    2404 command_runner.go:130] > Mar 18 13:09:37 multinode-894400 systemd[1]: Started Docker Application Container Engine.
	I0318 13:11:04.468107    2404 command_runner.go:130] > Mar 18 13:09:37 multinode-894400 dockerd[1052]: time="2024-03-18T13:09:37.687135485Z" level=info msg="API listen on [::]:2376"
	I0318 13:11:04.468107    2404 command_runner.go:130] > Mar 18 13:09:38 multinode-894400 systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0318 13:11:04.468107    2404 command_runner.go:130] > Mar 18 13:09:38 multinode-894400 cri-dockerd[1278]: time="2024-03-18T13:09:38Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0318 13:11:04.468107    2404 command_runner.go:130] > Mar 18 13:09:38 multinode-894400 cri-dockerd[1278]: time="2024-03-18T13:09:38Z" level=info msg="Start docker client with request timeout 0s"
	I0318 13:11:04.468107    2404 command_runner.go:130] > Mar 18 13:09:38 multinode-894400 cri-dockerd[1278]: time="2024-03-18T13:09:38Z" level=info msg="Hairpin mode is set to hairpin-veth"
	I0318 13:11:04.468107    2404 command_runner.go:130] > Mar 18 13:09:38 multinode-894400 cri-dockerd[1278]: time="2024-03-18T13:09:38Z" level=info msg="Loaded network plugin cni"
	I0318 13:11:04.468107    2404 command_runner.go:130] > Mar 18 13:09:38 multinode-894400 cri-dockerd[1278]: time="2024-03-18T13:09:38Z" level=info msg="Docker cri networking managed by network plugin cni"
	I0318 13:11:04.468107    2404 command_runner.go:130] > Mar 18 13:09:38 multinode-894400 cri-dockerd[1278]: time="2024-03-18T13:09:38Z" level=info msg="Docker Info: &{ID:5695bce5-a75b-48a7-87b1-d9b6b787473a Containers:18 ContainersRunning:0 ContainersPaused:0 ContainersStopped:18 Images:10 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:[] Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:[] Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6tables:true Debug:false NFd:26 OomKillDisable:false NGoroutines:52 SystemTime:2024-03-18T13:09:38.671342607Z LoggingDriver:json-file CgroupDriver:cgroupfs CgroupVersion:2 NEventsListener:0 KernelVersion:5.10.207 OperatingSystem:Buildroot 2023.02.9 OSVersion:2023.02.9 OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:0xc00034fe30 NCPU:2 MemTotal:2216210432 GenericResources:[] DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:multinode-894400 Labels:[provider=hyperv] ExperimentalBuild:false ServerVersion:25.0.4 ClusterStore: ClusterAdvertise: Runtimes:map[io.containerd.runc.v2:{Path:runc Args:[] Shim:<nil>} runc:{Path:runc Args:[] Shim:<nil>}] DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:[] Nodes:0 Managers:0 Cluster:<nil> Warnings:[]} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dcf2847247e18caba8dce86522029642f60fe96b Expected:dcf2847247e18caba8dce86522029642f60fe96b} RuncCommit:{ID:51d5e94601ceffbbd85688df1c928ecccbfa4685 Expected:51d5e94601ceffbbd85688df1c928ecccbfa4685} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=builtin name=cgroupns] ProductLicense:Community Engine DefaultAddressPools:[] Warnings:[]}"
	I0318 13:11:04.468107    2404 command_runner.go:130] > Mar 18 13:09:38 multinode-894400 cri-dockerd[1278]: time="2024-03-18T13:09:38Z" level=info msg="Setting cgroupDriver cgroupfs"
	I0318 13:11:04.468107    2404 command_runner.go:130] > Mar 18 13:09:38 multinode-894400 cri-dockerd[1278]: time="2024-03-18T13:09:38Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	I0318 13:11:04.468107    2404 command_runner.go:130] > Mar 18 13:09:38 multinode-894400 cri-dockerd[1278]: time="2024-03-18T13:09:38Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	I0318 13:11:04.468107    2404 command_runner.go:130] > Mar 18 13:09:38 multinode-894400 cri-dockerd[1278]: time="2024-03-18T13:09:38Z" level=info msg="Start cri-dockerd grpc backend"
	I0318 13:11:04.468107    2404 command_runner.go:130] > Mar 18 13:09:38 multinode-894400 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	I0318 13:11:04.468637    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 cri-dockerd[1278]: time="2024-03-18T13:09:43Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-5dd5756b68-456tm_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"d001e299e996b74f090c73f4e98605ef0d323a96826d23424e724ca8e7fe466a\""
	I0318 13:11:04.468637    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 cri-dockerd[1278]: time="2024-03-18T13:09:43Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"busybox-5b5d89c9d6-c2997_default\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"a23c1189be7c39f22c8a59ba41b198519894c9783ecad8dcda79fb8a76f05254\""
	I0318 13:11:04.468637    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:43.791205184Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 13:11:04.468637    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:43.791356085Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 13:11:04.468637    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:43.791396985Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:11:04.468637    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:43.791577685Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:11:04.468637    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:43.838312843Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 13:11:04.468637    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:43.838494344Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 13:11:04.468817    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:43.838510044Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:11:04.468817    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:43.838727044Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:11:04.468817    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:43.951016023Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 13:11:04.468817    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:43.951141424Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 13:11:04.468817    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:43.951152624Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:11:04.469040    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:43.951369125Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:11:04.469083    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 cri-dockerd[1278]: time="2024-03-18T13:09:43Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/066206d4c52cb784fe7c2001b5e196c6e3521560c412808e8d9ddf742aa008e4/resolv.conf as [nameserver 172.30.128.1]"
	I0318 13:11:04.469083    2404 command_runner.go:130] > Mar 18 13:09:44 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:44.020194457Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 13:11:04.469083    2404 command_runner.go:130] > Mar 18 13:09:44 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:44.020684858Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 13:11:04.469083    2404 command_runner.go:130] > Mar 18 13:09:44 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:44.023241167Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:11:04.469159    2404 command_runner.go:130] > Mar 18 13:09:44 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:44.023675469Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:11:04.469159    2404 command_runner.go:130] > Mar 18 13:09:44 multinode-894400 cri-dockerd[1278]: time="2024-03-18T13:09:44Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/bc7236a19957e321c1961c944824f2b4624bd7a289ab4ecefe33a08d4af88e2b/resolv.conf as [nameserver 172.30.128.1]"
	I0318 13:11:04.469201    2404 command_runner.go:130] > Mar 18 13:09:44 multinode-894400 cri-dockerd[1278]: time="2024-03-18T13:09:44Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/6fb3325d3c1005ffbbbfe7b136924ed5ff0c71db51f79a50f7179c108c238d47/resolv.conf as [nameserver 172.30.128.1]"
	I0318 13:11:04.469201    2404 command_runner.go:130] > Mar 18 13:09:44 multinode-894400 cri-dockerd[1278]: time="2024-03-18T13:09:44Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/354f3c44a34fcbd9ca7826066f2b4c40b3a6551aa632389815c5d24e85e0472b/resolv.conf as [nameserver 172.30.128.1]"
	I0318 13:11:04.469201    2404 command_runner.go:130] > Mar 18 13:09:44 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:44.396374926Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 13:11:04.469201    2404 command_runner.go:130] > Mar 18 13:09:44 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:44.396436126Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 13:11:04.469276    2404 command_runner.go:130] > Mar 18 13:09:44 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:44.396447326Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:11:04.469276    2404 command_runner.go:130] > Mar 18 13:09:44 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:44.396626927Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:11:04.469276    2404 command_runner.go:130] > Mar 18 13:09:44 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:44.467642467Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 13:11:04.469276    2404 command_runner.go:130] > Mar 18 13:09:44 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:44.467879868Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 13:11:04.469276    2404 command_runner.go:130] > Mar 18 13:09:44 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:44.468180469Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:11:04.469276    2404 command_runner.go:130] > Mar 18 13:09:44 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:44.468559970Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:11:04.469276    2404 command_runner.go:130] > Mar 18 13:09:44 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:44.476573097Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 13:11:04.469276    2404 command_runner.go:130] > Mar 18 13:09:44 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:44.476618697Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 13:11:04.469276    2404 command_runner.go:130] > Mar 18 13:09:44 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:44.476631197Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:11:04.469276    2404 command_runner.go:130] > Mar 18 13:09:44 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:44.476702797Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:11:04.469276    2404 command_runner.go:130] > Mar 18 13:09:44 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:44.482324416Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 13:11:04.469276    2404 command_runner.go:130] > Mar 18 13:09:44 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:44.482501517Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 13:11:04.469276    2404 command_runner.go:130] > Mar 18 13:09:44 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:44.482648417Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:11:04.469276    2404 command_runner.go:130] > Mar 18 13:09:44 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:44.482918618Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:11:04.469276    2404 command_runner.go:130] > Mar 18 13:09:48 multinode-894400 cri-dockerd[1278]: time="2024-03-18T13:09:48Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	I0318 13:11:04.469276    2404 command_runner.go:130] > Mar 18 13:09:49 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:49.545677603Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 13:11:04.469276    2404 command_runner.go:130] > Mar 18 13:09:49 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:49.548609313Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 13:11:04.469276    2404 command_runner.go:130] > Mar 18 13:09:49 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:49.548646013Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:11:04.469276    2404 command_runner.go:130] > Mar 18 13:09:49 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:49.549168715Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:11:04.469276    2404 command_runner.go:130] > Mar 18 13:09:49 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:49.592129660Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 13:11:04.469276    2404 command_runner.go:130] > Mar 18 13:09:49 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:49.592185160Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 13:11:04.469276    2404 command_runner.go:130] > Mar 18 13:09:49 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:49.592195760Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:11:04.469276    2404 command_runner.go:130] > Mar 18 13:09:49 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:49.592280460Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:11:04.469276    2404 command_runner.go:130] > Mar 18 13:09:49 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:49.615117337Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 13:11:04.470237    2404 command_runner.go:130] > Mar 18 13:09:49 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:49.615393238Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 13:11:04.470286    2404 command_runner.go:130] > Mar 18 13:09:49 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:49.615610139Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:11:04.470337    2404 command_runner.go:130] > Mar 18 13:09:49 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:49.621669759Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:11:04.470337    2404 command_runner.go:130] > Mar 18 13:09:49 multinode-894400 cri-dockerd[1278]: time="2024-03-18T13:09:49Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a9f21749669fe37a2d5bee9e7f45bf84eca6ae55870de8ac18bc774db7f81643/resolv.conf as [nameserver 172.30.128.1]"
	I0318 13:11:04.470420    2404 command_runner.go:130] > Mar 18 13:09:49 multinode-894400 cri-dockerd[1278]: time="2024-03-18T13:09:49Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/41035eff3b7db97e56ba25be141bf644acea4ddcb9d92ba3b421e6fa855a09af/resolv.conf as [nameserver 172.30.128.1]"
	I0318 13:11:04.470458    2404 command_runner.go:130] > Mar 18 13:09:49 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:49.995795822Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 13:11:04.470513    2404 command_runner.go:130] > Mar 18 13:09:49 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:49.995895422Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 13:11:04.470513    2404 command_runner.go:130] > Mar 18 13:09:49 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:49.995916522Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:11:04.470560    2404 command_runner.go:130] > Mar 18 13:09:49 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:49.996021523Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:11:04.470560    2404 command_runner.go:130] > Mar 18 13:09:50 multinode-894400 cri-dockerd[1278]: time="2024-03-18T13:09:50Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/86d74dec812cfd4842de7473d45ff320bfb2283d09d2d05eee29e45cbde9f5e9/resolv.conf as [nameserver 172.30.128.1]"
	I0318 13:11:04.470617    2404 command_runner.go:130] > Mar 18 13:09:50 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:50.171141514Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 13:11:04.470617    2404 command_runner.go:130] > Mar 18 13:09:50 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:50.171335814Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 13:11:04.470694    2404 command_runner.go:130] > Mar 18 13:09:50 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:50.171461415Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:11:04.470730    2404 command_runner.go:130] > Mar 18 13:09:50 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:50.171764216Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:11:04.470760    2404 command_runner.go:130] > Mar 18 13:09:50 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:50.391481057Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 13:11:04.470760    2404 command_runner.go:130] > Mar 18 13:09:50 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:50.391826158Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 13:11:04.470806    2404 command_runner.go:130] > Mar 18 13:09:50 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:50.391990059Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:11:04.470845    2404 command_runner.go:130] > Mar 18 13:09:50 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:50.393600364Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:11:04.470845    2404 command_runner.go:130] > Mar 18 13:10:20 multinode-894400 dockerd[1052]: time="2024-03-18T13:10:20.550892922Z" level=info msg="ignoring event" container=46c0cf90d385f0a29de6e795bc03f2d1837ea1819ef2389992a26aaf77e63264 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0318 13:11:04.470845    2404 command_runner.go:130] > Mar 18 13:10:20 multinode-894400 dockerd[1058]: time="2024-03-18T13:10:20.551487227Z" level=info msg="shim disconnected" id=46c0cf90d385f0a29de6e795bc03f2d1837ea1819ef2389992a26aaf77e63264 namespace=moby
	I0318 13:11:04.470920    2404 command_runner.go:130] > Mar 18 13:10:20 multinode-894400 dockerd[1058]: time="2024-03-18T13:10:20.551627628Z" level=warning msg="cleaning up after shim disconnected" id=46c0cf90d385f0a29de6e795bc03f2d1837ea1819ef2389992a26aaf77e63264 namespace=moby
	I0318 13:11:04.470956    2404 command_runner.go:130] > Mar 18 13:10:20 multinode-894400 dockerd[1058]: time="2024-03-18T13:10:20.551639828Z" level=info msg="cleaning up dead shim" namespace=moby
	I0318 13:11:04.470956    2404 command_runner.go:130] > Mar 18 13:10:35 multinode-894400 dockerd[1058]: time="2024-03-18T13:10:35.200900512Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 13:11:04.470994    2404 command_runner.go:130] > Mar 18 13:10:35 multinode-894400 dockerd[1058]: time="2024-03-18T13:10:35.202882722Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 13:11:04.471041    2404 command_runner.go:130] > Mar 18 13:10:35 multinode-894400 dockerd[1058]: time="2024-03-18T13:10:35.203198024Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:11:04.471041    2404 command_runner.go:130] > Mar 18 13:10:35 multinode-894400 dockerd[1058]: time="2024-03-18T13:10:35.203763327Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:11:04.471041    2404 command_runner.go:130] > Mar 18 13:10:53 multinode-894400 dockerd[1058]: time="2024-03-18T13:10:53.250783392Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 13:11:04.471041    2404 command_runner.go:130] > Mar 18 13:10:53 multinode-894400 dockerd[1058]: time="2024-03-18T13:10:53.252016097Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 13:11:04.471041    2404 command_runner.go:130] > Mar 18 13:10:53 multinode-894400 dockerd[1058]: time="2024-03-18T13:10:53.252234698Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:11:04.471041    2404 command_runner.go:130] > Mar 18 13:10:53 multinode-894400 dockerd[1058]: time="2024-03-18T13:10:53.252566299Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:11:04.471041    2404 command_runner.go:130] > Mar 18 13:10:53 multinode-894400 dockerd[1058]: time="2024-03-18T13:10:53.259013124Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 13:11:04.471041    2404 command_runner.go:130] > Mar 18 13:10:53 multinode-894400 dockerd[1058]: time="2024-03-18T13:10:53.259187125Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 13:11:04.471041    2404 command_runner.go:130] > Mar 18 13:10:53 multinode-894400 dockerd[1058]: time="2024-03-18T13:10:53.259204725Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:11:04.471041    2404 command_runner.go:130] > Mar 18 13:10:53 multinode-894400 dockerd[1058]: time="2024-03-18T13:10:53.259319625Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:11:04.471041    2404 command_runner.go:130] > Mar 18 13:10:53 multinode-894400 cri-dockerd[1278]: time="2024-03-18T13:10:53Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/97583cc14f115cf8a4e90889b5f2beda90a81f97fd592e5e5acff8d35e305a59/resolv.conf as [nameserver 172.30.128.1]"
	I0318 13:11:04.471041    2404 command_runner.go:130] > Mar 18 13:10:53 multinode-894400 cri-dockerd[1278]: time="2024-03-18T13:10:53Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/e20878b8092c291820adeb66f1b491dcef85c0699c57800cced7d3530d2a07fb/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	I0318 13:11:04.471041    2404 command_runner.go:130] > Mar 18 13:10:53 multinode-894400 dockerd[1058]: time="2024-03-18T13:10:53.818847676Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 13:11:04.471041    2404 command_runner.go:130] > Mar 18 13:10:53 multinode-894400 dockerd[1058]: time="2024-03-18T13:10:53.818997976Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 13:11:04.471041    2404 command_runner.go:130] > Mar 18 13:10:53 multinode-894400 dockerd[1058]: time="2024-03-18T13:10:53.819021476Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:11:04.471041    2404 command_runner.go:130] > Mar 18 13:10:53 multinode-894400 dockerd[1058]: time="2024-03-18T13:10:53.819463578Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:11:04.471041    2404 command_runner.go:130] > Mar 18 13:10:53 multinode-894400 dockerd[1058]: time="2024-03-18T13:10:53.825706506Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 13:11:04.471041    2404 command_runner.go:130] > Mar 18 13:10:53 multinode-894400 dockerd[1058]: time="2024-03-18T13:10:53.825766006Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 13:11:04.471041    2404 command_runner.go:130] > Mar 18 13:10:53 multinode-894400 dockerd[1058]: time="2024-03-18T13:10:53.825780706Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:11:04.471041    2404 command_runner.go:130] > Mar 18 13:10:53 multinode-894400 dockerd[1058]: time="2024-03-18T13:10:53.825864707Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:11:04.471041    2404 command_runner.go:130] > Mar 18 13:10:56 multinode-894400 dockerd[1052]: 2024/03/18 13:10:56 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0318 13:11:04.471041    2404 command_runner.go:130] > Mar 18 13:10:56 multinode-894400 dockerd[1052]: 2024/03/18 13:10:56 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0318 13:11:04.471041    2404 command_runner.go:130] > Mar 18 13:10:57 multinode-894400 dockerd[1052]: 2024/03/18 13:10:57 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0318 13:11:04.471041    2404 command_runner.go:130] > Mar 18 13:10:57 multinode-894400 dockerd[1052]: 2024/03/18 13:10:57 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0318 13:11:04.471041    2404 command_runner.go:130] > Mar 18 13:10:57 multinode-894400 dockerd[1052]: 2024/03/18 13:10:57 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0318 13:11:04.471581    2404 command_runner.go:130] > Mar 18 13:10:57 multinode-894400 dockerd[1052]: 2024/03/18 13:10:57 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0318 13:11:04.471581    2404 command_runner.go:130] > Mar 18 13:10:57 multinode-894400 dockerd[1052]: 2024/03/18 13:10:57 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0318 13:11:04.471581    2404 command_runner.go:130] > Mar 18 13:10:57 multinode-894400 dockerd[1052]: 2024/03/18 13:10:57 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0318 13:11:04.471581    2404 command_runner.go:130] > Mar 18 13:10:57 multinode-894400 dockerd[1052]: 2024/03/18 13:10:57 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0318 13:11:04.471754    2404 command_runner.go:130] > Mar 18 13:10:57 multinode-894400 dockerd[1052]: 2024/03/18 13:10:57 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0318 13:11:04.471754    2404 command_runner.go:130] > Mar 18 13:10:57 multinode-894400 dockerd[1052]: 2024/03/18 13:10:57 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0318 13:11:04.471820    2404 command_runner.go:130] > Mar 18 13:10:57 multinode-894400 dockerd[1052]: 2024/03/18 13:10:57 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0318 13:11:04.471820    2404 command_runner.go:130] > Mar 18 13:11:00 multinode-894400 dockerd[1052]: 2024/03/18 13:11:00 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0318 13:11:04.471956    2404 command_runner.go:130] > Mar 18 13:11:00 multinode-894400 dockerd[1052]: 2024/03/18 13:11:00 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0318 13:11:04.471956    2404 command_runner.go:130] > Mar 18 13:11:00 multinode-894400 dockerd[1052]: 2024/03/18 13:11:00 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0318 13:11:04.471998    2404 command_runner.go:130] > Mar 18 13:11:00 multinode-894400 dockerd[1052]: 2024/03/18 13:11:00 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0318 13:11:04.472059    2404 command_runner.go:130] > Mar 18 13:11:00 multinode-894400 dockerd[1052]: 2024/03/18 13:11:00 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0318 13:11:04.472059    2404 command_runner.go:130] > Mar 18 13:11:01 multinode-894400 dockerd[1052]: 2024/03/18 13:11:01 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0318 13:11:04.472059    2404 command_runner.go:130] > Mar 18 13:11:01 multinode-894400 dockerd[1052]: 2024/03/18 13:11:01 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0318 13:11:04.472059    2404 command_runner.go:130] > Mar 18 13:11:01 multinode-894400 dockerd[1052]: 2024/03/18 13:11:01 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0318 13:11:04.472059    2404 command_runner.go:130] > Mar 18 13:11:01 multinode-894400 dockerd[1052]: 2024/03/18 13:11:01 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0318 13:11:04.472059    2404 command_runner.go:130] > Mar 18 13:11:01 multinode-894400 dockerd[1052]: 2024/03/18 13:11:01 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0318 13:11:04.472059    2404 command_runner.go:130] > Mar 18 13:11:01 multinode-894400 dockerd[1052]: 2024/03/18 13:11:01 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0318 13:11:04.472059    2404 command_runner.go:130] > Mar 18 13:11:01 multinode-894400 dockerd[1052]: 2024/03/18 13:11:01 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0318 13:11:04.472059    2404 command_runner.go:130] > Mar 18 13:11:04 multinode-894400 dockerd[1052]: 2024/03/18 13:11:04 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0318 13:11:04.472059    2404 command_runner.go:130] > Mar 18 13:11:04 multinode-894400 dockerd[1052]: 2024/03/18 13:11:04 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0318 13:11:04.500630    2404 logs.go:123] Gathering logs for kube-proxy [9335855aab63] ...
	I0318 13:11:04.500630    2404 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9335855aab63"
	I0318 13:11:04.529018    2404 command_runner.go:130] ! I0318 12:47:42.888603       1 server_others.go:69] "Using iptables proxy"
	I0318 13:11:04.529018    2404 command_runner.go:130] ! I0318 12:47:42.909658       1 node.go:141] Successfully retrieved node IP: 172.30.129.141
	I0318 13:11:04.529018    2404 command_runner.go:130] ! I0318 12:47:42.965774       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0318 13:11:04.529018    2404 command_runner.go:130] ! I0318 12:47:42.965824       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0318 13:11:04.529018    2404 command_runner.go:130] ! I0318 12:47:42.983172       1 server_others.go:152] "Using iptables Proxier"
	I0318 13:11:04.529387    2404 command_runner.go:130] ! I0318 12:47:42.983221       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0318 13:11:04.529387    2404 command_runner.go:130] ! I0318 12:47:42.983471       1 server.go:846] "Version info" version="v1.28.4"
	I0318 13:11:04.529387    2404 command_runner.go:130] ! I0318 12:47:42.983484       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 13:11:04.529387    2404 command_runner.go:130] ! I0318 12:47:42.987719       1 config.go:188] "Starting service config controller"
	I0318 13:11:04.529387    2404 command_runner.go:130] ! I0318 12:47:42.987733       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0318 13:11:04.529387    2404 command_runner.go:130] ! I0318 12:47:42.987775       1 config.go:97] "Starting endpoint slice config controller"
	I0318 13:11:04.529517    2404 command_runner.go:130] ! I0318 12:47:42.987781       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0318 13:11:04.529517    2404 command_runner.go:130] ! I0318 12:47:42.988298       1 config.go:315] "Starting node config controller"
	I0318 13:11:04.529600    2404 command_runner.go:130] ! I0318 12:47:42.988306       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0318 13:11:04.529600    2404 command_runner.go:130] ! I0318 12:47:43.088485       1 shared_informer.go:318] Caches are synced for service config
	I0318 13:11:04.529600    2404 command_runner.go:130] ! I0318 12:47:43.088594       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0318 13:11:04.529635    2404 command_runner.go:130] ! I0318 12:47:43.088517       1 shared_informer.go:318] Caches are synced for node config
	I0318 13:11:04.532124    2404 logs.go:123] Gathering logs for kube-controller-manager [4ad6784a187d] ...
	I0318 13:11:04.532176    2404 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ad6784a187d"
	I0318 13:11:04.565867    2404 command_runner.go:130] ! I0318 13:09:46.053304       1 serving.go:348] Generated self-signed cert in-memory
	I0318 13:11:04.565867    2404 command_runner.go:130] ! I0318 13:09:46.598188       1 controllermanager.go:189] "Starting" version="v1.28.4"
	I0318 13:11:04.565867    2404 command_runner.go:130] ! I0318 13:09:46.598275       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 13:11:04.565867    2404 command_runner.go:130] ! I0318 13:09:46.600550       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0318 13:11:04.565867    2404 command_runner.go:130] ! I0318 13:09:46.600856       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0318 13:11:04.565867    2404 command_runner.go:130] ! I0318 13:09:46.601228       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0318 13:11:04.565867    2404 command_runner.go:130] ! I0318 13:09:46.601416       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0318 13:11:04.565867    2404 command_runner.go:130] ! I0318 13:09:50.365580       1 shared_informer.go:311] Waiting for caches to sync for tokens
	I0318 13:11:04.565867    2404 command_runner.go:130] ! I0318 13:09:50.380467       1 controllermanager.go:642] "Started controller" controller="certificatesigningrequest-approving-controller"
	I0318 13:11:04.565867    2404 command_runner.go:130] ! I0318 13:09:50.380609       1 certificate_controller.go:115] "Starting certificate controller" name="csrapproving"
	I0318 13:11:04.565867    2404 command_runner.go:130] ! I0318 13:09:50.380622       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrapproving
	I0318 13:11:04.565867    2404 command_runner.go:130] ! I0318 13:09:50.396606       1 controllermanager.go:642] "Started controller" controller="certificatesigningrequest-cleaner-controller"
	I0318 13:11:04.565867    2404 command_runner.go:130] ! I0318 13:09:50.396766       1 cleaner.go:83] "Starting CSR cleaner controller"
	I0318 13:11:04.565867    2404 command_runner.go:130] ! I0318 13:09:50.466364       1 shared_informer.go:318] Caches are synced for tokens
	I0318 13:11:04.565867    2404 command_runner.go:130] ! I0318 13:10:00.425018       1 range_allocator.go:111] "No Secondary Service CIDR provided. Skipping filtering out secondary service addresses"
	I0318 13:11:04.565867    2404 command_runner.go:130] ! I0318 13:10:00.425185       1 controllermanager.go:642] "Started controller" controller="node-ipam-controller"
	I0318 13:11:04.565867    2404 command_runner.go:130] ! I0318 13:10:00.425608       1 node_ipam_controller.go:162] "Starting ipam controller"
	I0318 13:11:04.565867    2404 command_runner.go:130] ! I0318 13:10:00.425649       1 shared_informer.go:311] Waiting for caches to sync for node
	I0318 13:11:04.565867    2404 command_runner.go:130] ! I0318 13:10:00.429368       1 controllermanager.go:642] "Started controller" controller="root-ca-certificate-publisher-controller"
	I0318 13:11:04.566410    2404 command_runner.go:130] ! I0318 13:10:00.429570       1 publisher.go:102] "Starting root CA cert publisher controller"
	I0318 13:11:04.566410    2404 command_runner.go:130] ! I0318 13:10:00.429653       1 shared_informer.go:311] Waiting for caches to sync for crt configmap
	I0318 13:11:04.566410    2404 command_runner.go:130] ! I0318 13:10:00.432615       1 controllermanager.go:642] "Started controller" controller="endpointslice-mirroring-controller"
	I0318 13:11:04.566504    2404 command_runner.go:130] ! I0318 13:10:00.435149       1 endpointslicemirroring_controller.go:223] "Starting EndpointSliceMirroring controller"
	I0318 13:11:04.566504    2404 command_runner.go:130] ! I0318 13:10:00.435476       1 shared_informer.go:311] Waiting for caches to sync for endpoint_slice_mirroring
	I0318 13:11:04.566504    2404 command_runner.go:130] ! I0318 13:10:00.435957       1 controllermanager.go:642] "Started controller" controller="ttl-controller"
	I0318 13:11:04.566504    2404 command_runner.go:130] ! I0318 13:10:00.436324       1 ttl_controller.go:124] "Starting TTL controller"
	I0318 13:11:04.566504    2404 command_runner.go:130] ! I0318 13:10:00.436534       1 shared_informer.go:311] Waiting for caches to sync for TTL
	I0318 13:11:04.566504    2404 command_runner.go:130] ! E0318 13:10:00.440226       1 core.go:92] "Failed to start service controller" err="WARNING: no cloud provider provided, services of type LoadBalancer will fail"
	I0318 13:11:04.566504    2404 command_runner.go:130] ! I0318 13:10:00.440586       1 controllermanager.go:620] "Warning: skipping controller" controller="service-lb-controller"
	I0318 13:11:04.566504    2404 command_runner.go:130] ! E0318 13:10:00.443615       1 core.go:213] "Failed to start cloud node lifecycle controller" err="no cloud provider provided"
	I0318 13:11:04.566504    2404 command_runner.go:130] ! I0318 13:10:00.443912       1 controllermanager.go:620] "Warning: skipping controller" controller="cloud-node-lifecycle-controller"
	I0318 13:11:04.566504    2404 command_runner.go:130] ! I0318 13:10:00.446716       1 controllermanager.go:642] "Started controller" controller="ttl-after-finished-controller"
	I0318 13:11:04.566504    2404 command_runner.go:130] ! I0318 13:10:00.446764       1 ttlafterfinished_controller.go:109] "Starting TTL after finished controller"
	I0318 13:11:04.566504    2404 command_runner.go:130] ! I0318 13:10:00.447388       1 shared_informer.go:311] Waiting for caches to sync for TTL after finished
	I0318 13:11:04.566504    2404 command_runner.go:130] ! I0318 13:10:00.450136       1 controllermanager.go:642] "Started controller" controller="replicationcontroller-controller"
	I0318 13:11:04.566504    2404 command_runner.go:130] ! I0318 13:10:00.450514       1 replica_set.go:214] "Starting controller" name="replicationcontroller"
	I0318 13:11:04.566504    2404 command_runner.go:130] ! I0318 13:10:00.450816       1 shared_informer.go:311] Waiting for caches to sync for ReplicationController
	I0318 13:11:04.566504    2404 command_runner.go:130] ! I0318 13:10:00.482128       1 controllermanager.go:642] "Started controller" controller="namespace-controller"
	I0318 13:11:04.566504    2404 command_runner.go:130] ! I0318 13:10:00.482431       1 namespace_controller.go:197] "Starting namespace controller"
	I0318 13:11:04.566504    2404 command_runner.go:130] ! I0318 13:10:00.482564       1 shared_informer.go:311] Waiting for caches to sync for namespace
	I0318 13:11:04.566504    2404 command_runner.go:130] ! I0318 13:10:00.485138       1 controllermanager.go:642] "Started controller" controller="token-cleaner-controller"
	I0318 13:11:04.566504    2404 command_runner.go:130] ! I0318 13:10:00.485477       1 tokencleaner.go:112] "Starting token cleaner controller"
	I0318 13:11:04.566504    2404 command_runner.go:130] ! I0318 13:10:00.485637       1 shared_informer.go:311] Waiting for caches to sync for token_cleaner
	I0318 13:11:04.566504    2404 command_runner.go:130] ! I0318 13:10:00.485765       1 shared_informer.go:318] Caches are synced for token_cleaner
	I0318 13:11:04.566504    2404 command_runner.go:130] ! I0318 13:10:00.487736       1 controllermanager.go:642] "Started controller" controller="pod-garbage-collector-controller"
	I0318 13:11:04.566504    2404 command_runner.go:130] ! I0318 13:10:00.488836       1 gc_controller.go:101] "Starting GC controller"
	I0318 13:11:04.566504    2404 command_runner.go:130] ! I0318 13:10:00.489018       1 shared_informer.go:311] Waiting for caches to sync for GC
	I0318 13:11:04.566504    2404 command_runner.go:130] ! I0318 13:10:00.490586       1 controllermanager.go:642] "Started controller" controller="cronjob-controller"
	I0318 13:11:04.566504    2404 command_runner.go:130] ! I0318 13:10:00.491164       1 cronjob_controllerv2.go:139] "Starting cronjob controller v2"
	I0318 13:11:04.566504    2404 command_runner.go:130] ! I0318 13:10:00.491311       1 shared_informer.go:311] Waiting for caches to sync for cronjob
	I0318 13:11:04.566504    2404 command_runner.go:130] ! I0318 13:10:00.494562       1 controllermanager.go:642] "Started controller" controller="persistentvolume-binder-controller"
	I0318 13:11:04.566504    2404 command_runner.go:130] ! I0318 13:10:00.495002       1 pv_controller_base.go:319] "Starting persistent volume controller"
	I0318 13:11:04.566504    2404 command_runner.go:130] ! I0318 13:10:00.495133       1 shared_informer.go:311] Waiting for caches to sync for persistent volume
	I0318 13:11:04.566504    2404 command_runner.go:130] ! I0318 13:10:00.497694       1 controllermanager.go:642] "Started controller" controller="persistentvolume-expander-controller"
	I0318 13:11:04.566504    2404 command_runner.go:130] ! I0318 13:10:00.497986       1 expand_controller.go:328] "Starting expand controller"
	I0318 13:11:04.566504    2404 command_runner.go:130] ! I0318 13:10:00.498025       1 shared_informer.go:311] Waiting for caches to sync for expand
	I0318 13:11:04.566504    2404 command_runner.go:130] ! I0318 13:10:00.500933       1 controllermanager.go:642] "Started controller" controller="clusterrole-aggregation-controller"
	I0318 13:11:04.566504    2404 command_runner.go:130] ! I0318 13:10:00.502880       1 clusterroleaggregation_controller.go:189] "Starting ClusterRoleAggregator controller"
	I0318 13:11:04.567065    2404 command_runner.go:130] ! I0318 13:10:00.503102       1 shared_informer.go:311] Waiting for caches to sync for ClusterRoleAggregator
	I0318 13:11:04.567272    2404 command_runner.go:130] ! I0318 13:10:00.506760       1 controllermanager.go:642] "Started controller" controller="disruption-controller"
	I0318 13:11:04.567272    2404 command_runner.go:130] ! I0318 13:10:00.507227       1 disruption.go:433] "Sending events to api server."
	I0318 13:11:04.567272    2404 command_runner.go:130] ! I0318 13:10:00.507302       1 disruption.go:444] "Starting disruption controller"
	I0318 13:11:04.567347    2404 command_runner.go:130] ! I0318 13:10:00.507366       1 shared_informer.go:311] Waiting for caches to sync for disruption
	I0318 13:11:04.567347    2404 command_runner.go:130] ! I0318 13:10:00.509815       1 controllermanager.go:642] "Started controller" controller="deployment-controller"
	I0318 13:11:04.567347    2404 command_runner.go:130] ! I0318 13:10:00.510402       1 deployment_controller.go:168] "Starting controller" controller="deployment"
	I0318 13:11:04.567401    2404 command_runner.go:130] ! I0318 13:10:00.510478       1 shared_informer.go:311] Waiting for caches to sync for deployment
	I0318 13:11:04.567401    2404 command_runner.go:130] ! I0318 13:10:00.514582       1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-kubelet-serving"
	I0318 13:11:04.567439    2404 command_runner.go:130] ! I0318 13:10:00.514842       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I0318 13:11:04.567462    2404 command_runner.go:130] ! I0318 13:10:00.514832       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0318 13:11:04.567462    2404 command_runner.go:130] ! I0318 13:10:00.517859       1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-kubelet-client"
	I0318 13:11:04.567462    2404 command_runner.go:130] ! I0318 13:10:00.518134       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I0318 13:11:04.567462    2404 command_runner.go:130] ! I0318 13:10:00.518434       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0318 13:11:04.567462    2404 command_runner.go:130] ! I0318 13:10:00.519400       1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-kube-apiserver-client"
	I0318 13:11:04.567579    2404 command_runner.go:130] ! I0318 13:10:00.519576       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I0318 13:11:04.567579    2404 command_runner.go:130] ! I0318 13:10:00.519729       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0318 13:11:04.567579    2404 command_runner.go:130] ! I0318 13:10:00.519883       1 controllermanager.go:642] "Started controller" controller="certificatesigningrequest-signing-controller"
	I0318 13:11:04.567579    2404 command_runner.go:130] ! I0318 13:10:00.519902       1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-legacy-unknown"
	I0318 13:11:04.567579    2404 command_runner.go:130] ! I0318 13:10:00.520909       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0318 13:11:04.567579    2404 command_runner.go:130] ! I0318 13:10:00.519914       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0318 13:11:04.567710    2404 command_runner.go:130] ! I0318 13:10:00.524690       1 controllermanager.go:642] "Started controller" controller="persistentvolume-attach-detach-controller"
	I0318 13:11:04.567710    2404 command_runner.go:130] ! I0318 13:10:00.524967       1 attach_detach_controller.go:337] "Starting attach detach controller"
	I0318 13:11:04.567710    2404 command_runner.go:130] ! I0318 13:10:00.525267       1 shared_informer.go:311] Waiting for caches to sync for attach detach
	I0318 13:11:04.567710    2404 command_runner.go:130] ! I0318 13:10:00.528248       1 controllermanager.go:642] "Started controller" controller="serviceaccount-controller"
	I0318 13:11:04.567710    2404 command_runner.go:130] ! I0318 13:10:00.528509       1 serviceaccounts_controller.go:111] "Starting service account controller"
	I0318 13:11:04.567710    2404 command_runner.go:130] ! I0318 13:10:00.528721       1 shared_informer.go:311] Waiting for caches to sync for service account
	I0318 13:11:04.567710    2404 command_runner.go:130] ! I0318 13:10:00.532254       1 controllermanager.go:642] "Started controller" controller="daemonset-controller"
	I0318 13:11:04.567710    2404 command_runner.go:130] ! I0318 13:10:00.532687       1 daemon_controller.go:291] "Starting daemon sets controller"
	I0318 13:11:04.567827    2404 command_runner.go:130] ! I0318 13:10:00.532717       1 shared_informer.go:311] Waiting for caches to sync for daemon sets
	I0318 13:11:04.567827    2404 command_runner.go:130] ! I0318 13:10:00.544900       1 controllermanager.go:642] "Started controller" controller="horizontal-pod-autoscaler-controller"
	I0318 13:11:04.567827    2404 command_runner.go:130] ! I0318 13:10:00.545135       1 horizontal.go:200] "Starting HPA controller"
	I0318 13:11:04.567827    2404 command_runner.go:130] ! I0318 13:10:00.545195       1 shared_informer.go:311] Waiting for caches to sync for HPA
	I0318 13:11:04.567827    2404 command_runner.go:130] ! I0318 13:10:00.547641       1 controllermanager.go:642] "Started controller" controller="bootstrap-signer-controller"
	I0318 13:11:04.567827    2404 command_runner.go:130] ! I0318 13:10:00.548078       1 shared_informer.go:311] Waiting for caches to sync for bootstrap_signer
	I0318 13:11:04.567827    2404 command_runner.go:130] ! I0318 13:10:00.550784       1 node_lifecycle_controller.go:431] "Controller will reconcile labels"
	I0318 13:11:04.567827    2404 command_runner.go:130] ! I0318 13:10:00.551368       1 controllermanager.go:642] "Started controller" controller="node-lifecycle-controller"
	I0318 13:11:04.568044    2404 command_runner.go:130] ! I0318 13:10:00.551557       1 core.go:228] "Warning: configure-cloud-routes is set, but no cloud provider specified. Will not configure cloud provider routes."
	I0318 13:11:04.568044    2404 command_runner.go:130] ! I0318 13:10:00.551931       1 controllermanager.go:620] "Warning: skipping controller" controller="node-route-controller"
	I0318 13:11:04.568044    2404 command_runner.go:130] ! I0318 13:10:00.551452       1 node_lifecycle_controller.go:465] "Sending events to api server"
	I0318 13:11:04.568044    2404 command_runner.go:130] ! I0318 13:10:00.553190       1 node_lifecycle_controller.go:476] "Starting node controller"
	I0318 13:11:04.568044    2404 command_runner.go:130] ! I0318 13:10:00.553856       1 shared_informer.go:311] Waiting for caches to sync for taint
	I0318 13:11:04.568044    2404 command_runner.go:130] ! I0318 13:10:00.554970       1 controllermanager.go:642] "Started controller" controller="persistentvolumeclaim-protection-controller"
	I0318 13:11:04.568044    2404 command_runner.go:130] ! I0318 13:10:00.555558       1 pvc_protection_controller.go:102] "Starting PVC protection controller"
	I0318 13:11:04.568044    2404 command_runner.go:130] ! I0318 13:10:00.555718       1 shared_informer.go:311] Waiting for caches to sync for PVC protection
	I0318 13:11:04.568044    2404 command_runner.go:130] ! I0318 13:10:00.558545       1 controllermanager.go:642] "Started controller" controller="endpointslice-controller"
	I0318 13:11:04.568044    2404 command_runner.go:130] ! I0318 13:10:00.558805       1 endpointslice_controller.go:264] "Starting endpoint slice controller"
	I0318 13:11:04.568044    2404 command_runner.go:130] ! I0318 13:10:00.558956       1 shared_informer.go:311] Waiting for caches to sync for endpoint_slice
	I0318 13:11:04.568206    2404 command_runner.go:130] ! W0318 13:10:00.765746       1 shared_informer.go:593] resyncPeriod 13h51m37.636447347s is smaller than resyncCheckPeriod 22h45m34.293132222s and the informer has already started. Changing it to 22h45m34.293132222s
	I0318 13:11:04.568206    2404 command_runner.go:130] ! I0318 13:10:00.765905       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="serviceaccounts"
	I0318 13:11:04.568206    2404 command_runner.go:130] ! I0318 13:10:00.766015       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="cronjobs.batch"
	I0318 13:11:04.568206    2404 command_runner.go:130] ! I0318 13:10:00.766141       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="daemonsets.apps"
	I0318 13:11:04.568206    2404 command_runner.go:130] ! I0318 13:10:00.766231       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="jobs.batch"
	I0318 13:11:04.568206    2404 command_runner.go:130] ! I0318 13:10:00.767946       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="networkpolicies.networking.k8s.io"
	I0318 13:11:04.568206    2404 command_runner.go:130] ! I0318 13:10:00.768138       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="poddisruptionbudgets.policy"
	I0318 13:11:04.568337    2404 command_runner.go:130] ! I0318 13:10:00.768175       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="csistoragecapacities.storage.k8s.io"
	I0318 13:11:04.568337    2404 command_runner.go:130] ! I0318 13:10:00.768271       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="limitranges"
	I0318 13:11:04.568337    2404 command_runner.go:130] ! I0318 13:10:00.768411       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="controllerrevisions.apps"
	I0318 13:11:04.568337    2404 command_runner.go:130] ! I0318 13:10:00.768529       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="horizontalpodautoscalers.autoscaling"
	I0318 13:11:04.568337    2404 command_runner.go:130] ! I0318 13:10:00.768565       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="ingresses.networking.k8s.io"
	I0318 13:11:04.568337    2404 command_runner.go:130] ! I0318 13:10:00.768633       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="endpointslices.discovery.k8s.io"
	I0318 13:11:04.568481    2404 command_runner.go:130] ! W0318 13:10:00.768841       1 shared_informer.go:593] resyncPeriod 17h39m7.901162259s is smaller than resyncCheckPeriod 22h45m34.293132222s and the informer has already started. Changing it to 22h45m34.293132222s
	I0318 13:11:04.568481    2404 command_runner.go:130] ! I0318 13:10:00.769020       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="podtemplates"
	I0318 13:11:04.568481    2404 command_runner.go:130] ! I0318 13:10:00.769077       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="replicasets.apps"
	I0318 13:11:04.568481    2404 command_runner.go:130] ! I0318 13:10:00.769115       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="deployments.apps"
	I0318 13:11:04.568481    2404 command_runner.go:130] ! I0318 13:10:00.769206       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="rolebindings.rbac.authorization.k8s.io"
	I0318 13:11:04.568481    2404 command_runner.go:130] ! I0318 13:10:00.769280       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="leases.coordination.k8s.io"
	I0318 13:11:04.568586    2404 command_runner.go:130] ! I0318 13:10:00.769427       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="endpoints"
	I0318 13:11:04.568586    2404 command_runner.go:130] ! I0318 13:10:00.769509       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="statefulsets.apps"
	I0318 13:11:04.568586    2404 command_runner.go:130] ! I0318 13:10:00.769668       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="roles.rbac.authorization.k8s.io"
	I0318 13:11:04.568586    2404 command_runner.go:130] ! I0318 13:10:00.769816       1 resource_quota_controller.go:294] "Starting resource quota controller"
	I0318 13:11:04.568586    2404 command_runner.go:130] ! I0318 13:10:00.769832       1 shared_informer.go:311] Waiting for caches to sync for resource quota
	I0318 13:11:04.568586    2404 command_runner.go:130] ! I0318 13:10:00.769855       1 resource_quota_monitor.go:305] "QuotaMonitor running"
	I0318 13:11:04.568586    2404 command_runner.go:130] ! I0318 13:10:00.769714       1 controllermanager.go:642] "Started controller" controller="resourcequota-controller"
	I0318 13:11:04.568790    2404 command_runner.go:130] ! I0318 13:10:00.906184       1 controllermanager.go:642] "Started controller" controller="garbage-collector-controller"
	I0318 13:11:04.568790    2404 command_runner.go:130] ! I0318 13:10:00.906404       1 garbagecollector.go:155] "Starting controller" controller="garbagecollector"
	I0318 13:11:04.568790    2404 command_runner.go:130] ! I0318 13:10:00.906702       1 shared_informer.go:311] Waiting for caches to sync for garbage collector
	I0318 13:11:04.568790    2404 command_runner.go:130] ! I0318 13:10:00.906740       1 graph_builder.go:294] "Running" component="GraphBuilder"
	I0318 13:11:04.568853    2404 command_runner.go:130] ! I0318 13:10:00.956245       1 controllermanager.go:642] "Started controller" controller="job-controller"
	I0318 13:11:04.568853    2404 command_runner.go:130] ! I0318 13:10:00.956457       1 job_controller.go:226] "Starting job controller"
	I0318 13:11:04.568897    2404 command_runner.go:130] ! I0318 13:10:00.956765       1 shared_informer.go:311] Waiting for caches to sync for job
	I0318 13:11:04.568897    2404 command_runner.go:130] ! I0318 13:10:01.056144       1 controllermanager.go:642] "Started controller" controller="persistentvolume-protection-controller"
	I0318 13:11:04.568897    2404 command_runner.go:130] ! I0318 13:10:01.056251       1 pv_protection_controller.go:78] "Starting PV protection controller"
	I0318 13:11:04.568897    2404 command_runner.go:130] ! I0318 13:10:01.056576       1 shared_informer.go:311] Waiting for caches to sync for PV protection
	I0318 13:11:04.568897    2404 command_runner.go:130] ! I0318 13:10:01.156303       1 controllermanager.go:642] "Started controller" controller="ephemeral-volume-controller"
	I0318 13:11:04.569008    2404 command_runner.go:130] ! I0318 13:10:01.156762       1 controller.go:169] "Starting ephemeral volume controller"
	I0318 13:11:04.569008    2404 command_runner.go:130] ! I0318 13:10:01.156852       1 shared_informer.go:311] Waiting for caches to sync for ephemeral
	I0318 13:11:04.569008    2404 command_runner.go:130] ! I0318 13:10:01.205282       1 controllermanager.go:642] "Started controller" controller="endpoints-controller"
	I0318 13:11:04.569008    2404 command_runner.go:130] ! I0318 13:10:01.205353       1 endpoints_controller.go:174] "Starting endpoint controller"
	I0318 13:11:04.569008    2404 command_runner.go:130] ! I0318 13:10:01.205368       1 shared_informer.go:311] Waiting for caches to sync for endpoint
	I0318 13:11:04.569080    2404 command_runner.go:130] ! I0318 13:10:01.256513       1 controllermanager.go:642] "Started controller" controller="statefulset-controller"
	I0318 13:11:04.569080    2404 command_runner.go:130] ! I0318 13:10:01.256828       1 stateful_set.go:161] "Starting stateful set controller"
	I0318 13:11:04.569080    2404 command_runner.go:130] ! I0318 13:10:01.256867       1 shared_informer.go:311] Waiting for caches to sync for stateful set
	I0318 13:11:04.569080    2404 command_runner.go:130] ! I0318 13:10:01.306581       1 controllermanager.go:642] "Started controller" controller="replicaset-controller"
	I0318 13:11:04.569080    2404 command_runner.go:130] ! I0318 13:10:01.306969       1 replica_set.go:214] "Starting controller" name="replicaset"
	I0318 13:11:04.569080    2404 command_runner.go:130] ! I0318 13:10:01.307156       1 shared_informer.go:311] Waiting for caches to sync for ReplicaSet
	I0318 13:11:04.569189    2404 command_runner.go:130] ! I0318 13:10:01.317298       1 shared_informer.go:311] Waiting for caches to sync for resource quota
	I0318 13:11:04.569189    2404 command_runner.go:130] ! I0318 13:10:01.349149       1 shared_informer.go:318] Caches are synced for TTL after finished
	I0318 13:11:04.569189    2404 command_runner.go:130] ! I0318 13:10:01.369957       1 shared_informer.go:311] Waiting for caches to sync for garbage collector
	I0318 13:11:04.569189    2404 command_runner.go:130] ! I0318 13:10:01.371629       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-894400\" does not exist"
	I0318 13:11:04.569189    2404 command_runner.go:130] ! I0318 13:10:01.371840       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-894400-m02\" does not exist"
	I0318 13:11:04.569189    2404 command_runner.go:130] ! I0318 13:10:01.372556       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-894400-m02"
	I0318 13:11:04.569303    2404 command_runner.go:130] ! I0318 13:10:01.372879       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-894400-m03\" does not exist"
	I0318 13:11:04.569303    2404 command_runner.go:130] ! I0318 13:10:01.373004       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-894400-m02"
	I0318 13:11:04.569303    2404 command_runner.go:130] ! I0318 13:10:01.380690       1 shared_informer.go:318] Caches are synced for certificate-csrapproving
	I0318 13:11:04.569303    2404 command_runner.go:130] ! I0318 13:10:01.383858       1 shared_informer.go:318] Caches are synced for namespace
	I0318 13:11:04.569303    2404 command_runner.go:130] ! I0318 13:10:01.390400       1 shared_informer.go:318] Caches are synced for GC
	I0318 13:11:04.569303    2404 command_runner.go:130] ! I0318 13:10:01.391669       1 shared_informer.go:318] Caches are synced for cronjob
	I0318 13:11:04.569303    2404 command_runner.go:130] ! I0318 13:10:01.398208       1 shared_informer.go:318] Caches are synced for expand
	I0318 13:11:04.569303    2404 command_runner.go:130] ! I0318 13:10:01.403691       1 shared_informer.go:318] Caches are synced for ClusterRoleAggregator
	I0318 13:11:04.569435    2404 command_runner.go:130] ! I0318 13:10:01.406154       1 shared_informer.go:318] Caches are synced for endpoint
	I0318 13:11:04.569435    2404 command_runner.go:130] ! I0318 13:10:01.407387       1 shared_informer.go:318] Caches are synced for ReplicaSet
	I0318 13:11:04.569435    2404 command_runner.go:130] ! I0318 13:10:01.407463       1 shared_informer.go:318] Caches are synced for disruption
	I0318 13:11:04.569435    2404 command_runner.go:130] ! I0318 13:10:01.411470       1 shared_informer.go:318] Caches are synced for deployment
	I0318 13:11:04.569435    2404 command_runner.go:130] ! I0318 13:10:01.415591       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-serving
	I0318 13:11:04.569435    2404 command_runner.go:130] ! I0318 13:10:01.419985       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0318 13:11:04.569435    2404 command_runner.go:130] ! I0318 13:10:01.420028       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-client
	I0318 13:11:04.569553    2404 command_runner.go:130] ! I0318 13:10:01.422567       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-legacy-unknown
	I0318 13:11:04.569553    2404 command_runner.go:130] ! I0318 13:10:01.426386       1 shared_informer.go:318] Caches are synced for node
	I0318 13:11:04.569553    2404 command_runner.go:130] ! I0318 13:10:01.426502       1 range_allocator.go:174] "Sending events to api server"
	I0318 13:11:04.569553    2404 command_runner.go:130] ! I0318 13:10:01.426637       1 range_allocator.go:178] "Starting range CIDR allocator"
	I0318 13:11:04.569553    2404 command_runner.go:130] ! I0318 13:10:01.426705       1 shared_informer.go:311] Waiting for caches to sync for cidrallocator
	I0318 13:11:04.569553    2404 command_runner.go:130] ! I0318 13:10:01.426892       1 shared_informer.go:318] Caches are synced for cidrallocator
	I0318 13:11:04.569553    2404 command_runner.go:130] ! I0318 13:10:01.426546       1 shared_informer.go:318] Caches are synced for attach detach
	I0318 13:11:04.569553    2404 command_runner.go:130] ! I0318 13:10:01.429986       1 shared_informer.go:318] Caches are synced for service account
	I0318 13:11:04.569677    2404 command_runner.go:130] ! I0318 13:10:01.430014       1 shared_informer.go:318] Caches are synced for crt configmap
	I0318 13:11:04.569677    2404 command_runner.go:130] ! I0318 13:10:01.433506       1 shared_informer.go:318] Caches are synced for daemon sets
	I0318 13:11:04.569677    2404 command_runner.go:130] ! I0318 13:10:01.437710       1 shared_informer.go:318] Caches are synced for TTL
	I0318 13:11:04.569677    2404 command_runner.go:130] ! I0318 13:10:01.445429       1 shared_informer.go:318] Caches are synced for HPA
	I0318 13:11:04.569677    2404 command_runner.go:130] ! I0318 13:10:01.448863       1 shared_informer.go:318] Caches are synced for bootstrap_signer
	I0318 13:11:04.569677    2404 command_runner.go:130] ! I0318 13:10:01.451599       1 shared_informer.go:318] Caches are synced for ReplicationController
	I0318 13:11:04.569677    2404 command_runner.go:130] ! I0318 13:10:01.454157       1 shared_informer.go:318] Caches are synced for taint
	I0318 13:11:04.569677    2404 command_runner.go:130] ! I0318 13:10:01.454304       1 node_lifecycle_controller.go:1225] "Initializing eviction metric for zone" zone=""
	I0318 13:11:04.569677    2404 command_runner.go:130] ! I0318 13:10:01.454496       1 taint_manager.go:205] "Starting NoExecuteTaintManager"
	I0318 13:11:04.569797    2404 command_runner.go:130] ! I0318 13:10:01.454532       1 taint_manager.go:210] "Sending events to api server"
	I0318 13:11:04.569797    2404 command_runner.go:130] ! I0318 13:10:01.455374       1 event.go:307] "Event occurred" object="multinode-894400" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-894400 event: Registered Node multinode-894400 in Controller"
	I0318 13:11:04.569797    2404 command_runner.go:130] ! I0318 13:10:01.455390       1 event.go:307] "Event occurred" object="multinode-894400-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-894400-m02 event: Registered Node multinode-894400-m02 in Controller"
	I0318 13:11:04.569797    2404 command_runner.go:130] ! I0318 13:10:01.455400       1 event.go:307] "Event occurred" object="multinode-894400-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-894400-m03 event: Registered Node multinode-894400-m03 in Controller"
	I0318 13:11:04.569797    2404 command_runner.go:130] ! I0318 13:10:01.456700       1 shared_informer.go:318] Caches are synced for PV protection
	I0318 13:11:04.569896    2404 command_runner.go:130] ! I0318 13:10:01.456719       1 shared_informer.go:318] Caches are synced for PVC protection
	I0318 13:11:04.569896    2404 command_runner.go:130] ! I0318 13:10:01.457835       1 shared_informer.go:318] Caches are synced for stateful set
	I0318 13:11:04.569896    2404 command_runner.go:130] ! I0318 13:10:01.457861       1 shared_informer.go:318] Caches are synced for job
	I0318 13:11:04.569896    2404 command_runner.go:130] ! I0318 13:10:01.458132       1 shared_informer.go:318] Caches are synced for ephemeral
	I0318 13:11:04.569896    2404 command_runner.go:130] ! I0318 13:10:01.499926       1 shared_informer.go:318] Caches are synced for persistent volume
	I0318 13:11:04.569896    2404 command_runner.go:130] ! I0318 13:10:01.502022       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-894400"
	I0318 13:11:04.569896    2404 command_runner.go:130] ! I0318 13:10:01.502582       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-894400-m02"
	I0318 13:11:04.569896    2404 command_runner.go:130] ! I0318 13:10:01.502665       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-894400-m03"
	I0318 13:11:04.570013    2404 command_runner.go:130] ! I0318 13:10:01.505439       1 node_lifecycle_controller.go:1071] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I0318 13:11:04.570013    2404 command_runner.go:130] ! I0318 13:10:01.518153       1 shared_informer.go:318] Caches are synced for resource quota
	I0318 13:11:04.570013    2404 command_runner.go:130] ! I0318 13:10:01.524442       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="116.887006ms"
	I0318 13:11:04.570013    2404 command_runner.go:130] ! I0318 13:10:01.526447       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="62.302µs"
	I0318 13:11:04.570013    2404 command_runner.go:130] ! I0318 13:10:01.532190       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="124.57225ms"
	I0318 13:11:04.570013    2404 command_runner.go:130] ! I0318 13:10:01.532535       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="61.501µs"
	I0318 13:11:04.570013    2404 command_runner.go:130] ! I0318 13:10:01.536870       1 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring
	I0318 13:11:04.570114    2404 command_runner.go:130] ! I0318 13:10:01.559571       1 shared_informer.go:318] Caches are synced for endpoint_slice
	I0318 13:11:04.570114    2404 command_runner.go:130] ! I0318 13:10:01.576497       1 shared_informer.go:318] Caches are synced for resource quota
	I0318 13:11:04.570114    2404 command_runner.go:130] ! I0318 13:10:01.970420       1 shared_informer.go:318] Caches are synced for garbage collector
	I0318 13:11:04.570114    2404 command_runner.go:130] ! I0318 13:10:02.008120       1 shared_informer.go:318] Caches are synced for garbage collector
	I0318 13:11:04.570114    2404 command_runner.go:130] ! I0318 13:10:02.008146       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0318 13:11:04.570114    2404 command_runner.go:130] ! I0318 13:10:23.798396       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-894400-m02"
	I0318 13:11:04.570114    2404 command_runner.go:130] ! I0318 13:10:26.538088       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68-456tm" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod kube-system/coredns-5dd5756b68-456tm"
	I0318 13:11:04.570217    2404 command_runner.go:130] ! I0318 13:10:26.538124       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6-c2997" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-5b5d89c9d6-c2997"
	I0318 13:11:04.570217    2404 command_runner.go:130] ! I0318 13:10:26.538134       1 event.go:307] "Event occurred" object="kube-system/storage-provisioner" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod kube-system/storage-provisioner"
	I0318 13:11:04.570217    2404 command_runner.go:130] ! I0318 13:10:41.556645       1 event.go:307] "Event occurred" object="multinode-894400-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-894400-m02 status is now: NodeNotReady"
	I0318 13:11:04.570217    2404 command_runner.go:130] ! I0318 13:10:41.569274       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6-8btgf" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0318 13:11:04.570316    2404 command_runner.go:130] ! I0318 13:10:41.592766       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="22.447202ms"
	I0318 13:11:04.570316    2404 command_runner.go:130] ! I0318 13:10:41.593427       1 event.go:307] "Event occurred" object="kube-system/kindnet-k5lpg" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0318 13:11:04.570316    2404 command_runner.go:130] ! I0318 13:10:41.595199       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="39.101µs"
	I0318 13:11:04.570316    2404 command_runner.go:130] ! I0318 13:10:41.617007       1 event.go:307] "Event occurred" object="kube-system/kube-proxy-8bdmn" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0318 13:11:04.570316    2404 command_runner.go:130] ! I0318 13:10:54.102255       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="18.438427ms"
	I0318 13:11:04.570316    2404 command_runner.go:130] ! I0318 13:10:54.102713       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="266.302µs"
	I0318 13:11:04.570420    2404 command_runner.go:130] ! I0318 13:10:54.115993       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="210.701µs"
	I0318 13:11:04.570420    2404 command_runner.go:130] ! I0318 13:10:55.131550       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="19.807636ms"
	I0318 13:11:04.570420    2404 command_runner.go:130] ! I0318 13:10:55.131763       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="44.301µs"
	I0318 13:11:04.584621    2404 logs.go:123] Gathering logs for dmesg ...
	I0318 13:11:04.584621    2404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:11:04.604641    2404 command_runner.go:130] > [Mar18 13:08] You have booted with nomodeset. This means your GPU drivers are DISABLED
	I0318 13:11:04.604782    2404 command_runner.go:130] > [  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	I0318 13:11:04.604782    2404 command_runner.go:130] > [  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	I0318 13:11:04.604782    2404 command_runner.go:130] > [  +0.127438] RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
	I0318 13:11:04.604855    2404 command_runner.go:130] > [  +0.022457] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	I0318 13:11:04.604855    2404 command_runner.go:130] > [  +0.000000] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	I0318 13:11:04.604855    2404 command_runner.go:130] > [  +0.000000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	I0318 13:11:04.604855    2404 command_runner.go:130] > [  +0.054196] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	I0318 13:11:04.604937    2404 command_runner.go:130] > [  +0.018424] * Found PM-Timer Bug on the chipset. Due to workarounds for a bug,
	I0318 13:11:04.604937    2404 command_runner.go:130] >               * this clock source is slow. Consider trying other clock sources
	I0318 13:11:04.604937    2404 command_runner.go:130] > [  +4.800453] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	I0318 13:11:04.604937    2404 command_runner.go:130] > [  +1.267636] psmouse serio1: trackpoint: failed to get extended button data, assuming 3 buttons
	I0318 13:11:04.605006    2404 command_runner.go:130] > [  +1.056053] systemd-fstab-generator[113]: Ignoring "noauto" option for root device
	I0318 13:11:04.605006    2404 command_runner.go:130] > [  +6.778211] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	I0318 13:11:04.605006    2404 command_runner.go:130] > [  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	I0318 13:11:04.605006    2404 command_runner.go:130] > [  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	I0318 13:11:04.605006    2404 command_runner.go:130] > [Mar18 13:09] systemd-fstab-generator[643]: Ignoring "noauto" option for root device
	I0318 13:11:04.605073    2404 command_runner.go:130] > [  +0.160643] systemd-fstab-generator[654]: Ignoring "noauto" option for root device
	I0318 13:11:04.605073    2404 command_runner.go:130] > [ +25.236158] systemd-fstab-generator[979]: Ignoring "noauto" option for root device
	I0318 13:11:04.605073    2404 command_runner.go:130] > [  +0.093711] kauditd_printk_skb: 73 callbacks suppressed
	I0318 13:11:04.605073    2404 command_runner.go:130] > [  +0.488652] systemd-fstab-generator[1018]: Ignoring "noauto" option for root device
	I0318 13:11:04.605073    2404 command_runner.go:130] > [  +0.198307] systemd-fstab-generator[1030]: Ignoring "noauto" option for root device
	I0318 13:11:04.605191    2404 command_runner.go:130] > [  +0.213157] systemd-fstab-generator[1044]: Ignoring "noauto" option for root device
	I0318 13:11:04.605222    2404 command_runner.go:130] > [  +2.866452] systemd-fstab-generator[1231]: Ignoring "noauto" option for root device
	I0318 13:11:04.605222    2404 command_runner.go:130] > [  +0.191537] systemd-fstab-generator[1243]: Ignoring "noauto" option for root device
	I0318 13:11:04.605222    2404 command_runner.go:130] > [  +0.163904] systemd-fstab-generator[1255]: Ignoring "noauto" option for root device
	I0318 13:11:04.605222    2404 command_runner.go:130] > [  +0.280650] systemd-fstab-generator[1270]: Ignoring "noauto" option for root device
	I0318 13:11:04.605283    2404 command_runner.go:130] > [  +0.822319] systemd-fstab-generator[1393]: Ignoring "noauto" option for root device
	I0318 13:11:04.605283    2404 command_runner.go:130] > [  +0.094744] kauditd_printk_skb: 205 callbacks suppressed
	I0318 13:11:04.605283    2404 command_runner.go:130] > [  +3.177820] systemd-fstab-generator[1525]: Ignoring "noauto" option for root device
	I0318 13:11:04.605283    2404 command_runner.go:130] > [  +1.898187] kauditd_printk_skb: 64 callbacks suppressed
	I0318 13:11:04.605283    2404 command_runner.go:130] > [  +5.227041] kauditd_printk_skb: 10 callbacks suppressed
	I0318 13:11:04.605283    2404 command_runner.go:130] > [  +4.065141] systemd-fstab-generator[3089]: Ignoring "noauto" option for root device
	I0318 13:11:04.605283    2404 command_runner.go:130] > [Mar18 13:10] kauditd_printk_skb: 70 callbacks suppressed
	I0318 13:11:04.607197    2404 logs.go:123] Gathering logs for coredns [3c3bc988c74c] ...
	I0318 13:11:04.607197    2404 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c3bc988c74c"
	I0318 13:11:04.642071    2404 command_runner.go:130] > .:53
	I0318 13:11:04.642071    2404 command_runner.go:130] > [INFO] plugin/reload: Running configuration SHA512 = fa2b120b3007aba3ef70c07d73473d14986404bfc5683cceeb89e8950d5ace894afca7d43365324975f99d1b2da3d6473c722fe82795d39a4ee8b4b969b7aa71
	I0318 13:11:04.642071    2404 command_runner.go:130] > CoreDNS-1.10.1
	I0318 13:11:04.642071    2404 command_runner.go:130] > linux/amd64, go1.20, 055b2c3
	I0318 13:11:04.642246    2404 command_runner.go:130] > [INFO] 127.0.0.1:47251 - 801 "HINFO IN 2968659138506762197.6766024496084331989. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.051583557s
	I0318 13:11:04.642355    2404 logs.go:123] Gathering logs for coredns [693a64f7472f] ...
	I0318 13:11:04.642355    2404 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 693a64f7472f"
	I0318 13:11:04.670277    2404 command_runner.go:130] > .:53
	I0318 13:11:04.670277    2404 command_runner.go:130] > [INFO] plugin/reload: Running configuration SHA512 = fa2b120b3007aba3ef70c07d73473d14986404bfc5683cceeb89e8950d5ace894afca7d43365324975f99d1b2da3d6473c722fe82795d39a4ee8b4b969b7aa71
	I0318 13:11:04.670277    2404 command_runner.go:130] > CoreDNS-1.10.1
	I0318 13:11:04.670277    2404 command_runner.go:130] > linux/amd64, go1.20, 055b2c3
	I0318 13:11:04.670510    2404 command_runner.go:130] > [INFO] 127.0.0.1:33426 - 38858 "HINFO IN 7345450223813584863.4065419873971828575. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.030234917s
	I0318 13:11:04.670510    2404 command_runner.go:130] > [INFO] 10.244.1.2:56777 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000311303s
	I0318 13:11:04.670510    2404 command_runner.go:130] > [INFO] 10.244.1.2:58024 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.098073876s
	I0318 13:11:04.670510    2404 command_runner.go:130] > [INFO] 10.244.1.2:57941 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd 60 0.154978742s
	I0318 13:11:04.670510    2404 command_runner.go:130] > [INFO] 10.244.1.2:42576 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 1.156414777s
	I0318 13:11:04.670510    2404 command_runner.go:130] > [INFO] 10.244.0.3:43391 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000152802s
	I0318 13:11:04.670510    2404 command_runner.go:130] > [INFO] 10.244.0.3:52523 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 140 0.000121101s
	I0318 13:11:04.670510    2404 command_runner.go:130] > [INFO] 10.244.0.3:36187 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd 60 0.000058401s
	I0318 13:11:04.670510    2404 command_runner.go:130] > [INFO] 10.244.0.3:33451 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,aa,rd,ra 140 0.000055s
	I0318 13:11:04.670654    2404 command_runner.go:130] > [INFO] 10.244.1.2:42180 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000097901s
	I0318 13:11:04.670654    2404 command_runner.go:130] > [INFO] 10.244.1.2:60616 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.142731308s
	I0318 13:11:04.670654    2404 command_runner.go:130] > [INFO] 10.244.1.2:45190 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000152502s
	I0318 13:11:04.670654    2404 command_runner.go:130] > [INFO] 10.244.1.2:55984 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000150102s
	I0318 13:11:04.670654    2404 command_runner.go:130] > [INFO] 10.244.1.2:47725 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.037970075s
	I0318 13:11:04.670654    2404 command_runner.go:130] > [INFO] 10.244.1.2:55620 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000104901s
	I0318 13:11:04.670654    2404 command_runner.go:130] > [INFO] 10.244.1.2:60349 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000189802s
	I0318 13:11:04.670731    2404 command_runner.go:130] > [INFO] 10.244.1.2:44081 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000089501s
	I0318 13:11:04.670731    2404 command_runner.go:130] > [INFO] 10.244.0.3:52580 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000182502s
	I0318 13:11:04.670731    2404 command_runner.go:130] > [INFO] 10.244.0.3:60982 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.0000727s
	I0318 13:11:04.670731    2404 command_runner.go:130] > [INFO] 10.244.0.3:53685 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000081s
	I0318 13:11:04.670731    2404 command_runner.go:130] > [INFO] 10.244.0.3:38117 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000127701s
	I0318 13:11:04.670731    2404 command_runner.go:130] > [INFO] 10.244.0.3:38455 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000117101s
	I0318 13:11:04.670731    2404 command_runner.go:130] > [INFO] 10.244.0.3:50629 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000121702s
	I0318 13:11:04.670731    2404 command_runner.go:130] > [INFO] 10.244.0.3:33301 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0000487s
	I0318 13:11:04.670731    2404 command_runner.go:130] > [INFO] 10.244.0.3:38091 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000138402s
	I0318 13:11:04.670731    2404 command_runner.go:130] > [INFO] 10.244.1.2:43364 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000192902s
	I0318 13:11:04.670731    2404 command_runner.go:130] > [INFO] 10.244.1.2:42609 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000060701s
	I0318 13:11:04.670731    2404 command_runner.go:130] > [INFO] 10.244.1.2:36443 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000051301s
	I0318 13:11:04.670731    2404 command_runner.go:130] > [INFO] 10.244.1.2:56414 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0000526s
	I0318 13:11:04.670731    2404 command_runner.go:130] > [INFO] 10.244.0.3:50774 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000137201s
	I0318 13:11:04.670731    2404 command_runner.go:130] > [INFO] 10.244.0.3:43237 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000196902s
	I0318 13:11:04.670731    2404 command_runner.go:130] > [INFO] 10.244.0.3:38831 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000059901s
	I0318 13:11:04.670731    2404 command_runner.go:130] > [INFO] 10.244.0.3:56163 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000122801s
	I0318 13:11:04.670731    2404 command_runner.go:130] > [INFO] 10.244.1.2:58305 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000209602s
	I0318 13:11:04.670731    2404 command_runner.go:130] > [INFO] 10.244.1.2:58291 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000151202s
	I0318 13:11:04.670731    2404 command_runner.go:130] > [INFO] 10.244.1.2:33227 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000184302s
	I0318 13:11:04.670731    2404 command_runner.go:130] > [INFO] 10.244.1.2:58179 - 5 "PTR IN 1.128.30.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000152102s
	I0318 13:11:04.670731    2404 command_runner.go:130] > [INFO] 10.244.0.3:46943 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000104101s
	I0318 13:11:04.670731    2404 command_runner.go:130] > [INFO] 10.244.0.3:58018 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000107001s
	I0318 13:11:04.670731    2404 command_runner.go:130] > [INFO] 10.244.0.3:35353 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000119601s
	I0318 13:11:04.670731    2404 command_runner.go:130] > [INFO] 10.244.0.3:58763 - 5 "PTR IN 1.128.30.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000075701s
	I0318 13:11:04.670731    2404 command_runner.go:130] > [INFO] SIGTERM: Shutting down servers then terminating
	I0318 13:11:04.670731    2404 command_runner.go:130] > [INFO] plugin/health: Going into lameduck mode for 5s
	I0318 13:11:04.673698    2404 logs.go:123] Gathering logs for kube-proxy [163ccabc3882] ...
	I0318 13:11:04.673698    2404 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 163ccabc3882"
	I0318 13:11:04.703136    2404 command_runner.go:130] ! I0318 13:09:50.786718       1 server_others.go:69] "Using iptables proxy"
	I0318 13:11:04.703136    2404 command_runner.go:130] ! I0318 13:09:50.833991       1 node.go:141] Successfully retrieved node IP: 172.30.130.156
	I0318 13:11:04.703136    2404 command_runner.go:130] ! I0318 13:09:50.913665       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0318 13:11:04.703136    2404 command_runner.go:130] ! I0318 13:09:50.913704       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0318 13:11:04.703136    2404 command_runner.go:130] ! I0318 13:09:50.924640       1 server_others.go:152] "Using iptables Proxier"
	I0318 13:11:04.703136    2404 command_runner.go:130] ! I0318 13:09:50.925588       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0318 13:11:04.703136    2404 command_runner.go:130] ! I0318 13:09:50.926722       1 server.go:846] "Version info" version="v1.28.4"
	I0318 13:11:04.703136    2404 command_runner.go:130] ! I0318 13:09:50.926981       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 13:11:04.703136    2404 command_runner.go:130] ! I0318 13:09:50.938764       1 config.go:188] "Starting service config controller"
	I0318 13:11:04.703136    2404 command_runner.go:130] ! I0318 13:09:50.949206       1 config.go:97] "Starting endpoint slice config controller"
	I0318 13:11:04.703136    2404 command_runner.go:130] ! I0318 13:09:50.949220       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0318 13:11:04.703136    2404 command_runner.go:130] ! I0318 13:09:50.953299       1 config.go:315] "Starting node config controller"
	I0318 13:11:04.703136    2404 command_runner.go:130] ! I0318 13:09:50.979020       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0318 13:11:04.703136    2404 command_runner.go:130] ! I0318 13:09:50.990249       1 shared_informer.go:318] Caches are synced for node config
	I0318 13:11:04.703136    2404 command_runner.go:130] ! I0318 13:09:50.958488       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0318 13:11:04.703136    2404 command_runner.go:130] ! I0318 13:09:50.996356       1 shared_informer.go:318] Caches are synced for service config
	I0318 13:11:04.703136    2404 command_runner.go:130] ! I0318 13:09:51.051947       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0318 13:11:04.705804    2404 logs.go:123] Gathering logs for kindnet [c8e5ec25e910] ...
	I0318 13:11:04.705804    2404 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8e5ec25e910"
	I0318 13:11:04.733290    2404 command_runner.go:130] ! I0318 13:09:50.858529       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0318 13:11:04.733290    2404 command_runner.go:130] ! I0318 13:09:50.859271       1 main.go:107] hostIP = 172.30.130.156
	I0318 13:11:04.733655    2404 command_runner.go:130] ! podIP = 172.30.130.156
	I0318 13:11:04.733655    2404 command_runner.go:130] ! I0318 13:09:50.860380       1 main.go:116] setting mtu 1500 for CNI 
	I0318 13:11:04.733655    2404 command_runner.go:130] ! I0318 13:09:50.930132       1 main.go:146] kindnetd IP family: "ipv4"
	I0318 13:11:04.733655    2404 command_runner.go:130] ! I0318 13:09:50.933463       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0318 13:11:04.733725    2404 command_runner.go:130] ! I0318 13:10:21.283853       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: i/o timeout
	I0318 13:11:04.733725    2404 command_runner.go:130] ! I0318 13:10:21.335833       1 main.go:223] Handling node with IPs: map[172.30.130.156:{}]
	I0318 13:11:04.733725    2404 command_runner.go:130] ! I0318 13:10:21.335942       1 main.go:227] handling current node
	I0318 13:11:04.733725    2404 command_runner.go:130] ! I0318 13:10:21.336264       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:04.733725    2404 command_runner.go:130] ! I0318 13:10:21.336361       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:04.733725    2404 command_runner.go:130] ! I0318 13:10:21.336527       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 172.30.140.66 Flags: [] Table: 0} 
	I0318 13:11:04.733725    2404 command_runner.go:130] ! I0318 13:10:21.336670       1 main.go:223] Handling node with IPs: map[172.30.137.140:{}]
	I0318 13:11:04.733725    2404 command_runner.go:130] ! I0318 13:10:21.336680       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.3.0/24] 
	I0318 13:11:04.733725    2404 command_runner.go:130] ! I0318 13:10:21.336727       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 172.30.137.140 Flags: [] Table: 0} 
	I0318 13:11:04.733725    2404 command_runner.go:130] ! I0318 13:10:31.343996       1 main.go:223] Handling node with IPs: map[172.30.130.156:{}]
	I0318 13:11:04.733725    2404 command_runner.go:130] ! I0318 13:10:31.344324       1 main.go:227] handling current node
	I0318 13:11:04.733725    2404 command_runner.go:130] ! I0318 13:10:31.344341       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:04.733725    2404 command_runner.go:130] ! I0318 13:10:31.344682       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:04.733725    2404 command_runner.go:130] ! I0318 13:10:31.345062       1 main.go:223] Handling node with IPs: map[172.30.137.140:{}]
	I0318 13:11:04.733725    2404 command_runner.go:130] ! I0318 13:10:31.345087       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.3.0/24] 
	I0318 13:11:04.733725    2404 command_runner.go:130] ! I0318 13:10:41.357494       1 main.go:223] Handling node with IPs: map[172.30.130.156:{}]
	I0318 13:11:04.733725    2404 command_runner.go:130] ! I0318 13:10:41.357586       1 main.go:227] handling current node
	I0318 13:11:04.733725    2404 command_runner.go:130] ! I0318 13:10:41.357599       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:04.733725    2404 command_runner.go:130] ! I0318 13:10:41.357606       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:04.733725    2404 command_runner.go:130] ! I0318 13:10:41.357708       1 main.go:223] Handling node with IPs: map[172.30.137.140:{}]
	I0318 13:11:04.733725    2404 command_runner.go:130] ! I0318 13:10:41.357932       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.3.0/24] 
	I0318 13:11:04.733725    2404 command_runner.go:130] ! I0318 13:10:51.367560       1 main.go:223] Handling node with IPs: map[172.30.130.156:{}]
	I0318 13:11:04.733725    2404 command_runner.go:130] ! I0318 13:10:51.367661       1 main.go:227] handling current node
	I0318 13:11:04.733725    2404 command_runner.go:130] ! I0318 13:10:51.367675       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:04.733725    2404 command_runner.go:130] ! I0318 13:10:51.367684       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:04.733725    2404 command_runner.go:130] ! I0318 13:10:51.367956       1 main.go:223] Handling node with IPs: map[172.30.137.140:{}]
	I0318 13:11:04.733725    2404 command_runner.go:130] ! I0318 13:10:51.368281       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.3.0/24] 
	I0318 13:11:04.733725    2404 command_runner.go:130] ! I0318 13:11:01.381870       1 main.go:223] Handling node with IPs: map[172.30.130.156:{}]
	I0318 13:11:04.733725    2404 command_runner.go:130] ! I0318 13:11:01.381898       1 main.go:227] handling current node
	I0318 13:11:04.733725    2404 command_runner.go:130] ! I0318 13:11:01.381909       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:04.733725    2404 command_runner.go:130] ! I0318 13:11:01.381915       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:04.734264    2404 command_runner.go:130] ! I0318 13:11:01.382152       1 main.go:223] Handling node with IPs: map[172.30.137.140:{}]
	I0318 13:11:04.734264    2404 command_runner.go:130] ! I0318 13:11:01.382182       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.3.0/24] 
	I0318 13:11:04.736821    2404 logs.go:123] Gathering logs for kubelet ...
	I0318 13:11:04.736821    2404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:11:04.768714    2404 command_runner.go:130] > Mar 18 13:09:39 multinode-894400 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0318 13:11:04.768714    2404 command_runner.go:130] > Mar 18 13:09:39 multinode-894400 kubelet[1399]: I0318 13:09:39.912330    1399 server.go:467] "Kubelet version" kubeletVersion="v1.28.4"
	I0318 13:11:04.768784    2404 command_runner.go:130] > Mar 18 13:09:39 multinode-894400 kubelet[1399]: I0318 13:09:39.913472    1399 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 13:11:04.768784    2404 command_runner.go:130] > Mar 18 13:09:39 multinode-894400 kubelet[1399]: I0318 13:09:39.914280    1399 server.go:895] "Client rotation is on, will bootstrap in background"
	I0318 13:11:04.768784    2404 command_runner.go:130] > Mar 18 13:09:39 multinode-894400 kubelet[1399]: E0318 13:09:39.914469    1399 run.go:74] "command failed" err="failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory"
	I0318 13:11:04.768784    2404 command_runner.go:130] > Mar 18 13:09:39 multinode-894400 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	I0318 13:11:04.768784    2404 command_runner.go:130] > Mar 18 13:09:39 multinode-894400 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	I0318 13:11:04.768784    2404 command_runner.go:130] > Mar 18 13:09:40 multinode-894400 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
	I0318 13:11:04.768784    2404 command_runner.go:130] > Mar 18 13:09:40 multinode-894400 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I0318 13:11:04.768784    2404 command_runner.go:130] > Mar 18 13:09:40 multinode-894400 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0318 13:11:04.768784    2404 command_runner.go:130] > Mar 18 13:09:40 multinode-894400 kubelet[1455]: I0318 13:09:40.661100    1455 server.go:467] "Kubelet version" kubeletVersion="v1.28.4"
	I0318 13:11:04.768784    2404 command_runner.go:130] > Mar 18 13:09:40 multinode-894400 kubelet[1455]: I0318 13:09:40.661586    1455 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 13:11:04.768784    2404 command_runner.go:130] > Mar 18 13:09:40 multinode-894400 kubelet[1455]: I0318 13:09:40.662255    1455 server.go:895] "Client rotation is on, will bootstrap in background"
	I0318 13:11:04.768784    2404 command_runner.go:130] > Mar 18 13:09:40 multinode-894400 kubelet[1455]: E0318 13:09:40.662383    1455 run.go:74] "command failed" err="failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory"
	I0318 13:11:04.768784    2404 command_runner.go:130] > Mar 18 13:09:40 multinode-894400 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	I0318 13:11:04.768784    2404 command_runner.go:130] > Mar 18 13:09:40 multinode-894400 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	I0318 13:11:04.768784    2404 command_runner.go:130] > Mar 18 13:09:40 multinode-894400 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I0318 13:11:04.768784    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0318 13:11:04.768784    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: I0318 13:09:42.774439    1532 server.go:467] "Kubelet version" kubeletVersion="v1.28.4"
	I0318 13:11:04.768784    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: I0318 13:09:42.775083    1532 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 13:11:04.768784    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: I0318 13:09:42.775946    1532 server.go:895] "Client rotation is on, will bootstrap in background"
	I0318 13:11:04.768784    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: I0318 13:09:42.785429    1532 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	I0318 13:11:04.768784    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: I0318 13:09:42.801370    1532 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0318 13:11:04.768784    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: I0318 13:09:42.849790    1532 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
	I0318 13:11:04.768784    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: I0318 13:09:42.851652    1532 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
	I0318 13:11:04.769314    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: I0318 13:09:42.851916    1532 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
	I0318 13:11:04.769353    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: I0318 13:09:42.851957    1532 topology_manager.go:138] "Creating topology manager with none policy"
	I0318 13:11:04.769353    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: I0318 13:09:42.851967    1532 container_manager_linux.go:301] "Creating device plugin manager"
	I0318 13:11:04.769353    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: I0318 13:09:42.853347    1532 state_mem.go:36] "Initialized new in-memory state store"
	I0318 13:11:04.769446    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: I0318 13:09:42.855331    1532 kubelet.go:393] "Attempting to sync node with API server"
	I0318 13:11:04.769446    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: I0318 13:09:42.855456    1532 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests"
	I0318 13:11:04.769446    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: I0318 13:09:42.856520    1532 kubelet.go:309] "Adding apiserver pod source"
	I0318 13:11:04.769446    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: I0318 13:09:42.856554    1532 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
	I0318 13:11:04.769446    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: W0318 13:09:42.859153    1532 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-894400&limit=500&resourceVersion=0": dial tcp 172.30.130.156:8443: connect: connection refused
	I0318 13:11:04.769446    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: E0318 13:09:42.859647    1532 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-894400&limit=500&resourceVersion=0": dial tcp 172.30.130.156:8443: connect: connection refused
	I0318 13:11:04.769446    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: W0318 13:09:42.860993    1532 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.30.130.156:8443: connect: connection refused
	I0318 13:11:04.769446    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: E0318 13:09:42.861168    1532 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.30.130.156:8443: connect: connection refused
	I0318 13:11:04.769446    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: I0318 13:09:42.872782    1532 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="docker" version="25.0.4" apiVersion="v1"
	I0318 13:11:04.769446    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: W0318 13:09:42.875640    1532 probe.go:268] Flexvolume plugin directory at /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
	I0318 13:11:04.769446    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: I0318 13:09:42.876823    1532 server.go:1232] "Started kubelet"
	I0318 13:11:04.769446    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: I0318 13:09:42.878282    1532 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
	I0318 13:11:04.769446    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: I0318 13:09:42.879215    1532 server.go:462] "Adding debug handlers to kubelet server"
	I0318 13:11:04.769446    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: I0318 13:09:42.882881    1532 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10
	I0318 13:11:04.769446    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: I0318 13:09:42.883660    1532 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
	I0318 13:11:04.769446    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: I0318 13:09:42.878365    1532 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
	I0318 13:11:04.769446    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: E0318 13:09:42.886734    1532 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"multinode-894400.17bddddee5b23bca", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"multinode-894400", UID:"multinode-894400", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"multinode-894400"}, FirstTimestamp:time.Date(2024, time.March, 18, 13, 9, 42, 876797898, time.Local), LastTimestamp:time.Date(2024, time.March, 18, 13, 9, 42, 876797898, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"multinode-894400"}': 'Post "https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events": dial tcp 172.30.130.156:8443: connect: connection refused'(may retry after sleeping)
	I0318 13:11:04.769446    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: I0318 13:09:42.886969    1532 volume_manager.go:291] "Starting Kubelet Volume Manager"
	I0318 13:11:04.769446    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: I0318 13:09:42.887086    1532 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
	I0318 13:11:04.769446    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: W0318 13:09:42.907405    1532 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.30.130.156:8443: connect: connection refused
	I0318 13:11:04.769987    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: E0318 13:09:42.907883    1532 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.30.130.156:8443: connect: connection refused
	I0318 13:11:04.770027    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: E0318 13:09:42.910785    1532 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-894400?timeout=10s\": dial tcp 172.30.130.156:8443: connect: connection refused" interval="200ms"
	I0318 13:11:04.770027    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: I0318 13:09:42.959085    1532 reconciler_new.go:29] "Reconciler: start to sync state"
	I0318 13:11:04.770027    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: I0318 13:09:42.981490    1532 cpu_manager.go:214] "Starting CPU manager" policy="none"
	I0318 13:11:04.770144    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: I0318 13:09:42.981531    1532 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
	I0318 13:11:04.770164    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: I0318 13:09:42.981561    1532 state_mem.go:36] "Initialized new in-memory state store"
	I0318 13:11:04.770164    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: I0318 13:09:42.982644    1532 state_mem.go:88] "Updated default CPUSet" cpuSet=""
	I0318 13:11:04.770225    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: I0318 13:09:42.982700    1532 state_mem.go:96] "Updated CPUSet assignments" assignments={}
	I0318 13:11:04.770247    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: I0318 13:09:42.982728    1532 policy_none.go:49] "None policy: Start"
	I0318 13:11:04.770325    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: I0318 13:09:42.989705    1532 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
	I0318 13:11:04.770325    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: I0318 13:09:43.002857    1532 memory_manager.go:169] "Starting memorymanager" policy="None"
	I0318 13:11:04.770325    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: I0318 13:09:43.003620    1532 state_mem.go:35] "Initializing new in-memory state store"
	I0318 13:11:04.770325    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: I0318 13:09:43.004623    1532 state_mem.go:75] "Updated machine memory state"
	I0318 13:11:04.770409    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: I0318 13:09:43.006120    1532 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
	I0318 13:11:04.770409    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: I0318 13:09:43.007397    1532 status_manager.go:217] "Starting to sync pod status with apiserver"
	I0318 13:11:04.770409    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: I0318 13:09:43.008604    1532 kubelet.go:2303] "Starting kubelet main sync loop"
	I0318 13:11:04.770516    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: E0318 13:09:43.008971    1532 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
	I0318 13:11:04.770516    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: I0318 13:09:43.016115    1532 kubelet_node_status.go:70] "Attempting to register node" node="multinode-894400"
	I0318 13:11:04.770516    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: E0318 13:09:43.018685    1532 iptables.go:575] "Could not set up iptables canary" err=<
	I0318 13:11:04.770579    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	I0318 13:11:04.770603    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	I0318 13:11:04.770603    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]:         Perhaps ip6tables or your kernel needs to be upgraded.
	I0318 13:11:04.770603    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	I0318 13:11:04.770677    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: I0318 13:09:43.021241    1532 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
	I0318 13:11:04.770677    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: W0318 13:09:43.022840    1532 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.30.130.156:8443: connect: connection refused
	I0318 13:11:04.770677    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: E0318 13:09:43.022916    1532 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.30.130.156:8443: connect: connection refused
	I0318 13:11:04.770677    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: E0318 13:09:43.022979    1532 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.30.130.156:8443: connect: connection refused" node="multinode-894400"
	I0318 13:11:04.770677    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: I0318 13:09:43.023116    1532 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
	I0318 13:11:04.770677    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: E0318 13:09:43.041923    1532 eviction_manager.go:258] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"multinode-894400\" not found"
	I0318 13:11:04.770677    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: E0318 13:09:43.112352    1532 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-894400?timeout=10s\": dial tcp 172.30.130.156:8443: connect: connection refused" interval="400ms"
	I0318 13:11:04.770677    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: I0318 13:09:43.113553    1532 topology_manager.go:215] "Topology Admit Handler" podUID="1c745e9b917877b1ff3c90ed02e9a79a" podNamespace="kube-system" podName="kube-scheduler-multinode-894400"
	I0318 13:11:04.770677    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: I0318 13:09:43.126661    1532 topology_manager.go:215] "Topology Admit Handler" podUID="6096c2227c4230453f65f86ebdcd0d95" podNamespace="kube-system" podName="kube-apiserver-multinode-894400"
	I0318 13:11:04.770677    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: I0318 13:09:43.137838    1532 topology_manager.go:215] "Topology Admit Handler" podUID="d340aced56ba169ecac1e3ac58ad57fe" podNamespace="kube-system" podName="kube-controller-manager-multinode-894400"
	I0318 13:11:04.770677    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: I0318 13:09:43.154701    1532 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5485f509825d9272a84959cbcfbb4f0187be886867ba7bac76fa00a35e34bdd1"
	I0318 13:11:04.770677    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: I0318 13:09:43.154826    1532 topology_manager.go:215] "Topology Admit Handler" podUID="743a549b698f93b8586a236f83c90556" podNamespace="kube-system" podName="etcd-multinode-894400"
	I0318 13:11:04.771201    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: I0318 13:09:43.171660    1532 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d001e299e996b74f090c73f4e98605ef0d323a96826d23424e724ca8e7fe466a"
	I0318 13:11:04.771201    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: I0318 13:09:43.171681    1532 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="60e9cd749c8f67d0bc24596b26b654cf85a82055f89e14c4a14a4e9342f5fc9f"
	I0318 13:11:04.771201    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: I0318 13:09:43.171704    1532 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="acffce2e73842c3e46177a77ddd5a8d308b51daf062cac439cc487cc863c4226"
	I0318 13:11:04.771276    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: I0318 13:09:43.171714    1532 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="265b39e386cfa82eef9715aba314fbf8a9292776816cf86ed4099004698cb320"
	I0318 13:11:04.771276    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: I0318 13:09:43.171723    1532 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="220884cbf1f5b852987c5a28277a4914502f0623413c284054afa92791494c50"
	I0318 13:11:04.771276    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: I0318 13:09:43.171731    1532 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a47b1fb60692cee0c4ed89ecc511fa046c0873051f7daf026f1c5c6a3dfd7352"
	I0318 13:11:04.771276    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: I0318 13:09:43.172283    1532 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="82710777e700c4f2e71da911834959efc480f8ba2a526049f0f6c238947c5146"
	I0318 13:11:04.771276    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: I0318 13:09:43.186382    1532 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a23c1189be7c39f22c8a59ba41b198519894c9783ecad8dcda79fb8a76f05254"
	I0318 13:11:04.771276    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: I0318 13:09:43.231617    1532 kubelet_node_status.go:70] "Attempting to register node" node="multinode-894400"
	I0318 13:11:04.771276    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: E0318 13:09:43.233479    1532 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.30.130.156:8443: connect: connection refused" node="multinode-894400"
	I0318 13:11:04.771276    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: I0318 13:09:43.267903    1532 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1c745e9b917877b1ff3c90ed02e9a79a-kubeconfig\") pod \"kube-scheduler-multinode-894400\" (UID: \"1c745e9b917877b1ff3c90ed02e9a79a\") " pod="kube-system/kube-scheduler-multinode-894400"
	I0318 13:11:04.771276    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: I0318 13:09:43.268106    1532 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6096c2227c4230453f65f86ebdcd0d95-ca-certs\") pod \"kube-apiserver-multinode-894400\" (UID: \"6096c2227c4230453f65f86ebdcd0d95\") " pod="kube-system/kube-apiserver-multinode-894400"
	I0318 13:11:04.771276    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: I0318 13:09:43.268214    1532 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d340aced56ba169ecac1e3ac58ad57fe-ca-certs\") pod \"kube-controller-manager-multinode-894400\" (UID: \"d340aced56ba169ecac1e3ac58ad57fe\") " pod="kube-system/kube-controller-manager-multinode-894400"
	I0318 13:11:04.771276    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: I0318 13:09:43.268242    1532 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d340aced56ba169ecac1e3ac58ad57fe-kubeconfig\") pod \"kube-controller-manager-multinode-894400\" (UID: \"d340aced56ba169ecac1e3ac58ad57fe\") " pod="kube-system/kube-controller-manager-multinode-894400"
	I0318 13:11:04.771276    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: I0318 13:09:43.268269    1532 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d340aced56ba169ecac1e3ac58ad57fe-usr-share-ca-certificates\") pod \"kube-controller-manager-multinode-894400\" (UID: \"d340aced56ba169ecac1e3ac58ad57fe\") " pod="kube-system/kube-controller-manager-multinode-894400"
	I0318 13:11:04.771276    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: I0318 13:09:43.268295    1532 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/743a549b698f93b8586a236f83c90556-etcd-certs\") pod \"etcd-multinode-894400\" (UID: \"743a549b698f93b8586a236f83c90556\") " pod="kube-system/etcd-multinode-894400"
	I0318 13:11:04.771276    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: I0318 13:09:43.268330    1532 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/743a549b698f93b8586a236f83c90556-etcd-data\") pod \"etcd-multinode-894400\" (UID: \"743a549b698f93b8586a236f83c90556\") " pod="kube-system/etcd-multinode-894400"
	I0318 13:11:04.771276    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: I0318 13:09:43.268361    1532 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6096c2227c4230453f65f86ebdcd0d95-k8s-certs\") pod \"kube-apiserver-multinode-894400\" (UID: \"6096c2227c4230453f65f86ebdcd0d95\") " pod="kube-system/kube-apiserver-multinode-894400"
	I0318 13:11:04.771276    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: I0318 13:09:43.268423    1532 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6096c2227c4230453f65f86ebdcd0d95-usr-share-ca-certificates\") pod \"kube-apiserver-multinode-894400\" (UID: \"6096c2227c4230453f65f86ebdcd0d95\") " pod="kube-system/kube-apiserver-multinode-894400"
	I0318 13:11:04.771276    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: I0318 13:09:43.268445    1532 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d340aced56ba169ecac1e3ac58ad57fe-flexvolume-dir\") pod \"kube-controller-manager-multinode-894400\" (UID: \"d340aced56ba169ecac1e3ac58ad57fe\") " pod="kube-system/kube-controller-manager-multinode-894400"
	I0318 13:11:04.771797    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: I0318 13:09:43.268537    1532 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d340aced56ba169ecac1e3ac58ad57fe-k8s-certs\") pod \"kube-controller-manager-multinode-894400\" (UID: \"d340aced56ba169ecac1e3ac58ad57fe\") " pod="kube-system/kube-controller-manager-multinode-894400"
	I0318 13:11:04.771797    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: E0318 13:09:43.513563    1532 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-894400?timeout=10s\": dial tcp 172.30.130.156:8443: connect: connection refused" interval="800ms"
	I0318 13:11:04.771797    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: I0318 13:09:43.656950    1532 kubelet_node_status.go:70] "Attempting to register node" node="multinode-894400"
	I0318 13:11:04.771797    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: E0318 13:09:43.658595    1532 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.30.130.156:8443: connect: connection refused" node="multinode-894400"
	I0318 13:11:04.771943    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: W0318 13:09:43.917173    1532 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.30.130.156:8443: connect: connection refused
	I0318 13:11:04.771985    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: E0318 13:09:43.917511    1532 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.30.130.156:8443: connect: connection refused
	I0318 13:11:04.772016    2404 command_runner.go:130] > Mar 18 13:09:44 multinode-894400 kubelet[1532]: W0318 13:09:44.022640    1532 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.30.130.156:8443: connect: connection refused
	I0318 13:11:04.772016    2404 command_runner.go:130] > Mar 18 13:09:44 multinode-894400 kubelet[1532]: E0318 13:09:44.022973    1532 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.30.130.156:8443: connect: connection refused
	I0318 13:11:04.772079    2404 command_runner.go:130] > Mar 18 13:09:44 multinode-894400 kubelet[1532]: W0318 13:09:44.114653    1532 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-894400&limit=500&resourceVersion=0": dial tcp 172.30.130.156:8443: connect: connection refused
	I0318 13:11:04.772079    2404 command_runner.go:130] > Mar 18 13:09:44 multinode-894400 kubelet[1532]: E0318 13:09:44.114784    1532 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-894400&limit=500&resourceVersion=0": dial tcp 172.30.130.156:8443: connect: connection refused
	I0318 13:11:04.772155    2404 command_runner.go:130] > Mar 18 13:09:44 multinode-894400 kubelet[1532]: I0318 13:09:44.229821    1532 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="354f3c44a34fcbd9ca7826066f2b4c40b3a6551aa632389815c5d24e85e0472b"
	I0318 13:11:04.772155    2404 command_runner.go:130] > Mar 18 13:09:44 multinode-894400 kubelet[1532]: E0318 13:09:44.315351    1532 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-894400?timeout=10s\": dial tcp 172.30.130.156:8443: connect: connection refused" interval="1.6s"
	I0318 13:11:04.772155    2404 command_runner.go:130] > Mar 18 13:09:44 multinode-894400 kubelet[1532]: W0318 13:09:44.368370    1532 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.30.130.156:8443: connect: connection refused
	I0318 13:11:04.772155    2404 command_runner.go:130] > Mar 18 13:09:44 multinode-894400 kubelet[1532]: E0318 13:09:44.368575    1532 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.30.130.156:8443: connect: connection refused
	I0318 13:11:04.772155    2404 command_runner.go:130] > Mar 18 13:09:44 multinode-894400 kubelet[1532]: I0318 13:09:44.495686    1532 kubelet_node_status.go:70] "Attempting to register node" node="multinode-894400"
	I0318 13:11:04.772155    2404 command_runner.go:130] > Mar 18 13:09:44 multinode-894400 kubelet[1532]: E0318 13:09:44.496847    1532 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.30.130.156:8443: connect: connection refused" node="multinode-894400"
	I0318 13:11:04.772155    2404 command_runner.go:130] > Mar 18 13:09:46 multinode-894400 kubelet[1532]: I0318 13:09:46.112867    1532 kubelet_node_status.go:70] "Attempting to register node" node="multinode-894400"
	I0318 13:11:04.772155    2404 command_runner.go:130] > Mar 18 13:09:48 multinode-894400 kubelet[1532]: I0318 13:09:48.454296    1532 kubelet_node_status.go:108] "Node was previously registered" node="multinode-894400"
	I0318 13:11:04.772155    2404 command_runner.go:130] > Mar 18 13:09:48 multinode-894400 kubelet[1532]: I0318 13:09:48.454504    1532 kubelet_node_status.go:73] "Successfully registered node" node="multinode-894400"
	I0318 13:11:04.772155    2404 command_runner.go:130] > Mar 18 13:09:48 multinode-894400 kubelet[1532]: I0318 13:09:48.466215    1532 kuberuntime_manager.go:1528] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	I0318 13:11:04.772155    2404 command_runner.go:130] > Mar 18 13:09:48 multinode-894400 kubelet[1532]: I0318 13:09:48.467399    1532 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	I0318 13:11:04.772155    2404 command_runner.go:130] > Mar 18 13:09:48 multinode-894400 kubelet[1532]: I0318 13:09:48.481710    1532 setters.go:552] "Node became not ready" node="multinode-894400" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-03-18T13:09:48Z","lastTransitionTime":"2024-03-18T13:09:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"}
	I0318 13:11:04.772155    2404 command_runner.go:130] > Mar 18 13:09:48 multinode-894400 kubelet[1532]: I0318 13:09:48.865400    1532 apiserver.go:52] "Watching apiserver"
	I0318 13:11:04.772155    2404 command_runner.go:130] > Mar 18 13:09:48 multinode-894400 kubelet[1532]: I0318 13:09:48.872433    1532 topology_manager.go:215] "Topology Admit Handler" podUID="0afe25f8-cbd6-412b-8698-7b547d1d49ca" podNamespace="kube-system" podName="kube-proxy-mc5tv"
	I0318 13:11:04.772155    2404 command_runner.go:130] > Mar 18 13:09:48 multinode-894400 kubelet[1532]: I0318 13:09:48.872584    1532 topology_manager.go:215] "Topology Admit Handler" podUID="0161d239-2d85-4246-b2fa-6c7374f2ecd6" podNamespace="kube-system" podName="kindnet-hhsxh"
	I0318 13:11:04.772155    2404 command_runner.go:130] > Mar 18 13:09:48 multinode-894400 kubelet[1532]: I0318 13:09:48.872794    1532 topology_manager.go:215] "Topology Admit Handler" podUID="1a018c55-846b-4dc2-992c-dc8fd82a6c67" podNamespace="kube-system" podName="coredns-5dd5756b68-456tm"
	I0318 13:11:04.772155    2404 command_runner.go:130] > Mar 18 13:09:48 multinode-894400 kubelet[1532]: I0318 13:09:48.872862    1532 topology_manager.go:215] "Topology Admit Handler" podUID="219bafbc-d807-44cf-9927-e4957f36ad70" podNamespace="kube-system" podName="storage-provisioner"
	I0318 13:11:04.772155    2404 command_runner.go:130] > Mar 18 13:09:48 multinode-894400 kubelet[1532]: I0318 13:09:48.872944    1532 topology_manager.go:215] "Topology Admit Handler" podUID="171cbfa4-4415-4169-b25d-ff5905fd513f" podNamespace="default" podName="busybox-5b5d89c9d6-c2997"
	I0318 13:11:04.772155    2404 command_runner.go:130] > Mar 18 13:09:48 multinode-894400 kubelet[1532]: E0318 13:09:48.873248    1532 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-c2997" podUID="171cbfa4-4415-4169-b25d-ff5905fd513f"
	I0318 13:11:04.772155    2404 command_runner.go:130] > Mar 18 13:09:48 multinode-894400 kubelet[1532]: I0318 13:09:48.873593    1532 kubelet.go:1872] "Trying to delete pod" pod="kube-system/kube-apiserver-multinode-894400" podUID="62aca0ea-36b0-4841-9616-61448f45e04a"
	I0318 13:11:04.772155    2404 command_runner.go:130] > Mar 18 13:09:48 multinode-894400 kubelet[1532]: I0318 13:09:48.873861    1532 kubelet.go:1872] "Trying to delete pod" pod="kube-system/etcd-multinode-894400" podUID="672a85d9-7526-4870-a33a-eac509ef3c3f"
	I0318 13:11:04.772155    2404 command_runner.go:130] > Mar 18 13:09:48 multinode-894400 kubelet[1532]: E0318 13:09:48.876751    1532 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-456tm" podUID="1a018c55-846b-4dc2-992c-dc8fd82a6c67"
	I0318 13:11:04.772686    2404 command_runner.go:130] > Mar 18 13:09:48 multinode-894400 kubelet[1532]: I0318 13:09:48.889248    1532 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
	I0318 13:11:04.772686    2404 command_runner.go:130] > Mar 18 13:09:48 multinode-894400 kubelet[1532]: I0318 13:09:48.964782    1532 kubelet.go:1877] "Deleted mirror pod because it is outdated" pod="kube-system/kube-apiserver-multinode-894400"
	I0318 13:11:04.772813    2404 command_runner.go:130] > Mar 18 13:09:48 multinode-894400 kubelet[1532]: I0318 13:09:48.965861    1532 kubelet.go:1877] "Deleted mirror pod because it is outdated" pod="kube-system/etcd-multinode-894400"
	I0318 13:11:04.772813    2404 command_runner.go:130] > Mar 18 13:09:48 multinode-894400 kubelet[1532]: I0318 13:09:48.966709    1532 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0afe25f8-cbd6-412b-8698-7b547d1d49ca-lib-modules\") pod \"kube-proxy-mc5tv\" (UID: \"0afe25f8-cbd6-412b-8698-7b547d1d49ca\") " pod="kube-system/kube-proxy-mc5tv"
	I0318 13:11:04.772813    2404 command_runner.go:130] > Mar 18 13:09:48 multinode-894400 kubelet[1532]: I0318 13:09:48.966761    1532 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/219bafbc-d807-44cf-9927-e4957f36ad70-tmp\") pod \"storage-provisioner\" (UID: \"219bafbc-d807-44cf-9927-e4957f36ad70\") " pod="kube-system/storage-provisioner"
	I0318 13:11:04.772813    2404 command_runner.go:130] > Mar 18 13:09:48 multinode-894400 kubelet[1532]: I0318 13:09:48.966802    1532 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/0161d239-2d85-4246-b2fa-6c7374f2ecd6-cni-cfg\") pod \"kindnet-hhsxh\" (UID: \"0161d239-2d85-4246-b2fa-6c7374f2ecd6\") " pod="kube-system/kindnet-hhsxh"
	I0318 13:11:04.772813    2404 command_runner.go:130] > Mar 18 13:09:48 multinode-894400 kubelet[1532]: I0318 13:09:48.966847    1532 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0afe25f8-cbd6-412b-8698-7b547d1d49ca-xtables-lock\") pod \"kube-proxy-mc5tv\" (UID: \"0afe25f8-cbd6-412b-8698-7b547d1d49ca\") " pod="kube-system/kube-proxy-mc5tv"
	I0318 13:11:04.772813    2404 command_runner.go:130] > Mar 18 13:09:48 multinode-894400 kubelet[1532]: I0318 13:09:48.966908    1532 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0161d239-2d85-4246-b2fa-6c7374f2ecd6-xtables-lock\") pod \"kindnet-hhsxh\" (UID: \"0161d239-2d85-4246-b2fa-6c7374f2ecd6\") " pod="kube-system/kindnet-hhsxh"
	I0318 13:11:04.772813    2404 command_runner.go:130] > Mar 18 13:09:48 multinode-894400 kubelet[1532]: I0318 13:09:48.966943    1532 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0161d239-2d85-4246-b2fa-6c7374f2ecd6-lib-modules\") pod \"kindnet-hhsxh\" (UID: \"0161d239-2d85-4246-b2fa-6c7374f2ecd6\") " pod="kube-system/kindnet-hhsxh"
	I0318 13:11:04.772813    2404 command_runner.go:130] > Mar 18 13:09:48 multinode-894400 kubelet[1532]: E0318 13:09:48.968339    1532 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0318 13:11:04.772813    2404 command_runner.go:130] > Mar 18 13:09:48 multinode-894400 kubelet[1532]: E0318 13:09:48.968477    1532 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1a018c55-846b-4dc2-992c-dc8fd82a6c67-config-volume podName:1a018c55-846b-4dc2-992c-dc8fd82a6c67 nodeName:}" failed. No retries permitted until 2024-03-18 13:09:49.468437755 +0000 UTC m=+6.779274091 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/1a018c55-846b-4dc2-992c-dc8fd82a6c67-config-volume") pod "coredns-5dd5756b68-456tm" (UID: "1a018c55-846b-4dc2-992c-dc8fd82a6c67") : object "kube-system"/"coredns" not registered
	I0318 13:11:04.772813    2404 command_runner.go:130] > Mar 18 13:09:49 multinode-894400 kubelet[1532]: E0318 13:09:49.000742    1532 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0318 13:11:04.772813    2404 command_runner.go:130] > Mar 18 13:09:49 multinode-894400 kubelet[1532]: E0318 13:09:49.000961    1532 projected.go:198] Error preparing data for projected volume kube-api-access-jwqqg for pod default/busybox-5b5d89c9d6-c2997: object "default"/"kube-root-ca.crt" not registered
	I0318 13:11:04.772813    2404 command_runner.go:130] > Mar 18 13:09:49 multinode-894400 kubelet[1532]: E0318 13:09:49.001575    1532 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/171cbfa4-4415-4169-b25d-ff5905fd513f-kube-api-access-jwqqg podName:171cbfa4-4415-4169-b25d-ff5905fd513f nodeName:}" failed. No retries permitted until 2024-03-18 13:09:49.501554367 +0000 UTC m=+6.812390603 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-jwqqg" (UniqueName: "kubernetes.io/projected/171cbfa4-4415-4169-b25d-ff5905fd513f-kube-api-access-jwqqg") pod "busybox-5b5d89c9d6-c2997" (UID: "171cbfa4-4415-4169-b25d-ff5905fd513f") : object "default"/"kube-root-ca.crt" not registered
	I0318 13:11:04.773338    2404 command_runner.go:130] > Mar 18 13:09:49 multinode-894400 kubelet[1532]: I0318 13:09:49.048369    1532 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="c396fd459c503d2e9464c73cc841d3d8" path="/var/lib/kubelet/pods/c396fd459c503d2e9464c73cc841d3d8/volumes"
	I0318 13:11:04.773338    2404 command_runner.go:130] > Mar 18 13:09:49 multinode-894400 kubelet[1532]: I0318 13:09:49.051334    1532 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="decc1d942b4d81359bb79c0349ffe9bb" path="/var/lib/kubelet/pods/decc1d942b4d81359bb79c0349ffe9bb/volumes"
	I0318 13:11:04.773338    2404 command_runner.go:130] > Mar 18 13:09:49 multinode-894400 kubelet[1532]: I0318 13:09:49.248524    1532 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-multinode-894400" podStartSLOduration=0.2483832 podCreationTimestamp="2024-03-18 13:09:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-03-18 13:09:49.21292898 +0000 UTC m=+6.523765316" watchObservedRunningTime="2024-03-18 13:09:49.2483832 +0000 UTC m=+6.559219436"
	I0318 13:11:04.773468    2404 command_runner.go:130] > Mar 18 13:09:49 multinode-894400 kubelet[1532]: I0318 13:09:49.285710    1532 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/etcd-multinode-894400" podStartSLOduration=0.285684326 podCreationTimestamp="2024-03-18 13:09:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-03-18 13:09:49.252285313 +0000 UTC m=+6.563121649" watchObservedRunningTime="2024-03-18 13:09:49.285684326 +0000 UTC m=+6.596520662"
	I0318 13:11:04.773505    2404 command_runner.go:130] > Mar 18 13:09:49 multinode-894400 kubelet[1532]: E0318 13:09:49.471617    1532 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0318 13:11:04.773564    2404 command_runner.go:130] > Mar 18 13:09:49 multinode-894400 kubelet[1532]: E0318 13:09:49.472236    1532 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1a018c55-846b-4dc2-992c-dc8fd82a6c67-config-volume podName:1a018c55-846b-4dc2-992c-dc8fd82a6c67 nodeName:}" failed. No retries permitted until 2024-03-18 13:09:50.471713653 +0000 UTC m=+7.782549889 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/1a018c55-846b-4dc2-992c-dc8fd82a6c67-config-volume") pod "coredns-5dd5756b68-456tm" (UID: "1a018c55-846b-4dc2-992c-dc8fd82a6c67") : object "kube-system"/"coredns" not registered
	I0318 13:11:04.773564    2404 command_runner.go:130] > Mar 18 13:09:49 multinode-894400 kubelet[1532]: E0318 13:09:49.573240    1532 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0318 13:11:04.773564    2404 command_runner.go:130] > Mar 18 13:09:49 multinode-894400 kubelet[1532]: E0318 13:09:49.573347    1532 projected.go:198] Error preparing data for projected volume kube-api-access-jwqqg for pod default/busybox-5b5d89c9d6-c2997: object "default"/"kube-root-ca.crt" not registered
	I0318 13:11:04.773564    2404 command_runner.go:130] > Mar 18 13:09:49 multinode-894400 kubelet[1532]: E0318 13:09:49.573459    1532 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/171cbfa4-4415-4169-b25d-ff5905fd513f-kube-api-access-jwqqg podName:171cbfa4-4415-4169-b25d-ff5905fd513f nodeName:}" failed. No retries permitted until 2024-03-18 13:09:50.573441997 +0000 UTC m=+7.884278233 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-jwqqg" (UniqueName: "kubernetes.io/projected/171cbfa4-4415-4169-b25d-ff5905fd513f-kube-api-access-jwqqg") pod "busybox-5b5d89c9d6-c2997" (UID: "171cbfa4-4415-4169-b25d-ff5905fd513f") : object "default"/"kube-root-ca.crt" not registered
	I0318 13:11:04.773564    2404 command_runner.go:130] > Mar 18 13:09:49 multinode-894400 kubelet[1532]: I0318 13:09:49.813611    1532 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="41035eff3b7db97e56ba25be141bf644acea4ddcb9d92ba3b421e6fa855a09af"
	I0318 13:11:04.773564    2404 command_runner.go:130] > Mar 18 13:09:50 multinode-894400 kubelet[1532]: I0318 13:09:50.142572    1532 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="86d74dec812cfd4842de7473d45ff320bfb2283d09d2d05eee29e45cbde9f5e9"
	I0318 13:11:04.773564    2404 command_runner.go:130] > Mar 18 13:09:50 multinode-894400 kubelet[1532]: I0318 13:09:50.219092    1532 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a9f21749669fe37a2d5bee9e7f45bf84eca6ae55870de8ac18bc774db7f81643"
	I0318 13:11:04.773564    2404 command_runner.go:130] > Mar 18 13:09:50 multinode-894400 kubelet[1532]: E0318 13:09:50.481085    1532 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0318 13:11:04.773564    2404 command_runner.go:130] > Mar 18 13:09:50 multinode-894400 kubelet[1532]: E0318 13:09:50.481271    1532 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1a018c55-846b-4dc2-992c-dc8fd82a6c67-config-volume podName:1a018c55-846b-4dc2-992c-dc8fd82a6c67 nodeName:}" failed. No retries permitted until 2024-03-18 13:09:52.48125246 +0000 UTC m=+9.792088696 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/1a018c55-846b-4dc2-992c-dc8fd82a6c67-config-volume") pod "coredns-5dd5756b68-456tm" (UID: "1a018c55-846b-4dc2-992c-dc8fd82a6c67") : object "kube-system"/"coredns" not registered
	I0318 13:11:04.773564    2404 command_runner.go:130] > Mar 18 13:09:50 multinode-894400 kubelet[1532]: E0318 13:09:50.581790    1532 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0318 13:11:04.773564    2404 command_runner.go:130] > Mar 18 13:09:50 multinode-894400 kubelet[1532]: E0318 13:09:50.581835    1532 projected.go:198] Error preparing data for projected volume kube-api-access-jwqqg for pod default/busybox-5b5d89c9d6-c2997: object "default"/"kube-root-ca.crt" not registered
	I0318 13:11:04.773564    2404 command_runner.go:130] > Mar 18 13:09:50 multinode-894400 kubelet[1532]: E0318 13:09:50.581885    1532 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/171cbfa4-4415-4169-b25d-ff5905fd513f-kube-api-access-jwqqg podName:171cbfa4-4415-4169-b25d-ff5905fd513f nodeName:}" failed. No retries permitted until 2024-03-18 13:09:52.5818703 +0000 UTC m=+9.892706536 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-jwqqg" (UniqueName: "kubernetes.io/projected/171cbfa4-4415-4169-b25d-ff5905fd513f-kube-api-access-jwqqg") pod "busybox-5b5d89c9d6-c2997" (UID: "171cbfa4-4415-4169-b25d-ff5905fd513f") : object "default"/"kube-root-ca.crt" not registered
	I0318 13:11:04.773564    2404 command_runner.go:130] > Mar 18 13:09:51 multinode-894400 kubelet[1532]: E0318 13:09:51.011273    1532 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-456tm" podUID="1a018c55-846b-4dc2-992c-dc8fd82a6c67"
	I0318 13:11:04.773564    2404 command_runner.go:130] > Mar 18 13:09:51 multinode-894400 kubelet[1532]: E0318 13:09:51.012015    1532 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-c2997" podUID="171cbfa4-4415-4169-b25d-ff5905fd513f"
	I0318 13:11:04.773564    2404 command_runner.go:130] > Mar 18 13:09:52 multinode-894400 kubelet[1532]: E0318 13:09:52.499973    1532 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0318 13:11:04.773564    2404 command_runner.go:130] > Mar 18 13:09:52 multinode-894400 kubelet[1532]: E0318 13:09:52.500149    1532 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1a018c55-846b-4dc2-992c-dc8fd82a6c67-config-volume podName:1a018c55-846b-4dc2-992c-dc8fd82a6c67 nodeName:}" failed. No retries permitted until 2024-03-18 13:09:56.500131973 +0000 UTC m=+13.810968209 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/1a018c55-846b-4dc2-992c-dc8fd82a6c67-config-volume") pod "coredns-5dd5756b68-456tm" (UID: "1a018c55-846b-4dc2-992c-dc8fd82a6c67") : object "kube-system"/"coredns" not registered
	I0318 13:11:04.774087    2404 command_runner.go:130] > Mar 18 13:09:52 multinode-894400 kubelet[1532]: E0318 13:09:52.601982    1532 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0318 13:11:04.774087    2404 command_runner.go:130] > Mar 18 13:09:52 multinode-894400 kubelet[1532]: E0318 13:09:52.602006    1532 projected.go:198] Error preparing data for projected volume kube-api-access-jwqqg for pod default/busybox-5b5d89c9d6-c2997: object "default"/"kube-root-ca.crt" not registered
	I0318 13:11:04.774087    2404 command_runner.go:130] > Mar 18 13:09:52 multinode-894400 kubelet[1532]: E0318 13:09:52.602087    1532 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/171cbfa4-4415-4169-b25d-ff5905fd513f-kube-api-access-jwqqg podName:171cbfa4-4415-4169-b25d-ff5905fd513f nodeName:}" failed. No retries permitted until 2024-03-18 13:09:56.602073317 +0000 UTC m=+13.912909553 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-jwqqg" (UniqueName: "kubernetes.io/projected/171cbfa4-4415-4169-b25d-ff5905fd513f-kube-api-access-jwqqg") pod "busybox-5b5d89c9d6-c2997" (UID: "171cbfa4-4415-4169-b25d-ff5905fd513f") : object "default"/"kube-root-ca.crt" not registered
	I0318 13:11:04.774198    2404 command_runner.go:130] > Mar 18 13:09:53 multinode-894400 kubelet[1532]: E0318 13:09:53.009672    1532 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-456tm" podUID="1a018c55-846b-4dc2-992c-dc8fd82a6c67"
	I0318 13:11:04.774231    2404 command_runner.go:130] > Mar 18 13:09:53 multinode-894400 kubelet[1532]: E0318 13:09:53.010317    1532 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-c2997" podUID="171cbfa4-4415-4169-b25d-ff5905fd513f"
	I0318 13:11:04.774291    2404 command_runner.go:130] > Mar 18 13:09:55 multinode-894400 kubelet[1532]: E0318 13:09:55.010917    1532 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-456tm" podUID="1a018c55-846b-4dc2-992c-dc8fd82a6c67"
	I0318 13:11:04.774291    2404 command_runner.go:130] > Mar 18 13:09:55 multinode-894400 kubelet[1532]: E0318 13:09:55.011786    1532 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-c2997" podUID="171cbfa4-4415-4169-b25d-ff5905fd513f"
	I0318 13:11:04.774291    2404 command_runner.go:130] > Mar 18 13:09:56 multinode-894400 kubelet[1532]: E0318 13:09:56.539408    1532 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0318 13:11:04.774291    2404 command_runner.go:130] > Mar 18 13:09:56 multinode-894400 kubelet[1532]: E0318 13:09:56.539534    1532 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1a018c55-846b-4dc2-992c-dc8fd82a6c67-config-volume podName:1a018c55-846b-4dc2-992c-dc8fd82a6c67 nodeName:}" failed. No retries permitted until 2024-03-18 13:10:04.539515204 +0000 UTC m=+21.850351440 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/1a018c55-846b-4dc2-992c-dc8fd82a6c67-config-volume") pod "coredns-5dd5756b68-456tm" (UID: "1a018c55-846b-4dc2-992c-dc8fd82a6c67") : object "kube-system"/"coredns" not registered
	I0318 13:11:04.774291    2404 command_runner.go:130] > Mar 18 13:09:56 multinode-894400 kubelet[1532]: E0318 13:09:56.639919    1532 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0318 13:11:04.774291    2404 command_runner.go:130] > Mar 18 13:09:56 multinode-894400 kubelet[1532]: E0318 13:09:56.639948    1532 projected.go:198] Error preparing data for projected volume kube-api-access-jwqqg for pod default/busybox-5b5d89c9d6-c2997: object "default"/"kube-root-ca.crt" not registered
	I0318 13:11:04.774291    2404 command_runner.go:130] > Mar 18 13:09:56 multinode-894400 kubelet[1532]: E0318 13:09:56.639998    1532 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/171cbfa4-4415-4169-b25d-ff5905fd513f-kube-api-access-jwqqg podName:171cbfa4-4415-4169-b25d-ff5905fd513f nodeName:}" failed. No retries permitted until 2024-03-18 13:10:04.639981843 +0000 UTC m=+21.950818079 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-jwqqg" (UniqueName: "kubernetes.io/projected/171cbfa4-4415-4169-b25d-ff5905fd513f-kube-api-access-jwqqg") pod "busybox-5b5d89c9d6-c2997" (UID: "171cbfa4-4415-4169-b25d-ff5905fd513f") : object "default"/"kube-root-ca.crt" not registered
	I0318 13:11:04.774291    2404 command_runner.go:130] > Mar 18 13:09:57 multinode-894400 kubelet[1532]: E0318 13:09:57.009521    1532 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-456tm" podUID="1a018c55-846b-4dc2-992c-dc8fd82a6c67"
	I0318 13:11:04.774291    2404 command_runner.go:130] > Mar 18 13:09:57 multinode-894400 kubelet[1532]: E0318 13:09:57.010257    1532 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-c2997" podUID="171cbfa4-4415-4169-b25d-ff5905fd513f"
	I0318 13:11:04.774291    2404 command_runner.go:130] > Mar 18 13:09:59 multinode-894400 kubelet[1532]: E0318 13:09:59.011021    1532 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-456tm" podUID="1a018c55-846b-4dc2-992c-dc8fd82a6c67"
	I0318 13:11:04.774291    2404 command_runner.go:130] > Mar 18 13:09:59 multinode-894400 kubelet[1532]: E0318 13:09:59.011361    1532 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-c2997" podUID="171cbfa4-4415-4169-b25d-ff5905fd513f"
	I0318 13:11:04.774291    2404 command_runner.go:130] > Mar 18 13:10:01 multinode-894400 kubelet[1532]: E0318 13:10:01.009167    1532 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-456tm" podUID="1a018c55-846b-4dc2-992c-dc8fd82a6c67"
	I0318 13:11:04.774291    2404 command_runner.go:130] > Mar 18 13:10:01 multinode-894400 kubelet[1532]: E0318 13:10:01.009678    1532 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-c2997" podUID="171cbfa4-4415-4169-b25d-ff5905fd513f"
	I0318 13:11:04.774291    2404 command_runner.go:130] > Mar 18 13:10:03 multinode-894400 kubelet[1532]: E0318 13:10:03.010168    1532 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-456tm" podUID="1a018c55-846b-4dc2-992c-dc8fd82a6c67"
	I0318 13:11:04.774291    2404 command_runner.go:130] > Mar 18 13:10:03 multinode-894400 kubelet[1532]: E0318 13:10:03.011736    1532 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-c2997" podUID="171cbfa4-4415-4169-b25d-ff5905fd513f"
	I0318 13:11:04.774291    2404 command_runner.go:130] > Mar 18 13:10:04 multinode-894400 kubelet[1532]: E0318 13:10:04.603257    1532 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0318 13:11:04.774291    2404 command_runner.go:130] > Mar 18 13:10:04 multinode-894400 kubelet[1532]: E0318 13:10:04.603387    1532 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1a018c55-846b-4dc2-992c-dc8fd82a6c67-config-volume podName:1a018c55-846b-4dc2-992c-dc8fd82a6c67 nodeName:}" failed. No retries permitted until 2024-03-18 13:10:20.60337037 +0000 UTC m=+37.914206606 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/1a018c55-846b-4dc2-992c-dc8fd82a6c67-config-volume") pod "coredns-5dd5756b68-456tm" (UID: "1a018c55-846b-4dc2-992c-dc8fd82a6c67") : object "kube-system"/"coredns" not registered
	I0318 13:11:04.774825    2404 command_runner.go:130] > Mar 18 13:10:04 multinode-894400 kubelet[1532]: E0318 13:10:04.704132    1532 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0318 13:11:04.774825    2404 command_runner.go:130] > Mar 18 13:10:04 multinode-894400 kubelet[1532]: E0318 13:10:04.704169    1532 projected.go:198] Error preparing data for projected volume kube-api-access-jwqqg for pod default/busybox-5b5d89c9d6-c2997: object "default"/"kube-root-ca.crt" not registered
	I0318 13:11:04.774934    2404 command_runner.go:130] > Mar 18 13:10:04 multinode-894400 kubelet[1532]: E0318 13:10:04.704219    1532 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/171cbfa4-4415-4169-b25d-ff5905fd513f-kube-api-access-jwqqg podName:171cbfa4-4415-4169-b25d-ff5905fd513f nodeName:}" failed. No retries permitted until 2024-03-18 13:10:20.704204798 +0000 UTC m=+38.015041034 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-jwqqg" (UniqueName: "kubernetes.io/projected/171cbfa4-4415-4169-b25d-ff5905fd513f-kube-api-access-jwqqg") pod "busybox-5b5d89c9d6-c2997" (UID: "171cbfa4-4415-4169-b25d-ff5905fd513f") : object "default"/"kube-root-ca.crt" not registered
	I0318 13:11:04.774934    2404 command_runner.go:130] > Mar 18 13:10:05 multinode-894400 kubelet[1532]: E0318 13:10:05.009461    1532 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-456tm" podUID="1a018c55-846b-4dc2-992c-dc8fd82a6c67"
	I0318 13:11:04.774934    2404 command_runner.go:130] > Mar 18 13:10:05 multinode-894400 kubelet[1532]: E0318 13:10:05.010204    1532 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-c2997" podUID="171cbfa4-4415-4169-b25d-ff5905fd513f"
	I0318 13:11:04.774934    2404 command_runner.go:130] > Mar 18 13:10:07 multinode-894400 kubelet[1532]: E0318 13:10:07.009925    1532 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-456tm" podUID="1a018c55-846b-4dc2-992c-dc8fd82a6c67"
	I0318 13:11:04.774934    2404 command_runner.go:130] > Mar 18 13:10:07 multinode-894400 kubelet[1532]: E0318 13:10:07.010942    1532 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-c2997" podUID="171cbfa4-4415-4169-b25d-ff5905fd513f"
	I0318 13:11:04.774934    2404 command_runner.go:130] > Mar 18 13:10:09 multinode-894400 kubelet[1532]: E0318 13:10:09.010506    1532 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-456tm" podUID="1a018c55-846b-4dc2-992c-dc8fd82a6c67"
	I0318 13:11:04.774934    2404 command_runner.go:130] > Mar 18 13:10:09 multinode-894400 kubelet[1532]: E0318 13:10:09.011883    1532 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-c2997" podUID="171cbfa4-4415-4169-b25d-ff5905fd513f"
	I0318 13:11:04.774934    2404 command_runner.go:130] > Mar 18 13:10:11 multinode-894400 kubelet[1532]: E0318 13:10:11.009145    1532 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-c2997" podUID="171cbfa4-4415-4169-b25d-ff5905fd513f"
	I0318 13:11:04.774934    2404 command_runner.go:130] > Mar 18 13:10:11 multinode-894400 kubelet[1532]: E0318 13:10:11.011730    1532 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-456tm" podUID="1a018c55-846b-4dc2-992c-dc8fd82a6c67"
	I0318 13:11:04.774934    2404 command_runner.go:130] > Mar 18 13:10:13 multinode-894400 kubelet[1532]: E0318 13:10:13.010103    1532 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-456tm" podUID="1a018c55-846b-4dc2-992c-dc8fd82a6c67"
	I0318 13:11:04.774934    2404 command_runner.go:130] > Mar 18 13:10:13 multinode-894400 kubelet[1532]: E0318 13:10:13.010921    1532 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-c2997" podUID="171cbfa4-4415-4169-b25d-ff5905fd513f"
	I0318 13:11:04.774934    2404 command_runner.go:130] > Mar 18 13:10:15 multinode-894400 kubelet[1532]: E0318 13:10:15.009361    1532 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-456tm" podUID="1a018c55-846b-4dc2-992c-dc8fd82a6c67"
	I0318 13:11:04.775462    2404 command_runner.go:130] > Mar 18 13:10:15 multinode-894400 kubelet[1532]: E0318 13:10:15.010565    1532 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-c2997" podUID="171cbfa4-4415-4169-b25d-ff5905fd513f"
	I0318 13:11:04.775462    2404 command_runner.go:130] > Mar 18 13:10:17 multinode-894400 kubelet[1532]: E0318 13:10:17.009688    1532 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-456tm" podUID="1a018c55-846b-4dc2-992c-dc8fd82a6c67"
	I0318 13:11:04.775562    2404 command_runner.go:130] > Mar 18 13:10:17 multinode-894400 kubelet[1532]: E0318 13:10:17.010200    1532 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-c2997" podUID="171cbfa4-4415-4169-b25d-ff5905fd513f"
	I0318 13:11:04.775562    2404 command_runner.go:130] > Mar 18 13:10:19 multinode-894400 kubelet[1532]: E0318 13:10:19.010187    1532 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-c2997" podUID="171cbfa4-4415-4169-b25d-ff5905fd513f"
	I0318 13:11:04.775562    2404 command_runner.go:130] > Mar 18 13:10:19 multinode-894400 kubelet[1532]: E0318 13:10:19.010730    1532 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-456tm" podUID="1a018c55-846b-4dc2-992c-dc8fd82a6c67"
	I0318 13:11:04.775562    2404 command_runner.go:130] > Mar 18 13:10:20 multinode-894400 kubelet[1532]: E0318 13:10:20.639546    1532 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0318 13:11:04.775562    2404 command_runner.go:130] > Mar 18 13:10:20 multinode-894400 kubelet[1532]: E0318 13:10:20.639747    1532 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1a018c55-846b-4dc2-992c-dc8fd82a6c67-config-volume podName:1a018c55-846b-4dc2-992c-dc8fd82a6c67 nodeName:}" failed. No retries permitted until 2024-03-18 13:10:52.639723825 +0000 UTC m=+69.950560161 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/1a018c55-846b-4dc2-992c-dc8fd82a6c67-config-volume") pod "coredns-5dd5756b68-456tm" (UID: "1a018c55-846b-4dc2-992c-dc8fd82a6c67") : object "kube-system"/"coredns" not registered
	I0318 13:11:04.775562    2404 command_runner.go:130] > Mar 18 13:10:20 multinode-894400 kubelet[1532]: E0318 13:10:20.740353    1532 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0318 13:11:04.775562    2404 command_runner.go:130] > Mar 18 13:10:20 multinode-894400 kubelet[1532]: E0318 13:10:20.740517    1532 projected.go:198] Error preparing data for projected volume kube-api-access-jwqqg for pod default/busybox-5b5d89c9d6-c2997: object "default"/"kube-root-ca.crt" not registered
	I0318 13:11:04.775562    2404 command_runner.go:130] > Mar 18 13:10:20 multinode-894400 kubelet[1532]: E0318 13:10:20.740585    1532 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/171cbfa4-4415-4169-b25d-ff5905fd513f-kube-api-access-jwqqg podName:171cbfa4-4415-4169-b25d-ff5905fd513f nodeName:}" failed. No retries permitted until 2024-03-18 13:10:52.740566824 +0000 UTC m=+70.051403160 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-jwqqg" (UniqueName: "kubernetes.io/projected/171cbfa4-4415-4169-b25d-ff5905fd513f-kube-api-access-jwqqg") pod "busybox-5b5d89c9d6-c2997" (UID: "171cbfa4-4415-4169-b25d-ff5905fd513f") : object "default"/"kube-root-ca.crt" not registered
	I0318 13:11:04.775562    2404 command_runner.go:130] > Mar 18 13:10:21 multinode-894400 kubelet[1532]: E0318 13:10:21.010015    1532 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-456tm" podUID="1a018c55-846b-4dc2-992c-dc8fd82a6c67"
	I0318 13:11:04.775562    2404 command_runner.go:130] > Mar 18 13:10:21 multinode-894400 kubelet[1532]: E0318 13:10:21.011108    1532 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-c2997" podUID="171cbfa4-4415-4169-b25d-ff5905fd513f"
	I0318 13:11:04.775562    2404 command_runner.go:130] > Mar 18 13:10:21 multinode-894400 kubelet[1532]: I0318 13:10:21.647969    1532 scope.go:117] "RemoveContainer" containerID="a2c499223090cc38a7b425469621fb6c8dbc443ab7eb0d5841f1fdcea2922366"
	I0318 13:11:04.775562    2404 command_runner.go:130] > Mar 18 13:10:21 multinode-894400 kubelet[1532]: I0318 13:10:21.651387    1532 scope.go:117] "RemoveContainer" containerID="46c0cf90d385f0a29de6e795bc03f2d1837ea1819ef2389992a26aaf77e63264"
	I0318 13:11:04.775562    2404 command_runner.go:130] > Mar 18 13:10:21 multinode-894400 kubelet[1532]: E0318 13:10:21.652104    1532 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(219bafbc-d807-44cf-9927-e4957f36ad70)\"" pod="kube-system/storage-provisioner" podUID="219bafbc-d807-44cf-9927-e4957f36ad70"
	I0318 13:11:04.775562    2404 command_runner.go:130] > Mar 18 13:10:23 multinode-894400 kubelet[1532]: E0318 13:10:23.010116    1532 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-456tm" podUID="1a018c55-846b-4dc2-992c-dc8fd82a6c67"
	I0318 13:11:04.776085    2404 command_runner.go:130] > Mar 18 13:10:23 multinode-894400 kubelet[1532]: E0318 13:10:23.010816    1532 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-c2997" podUID="171cbfa4-4415-4169-b25d-ff5905fd513f"
	I0318 13:11:04.776085    2404 command_runner.go:130] > Mar 18 13:10:23 multinode-894400 kubelet[1532]: I0318 13:10:23.777913    1532 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	I0318 13:11:04.776085    2404 command_runner.go:130] > Mar 18 13:10:35 multinode-894400 kubelet[1532]: I0318 13:10:35.009532    1532 scope.go:117] "RemoveContainer" containerID="46c0cf90d385f0a29de6e795bc03f2d1837ea1819ef2389992a26aaf77e63264"
	I0318 13:11:04.776085    2404 command_runner.go:130] > Mar 18 13:10:43 multinode-894400 kubelet[1532]: I0318 13:10:43.012571    1532 scope.go:117] "RemoveContainer" containerID="56d1819beb10ed198593d8a369f601faf82bf81ff1aecdbffe7114cd1265351b"
	I0318 13:11:04.776085    2404 command_runner.go:130] > Mar 18 13:10:43 multinode-894400 kubelet[1532]: E0318 13:10:43.030354    1532 iptables.go:575] "Could not set up iptables canary" err=<
	I0318 13:11:04.776085    2404 command_runner.go:130] > Mar 18 13:10:43 multinode-894400 kubelet[1532]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	I0318 13:11:04.776085    2404 command_runner.go:130] > Mar 18 13:10:43 multinode-894400 kubelet[1532]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	I0318 13:11:04.776207    2404 command_runner.go:130] > Mar 18 13:10:43 multinode-894400 kubelet[1532]:         Perhaps ip6tables or your kernel needs to be upgraded.
	I0318 13:11:04.776207    2404 command_runner.go:130] > Mar 18 13:10:43 multinode-894400 kubelet[1532]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	I0318 13:11:04.776207    2404 command_runner.go:130] > Mar 18 13:10:43 multinode-894400 kubelet[1532]: I0318 13:10:43.056417    1532 scope.go:117] "RemoveContainer" containerID="c51f768a2f642fdffc6de67f101be5abd8bbaec83ef13011b47efab5aad27134"
	I0318 13:11:04.814359    2404 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:11:04.814359    2404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 13:11:05.027394    2404 command_runner.go:130] > Name:               multinode-894400
	I0318 13:11:05.028373    2404 command_runner.go:130] > Roles:              control-plane
	I0318 13:11:05.028373    2404 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0318 13:11:05.028373    2404 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0318 13:11:05.028373    2404 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0318 13:11:05.028373    2404 command_runner.go:130] >                     kubernetes.io/hostname=multinode-894400
	I0318 13:11:05.028373    2404 command_runner.go:130] >                     kubernetes.io/os=linux
	I0318 13:11:05.028373    2404 command_runner.go:130] >                     minikube.k8s.io/commit=16bdcbec856cf730004e5bed78d1b7625f13388a
	I0318 13:11:05.028373    2404 command_runner.go:130] >                     minikube.k8s.io/name=multinode-894400
	I0318 13:11:05.028373    2404 command_runner.go:130] >                     minikube.k8s.io/primary=true
	I0318 13:11:05.028449    2404 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_03_18T12_47_29_0700
	I0318 13:11:05.028449    2404 command_runner.go:130] >                     minikube.k8s.io/version=v1.32.0
	I0318 13:11:05.028449    2404 command_runner.go:130] >                     node-role.kubernetes.io/control-plane=
	I0318 13:11:05.028489    2404 command_runner.go:130] >                     node.kubernetes.io/exclude-from-external-load-balancers=
	I0318 13:11:05.028489    2404 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0318 13:11:05.028489    2404 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0318 13:11:05.028489    2404 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0318 13:11:05.028489    2404 command_runner.go:130] > CreationTimestamp:  Mon, 18 Mar 2024 12:47:24 +0000
	I0318 13:11:05.028489    2404 command_runner.go:130] > Taints:             <none>
	I0318 13:11:05.028489    2404 command_runner.go:130] > Unschedulable:      false
	I0318 13:11:05.028489    2404 command_runner.go:130] > Lease:
	I0318 13:11:05.028489    2404 command_runner.go:130] >   HolderIdentity:  multinode-894400
	I0318 13:11:05.028489    2404 command_runner.go:130] >   AcquireTime:     <unset>
	I0318 13:11:05.028489    2404 command_runner.go:130] >   RenewTime:       Mon, 18 Mar 2024 13:11:00 +0000
	I0318 13:11:05.028489    2404 command_runner.go:130] > Conditions:
	I0318 13:11:05.028489    2404 command_runner.go:130] >   Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	I0318 13:11:05.028489    2404 command_runner.go:130] >   ----             ------  -----------------                 ------------------                ------                       -------
	I0318 13:11:05.028489    2404 command_runner.go:130] >   MemoryPressure   False   Mon, 18 Mar 2024 13:10:23 +0000   Mon, 18 Mar 2024 12:47:23 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	I0318 13:11:05.028489    2404 command_runner.go:130] >   DiskPressure     False   Mon, 18 Mar 2024 13:10:23 +0000   Mon, 18 Mar 2024 12:47:23 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	I0318 13:11:05.028489    2404 command_runner.go:130] >   PIDPressure      False   Mon, 18 Mar 2024 13:10:23 +0000   Mon, 18 Mar 2024 12:47:23 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	I0318 13:11:05.028489    2404 command_runner.go:130] >   Ready            True    Mon, 18 Mar 2024 13:10:23 +0000   Mon, 18 Mar 2024 13:10:23 +0000   KubeletReady                 kubelet is posting ready status
	I0318 13:11:05.028489    2404 command_runner.go:130] > Addresses:
	I0318 13:11:05.028489    2404 command_runner.go:130] >   InternalIP:  172.30.130.156
	I0318 13:11:05.028489    2404 command_runner.go:130] >   Hostname:    multinode-894400
	I0318 13:11:05.028489    2404 command_runner.go:130] > Capacity:
	I0318 13:11:05.028489    2404 command_runner.go:130] >   cpu:                2
	I0318 13:11:05.028489    2404 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0318 13:11:05.028489    2404 command_runner.go:130] >   hugepages-2Mi:      0
	I0318 13:11:05.028489    2404 command_runner.go:130] >   memory:             2164268Ki
	I0318 13:11:05.028489    2404 command_runner.go:130] >   pods:               110
	I0318 13:11:05.028489    2404 command_runner.go:130] > Allocatable:
	I0318 13:11:05.028489    2404 command_runner.go:130] >   cpu:                2
	I0318 13:11:05.028489    2404 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0318 13:11:05.028489    2404 command_runner.go:130] >   hugepages-2Mi:      0
	I0318 13:11:05.028489    2404 command_runner.go:130] >   memory:             2164268Ki
	I0318 13:11:05.028489    2404 command_runner.go:130] >   pods:               110
	I0318 13:11:05.028489    2404 command_runner.go:130] > System Info:
	I0318 13:11:05.028489    2404 command_runner.go:130] >   Machine ID:                 80e7b822d2e94d26a09acd4a1bac452b
	I0318 13:11:05.028489    2404 command_runner.go:130] >   System UUID:                5c78c013-e4e8-1041-99c8-95cd760ef34f
	I0318 13:11:05.028489    2404 command_runner.go:130] >   Boot ID:                    a334ae39-1c10-417c-93ad-d28546d7793f
	I0318 13:11:05.028489    2404 command_runner.go:130] >   Kernel Version:             5.10.207
	I0318 13:11:05.028489    2404 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0318 13:11:05.028489    2404 command_runner.go:130] >   Operating System:           linux
	I0318 13:11:05.028489    2404 command_runner.go:130] >   Architecture:               amd64
	I0318 13:11:05.028489    2404 command_runner.go:130] >   Container Runtime Version:  docker://25.0.4
	I0318 13:11:05.028489    2404 command_runner.go:130] >   Kubelet Version:            v1.28.4
	I0318 13:11:05.028489    2404 command_runner.go:130] >   Kube-Proxy Version:         v1.28.4
	I0318 13:11:05.028489    2404 command_runner.go:130] > PodCIDR:                      10.244.0.0/24
	I0318 13:11:05.029081    2404 command_runner.go:130] > PodCIDRs:                     10.244.0.0/24
	I0318 13:11:05.029081    2404 command_runner.go:130] > Non-terminated Pods:          (9 in total)
	I0318 13:11:05.029081    2404 command_runner.go:130] >   Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0318 13:11:05.029081    2404 command_runner.go:130] >   ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	I0318 13:11:05.029081    2404 command_runner.go:130] >   default                     busybox-5b5d89c9d6-c2997                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	I0318 13:11:05.029081    2404 command_runner.go:130] >   kube-system                 coredns-5dd5756b68-456tm                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     23m
	I0318 13:11:05.029181    2404 command_runner.go:130] >   kube-system                 etcd-multinode-894400                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         75s
	I0318 13:11:05.029181    2404 command_runner.go:130] >   kube-system                 kindnet-hhsxh                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      23m
	I0318 13:11:05.029181    2404 command_runner.go:130] >   kube-system                 kube-apiserver-multinode-894400             250m (12%)    0 (0%)      0 (0%)           0 (0%)         75s
	I0318 13:11:05.029181    2404 command_runner.go:130] >   kube-system                 kube-controller-manager-multinode-894400    200m (10%)    0 (0%)      0 (0%)           0 (0%)         23m
	I0318 13:11:05.029181    2404 command_runner.go:130] >   kube-system                 kube-proxy-mc5tv                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	I0318 13:11:05.029263    2404 command_runner.go:130] >   kube-system                 kube-scheduler-multinode-894400             100m (5%)     0 (0%)      0 (0%)           0 (0%)         23m
	I0318 13:11:05.029263    2404 command_runner.go:130] >   kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	I0318 13:11:05.029263    2404 command_runner.go:130] > Allocated resources:
	I0318 13:11:05.029263    2404 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0318 13:11:05.029263    2404 command_runner.go:130] >   Resource           Requests     Limits
	I0318 13:11:05.029348    2404 command_runner.go:130] >   --------           --------     ------
	I0318 13:11:05.029348    2404 command_runner.go:130] >   cpu                850m (42%)   100m (5%)
	I0318 13:11:05.029348    2404 command_runner.go:130] >   memory             220Mi (10%)  220Mi (10%)
	I0318 13:11:05.029348    2404 command_runner.go:130] >   ephemeral-storage  0 (0%)       0 (0%)
	I0318 13:11:05.029348    2404 command_runner.go:130] >   hugepages-2Mi      0 (0%)       0 (0%)
	I0318 13:11:05.029348    2404 command_runner.go:130] > Events:
	I0318 13:11:05.029348    2404 command_runner.go:130] >   Type    Reason                   Age                From             Message
	I0318 13:11:05.029348    2404 command_runner.go:130] >   ----    ------                   ----               ----             -------
	I0318 13:11:05.029348    2404 command_runner.go:130] >   Normal  Starting                 23m                kube-proxy       
	I0318 13:11:05.029423    2404 command_runner.go:130] >   Normal  Starting                 74s                kube-proxy       
	I0318 13:11:05.029423    2404 command_runner.go:130] >   Normal  NodeHasSufficientMemory  23m (x8 over 23m)  kubelet          Node multinode-894400 status is now: NodeHasSufficientMemory
	I0318 13:11:05.029423    2404 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    23m (x8 over 23m)  kubelet          Node multinode-894400 status is now: NodeHasNoDiskPressure
	I0318 13:11:05.029423    2404 command_runner.go:130] >   Normal  NodeHasSufficientPID     23m (x7 over 23m)  kubelet          Node multinode-894400 status is now: NodeHasSufficientPID
	I0318 13:11:05.029491    2404 command_runner.go:130] >   Normal  NodeAllocatableEnforced  23m                kubelet          Updated Node Allocatable limit across pods
	I0318 13:11:05.029491    2404 command_runner.go:130] >   Normal  NodeHasSufficientMemory  23m                kubelet          Node multinode-894400 status is now: NodeHasSufficientMemory
	I0318 13:11:05.029491    2404 command_runner.go:130] >   Normal  NodeAllocatableEnforced  23m                kubelet          Updated Node Allocatable limit across pods
	I0318 13:11:05.029491    2404 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    23m                kubelet          Node multinode-894400 status is now: NodeHasNoDiskPressure
	I0318 13:11:05.029558    2404 command_runner.go:130] >   Normal  NodeHasSufficientPID     23m                kubelet          Node multinode-894400 status is now: NodeHasSufficientPID
	I0318 13:11:05.029558    2404 command_runner.go:130] >   Normal  Starting                 23m                kubelet          Starting kubelet.
	I0318 13:11:05.029558    2404 command_runner.go:130] >   Normal  RegisteredNode           23m                node-controller  Node multinode-894400 event: Registered Node multinode-894400 in Controller
	I0318 13:11:05.029558    2404 command_runner.go:130] >   Normal  NodeReady                23m                kubelet          Node multinode-894400 status is now: NodeReady
	I0318 13:11:05.029558    2404 command_runner.go:130] >   Normal  Starting                 82s                kubelet          Starting kubelet.
	I0318 13:11:05.029625    2404 command_runner.go:130] >   Normal  NodeHasSufficientMemory  81s (x8 over 82s)  kubelet          Node multinode-894400 status is now: NodeHasSufficientMemory
	I0318 13:11:05.029625    2404 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    81s (x8 over 82s)  kubelet          Node multinode-894400 status is now: NodeHasNoDiskPressure
	I0318 13:11:05.029625    2404 command_runner.go:130] >   Normal  NodeHasSufficientPID     81s (x7 over 82s)  kubelet          Node multinode-894400 status is now: NodeHasSufficientPID
	I0318 13:11:05.029625    2404 command_runner.go:130] >   Normal  NodeAllocatableEnforced  81s                kubelet          Updated Node Allocatable limit across pods
	I0318 13:11:05.029709    2404 command_runner.go:130] >   Normal  RegisteredNode           63s                node-controller  Node multinode-894400 event: Registered Node multinode-894400 in Controller
	I0318 13:11:05.029832    2404 command_runner.go:130] > Name:               multinode-894400-m02
	I0318 13:11:05.029832    2404 command_runner.go:130] > Roles:              <none>
	I0318 13:11:05.029832    2404 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0318 13:11:05.029858    2404 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0318 13:11:05.029858    2404 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0318 13:11:05.029888    2404 command_runner.go:130] >                     kubernetes.io/hostname=multinode-894400-m02
	I0318 13:11:05.029888    2404 command_runner.go:130] >                     kubernetes.io/os=linux
	I0318 13:11:05.029888    2404 command_runner.go:130] >                     minikube.k8s.io/commit=16bdcbec856cf730004e5bed78d1b7625f13388a
	I0318 13:11:05.029888    2404 command_runner.go:130] >                     minikube.k8s.io/name=multinode-894400
	I0318 13:11:05.029888    2404 command_runner.go:130] >                     minikube.k8s.io/primary=false
	I0318 13:11:05.029888    2404 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_03_18T12_50_35_0700
	I0318 13:11:05.029888    2404 command_runner.go:130] >                     minikube.k8s.io/version=v1.32.0
	I0318 13:11:05.029888    2404 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0318 13:11:05.029985    2404 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0318 13:11:05.029985    2404 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0318 13:11:05.029985    2404 command_runner.go:130] > CreationTimestamp:  Mon, 18 Mar 2024 12:50:34 +0000
	I0318 13:11:05.029985    2404 command_runner.go:130] > Taints:             node.kubernetes.io/unreachable:NoExecute
	I0318 13:11:05.030096    2404 command_runner.go:130] >                     node.kubernetes.io/unreachable:NoSchedule
	I0318 13:11:05.030096    2404 command_runner.go:130] > Unschedulable:      false
	I0318 13:11:05.030096    2404 command_runner.go:130] > Lease:
	I0318 13:11:05.030096    2404 command_runner.go:130] >   HolderIdentity:  multinode-894400-m02
	I0318 13:11:05.030096    2404 command_runner.go:130] >   AcquireTime:     <unset>
	I0318 13:11:05.030096    2404 command_runner.go:130] >   RenewTime:       Mon, 18 Mar 2024 13:06:44 +0000
	I0318 13:11:05.030096    2404 command_runner.go:130] > Conditions:
	I0318 13:11:05.030096    2404 command_runner.go:130] >   Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	I0318 13:11:05.030096    2404 command_runner.go:130] >   ----             ------    -----------------                 ------------------                ------              -------
	I0318 13:11:05.030096    2404 command_runner.go:130] >   MemoryPressure   Unknown   Mon, 18 Mar 2024 13:01:49 +0000   Mon, 18 Mar 2024 13:10:41 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0318 13:11:05.030218    2404 command_runner.go:130] >   DiskPressure     Unknown   Mon, 18 Mar 2024 13:01:49 +0000   Mon, 18 Mar 2024 13:10:41 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0318 13:11:05.030218    2404 command_runner.go:130] >   PIDPressure      Unknown   Mon, 18 Mar 2024 13:01:49 +0000   Mon, 18 Mar 2024 13:10:41 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0318 13:11:05.030218    2404 command_runner.go:130] >   Ready            Unknown   Mon, 18 Mar 2024 13:01:49 +0000   Mon, 18 Mar 2024 13:10:41 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0318 13:11:05.030218    2404 command_runner.go:130] > Addresses:
	I0318 13:11:05.030218    2404 command_runner.go:130] >   InternalIP:  172.30.140.66
	I0318 13:11:05.030218    2404 command_runner.go:130] >   Hostname:    multinode-894400-m02
	I0318 13:11:05.030218    2404 command_runner.go:130] > Capacity:
	I0318 13:11:05.030218    2404 command_runner.go:130] >   cpu:                2
	I0318 13:11:05.030218    2404 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0318 13:11:05.030218    2404 command_runner.go:130] >   hugepages-2Mi:      0
	I0318 13:11:05.030218    2404 command_runner.go:130] >   memory:             2164268Ki
	I0318 13:11:05.030218    2404 command_runner.go:130] >   pods:               110
	I0318 13:11:05.030346    2404 command_runner.go:130] > Allocatable:
	I0318 13:11:05.030346    2404 command_runner.go:130] >   cpu:                2
	I0318 13:11:05.030346    2404 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0318 13:11:05.030346    2404 command_runner.go:130] >   hugepages-2Mi:      0
	I0318 13:11:05.030346    2404 command_runner.go:130] >   memory:             2164268Ki
	I0318 13:11:05.030396    2404 command_runner.go:130] >   pods:               110
	I0318 13:11:05.030396    2404 command_runner.go:130] > System Info:
	I0318 13:11:05.030396    2404 command_runner.go:130] >   Machine ID:                 209753fe156d43e08ee40e815598ed17
	I0318 13:11:05.030396    2404 command_runner.go:130] >   System UUID:                fa19d46a-a3a2-9249-8c21-1edbfcedff01
	I0318 13:11:05.030396    2404 command_runner.go:130] >   Boot ID:                    0e15b7cf-29d6-40f7-ad78-fb04b10bea99
	I0318 13:11:05.030396    2404 command_runner.go:130] >   Kernel Version:             5.10.207
	I0318 13:11:05.030396    2404 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0318 13:11:05.030396    2404 command_runner.go:130] >   Operating System:           linux
	I0318 13:11:05.030396    2404 command_runner.go:130] >   Architecture:               amd64
	I0318 13:11:05.030396    2404 command_runner.go:130] >   Container Runtime Version:  docker://25.0.4
	I0318 13:11:05.030396    2404 command_runner.go:130] >   Kubelet Version:            v1.28.4
	I0318 13:11:05.030495    2404 command_runner.go:130] >   Kube-Proxy Version:         v1.28.4
	I0318 13:11:05.030495    2404 command_runner.go:130] > PodCIDR:                      10.244.1.0/24
	I0318 13:11:05.030495    2404 command_runner.go:130] > PodCIDRs:                     10.244.1.0/24
	I0318 13:11:05.030495    2404 command_runner.go:130] > Non-terminated Pods:          (3 in total)
	I0318 13:11:05.030495    2404 command_runner.go:130] >   Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0318 13:11:05.030495    2404 command_runner.go:130] >   ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	I0318 13:11:05.030495    2404 command_runner.go:130] >   default                     busybox-5b5d89c9d6-8btgf    0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	I0318 13:11:05.030495    2404 command_runner.go:130] >   kube-system                 kindnet-k5lpg               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      20m
	I0318 13:11:05.030591    2404 command_runner.go:130] >   kube-system                 kube-proxy-8bdmn            0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	I0318 13:11:05.030591    2404 command_runner.go:130] > Allocated resources:
	I0318 13:11:05.030591    2404 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0318 13:11:05.030591    2404 command_runner.go:130] >   Resource           Requests   Limits
	I0318 13:11:05.030591    2404 command_runner.go:130] >   --------           --------   ------
	I0318 13:11:05.030591    2404 command_runner.go:130] >   cpu                100m (5%)  100m (5%)
	I0318 13:11:05.030591    2404 command_runner.go:130] >   memory             50Mi (2%)  50Mi (2%)
	I0318 13:11:05.030591    2404 command_runner.go:130] >   ephemeral-storage  0 (0%)     0 (0%)
	I0318 13:11:05.030679    2404 command_runner.go:130] >   hugepages-2Mi      0 (0%)     0 (0%)
	I0318 13:11:05.030679    2404 command_runner.go:130] > Events:
	I0318 13:11:05.030679    2404 command_runner.go:130] >   Type    Reason                   Age                From             Message
	I0318 13:11:05.030679    2404 command_runner.go:130] >   ----    ------                   ----               ----             -------
	I0318 13:11:05.030679    2404 command_runner.go:130] >   Normal  Starting                 20m                kube-proxy       
	I0318 13:11:05.030765    2404 command_runner.go:130] >   Normal  NodeHasSufficientMemory  20m (x5 over 20m)  kubelet          Node multinode-894400-m02 status is now: NodeHasSufficientMemory
	I0318 13:11:05.030765    2404 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    20m (x5 over 20m)  kubelet          Node multinode-894400-m02 status is now: NodeHasNoDiskPressure
	I0318 13:11:05.030765    2404 command_runner.go:130] >   Normal  NodeHasSufficientPID     20m (x5 over 20m)  kubelet          Node multinode-894400-m02 status is now: NodeHasSufficientPID
	I0318 13:11:05.030765    2404 command_runner.go:130] >   Normal  RegisteredNode           20m                node-controller  Node multinode-894400-m02 event: Registered Node multinode-894400-m02 in Controller
	I0318 13:11:05.030765    2404 command_runner.go:130] >   Normal  NodeReady                20m                kubelet          Node multinode-894400-m02 status is now: NodeReady
	I0318 13:11:05.030838    2404 command_runner.go:130] >   Normal  RegisteredNode           63s                node-controller  Node multinode-894400-m02 event: Registered Node multinode-894400-m02 in Controller
	I0318 13:11:05.030838    2404 command_runner.go:130] >   Normal  NodeNotReady             23s                node-controller  Node multinode-894400-m02 status is now: NodeNotReady
	I0318 13:11:05.030838    2404 command_runner.go:130] > Name:               multinode-894400-m03
	I0318 13:11:05.030838    2404 command_runner.go:130] > Roles:              <none>
	I0318 13:11:05.030838    2404 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0318 13:11:05.030838    2404 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0318 13:11:05.030838    2404 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0318 13:11:05.030838    2404 command_runner.go:130] >                     kubernetes.io/hostname=multinode-894400-m03
	I0318 13:11:05.030937    2404 command_runner.go:130] >                     kubernetes.io/os=linux
	I0318 13:11:05.030937    2404 command_runner.go:130] >                     minikube.k8s.io/commit=16bdcbec856cf730004e5bed78d1b7625f13388a
	I0318 13:11:05.030937    2404 command_runner.go:130] >                     minikube.k8s.io/name=multinode-894400
	I0318 13:11:05.030937    2404 command_runner.go:130] >                     minikube.k8s.io/primary=false
	I0318 13:11:05.030937    2404 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_03_18T13_05_26_0700
	I0318 13:11:05.030937    2404 command_runner.go:130] >                     minikube.k8s.io/version=v1.32.0
	I0318 13:11:05.031048    2404 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0318 13:11:05.031048    2404 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0318 13:11:05.031048    2404 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0318 13:11:05.031048    2404 command_runner.go:130] > CreationTimestamp:  Mon, 18 Mar 2024 13:05:25 +0000
	I0318 13:11:05.031048    2404 command_runner.go:130] > Taints:             node.kubernetes.io/unreachable:NoExecute
	I0318 13:11:05.031048    2404 command_runner.go:130] >                     node.kubernetes.io/unreachable:NoSchedule
	I0318 13:11:05.031048    2404 command_runner.go:130] > Unschedulable:      false
	I0318 13:11:05.031048    2404 command_runner.go:130] > Lease:
	I0318 13:11:05.031048    2404 command_runner.go:130] >   HolderIdentity:  multinode-894400-m03
	I0318 13:11:05.031048    2404 command_runner.go:130] >   AcquireTime:     <unset>
	I0318 13:11:05.031048    2404 command_runner.go:130] >   RenewTime:       Mon, 18 Mar 2024 13:06:27 +0000
	I0318 13:11:05.031048    2404 command_runner.go:130] > Conditions:
	I0318 13:11:05.031048    2404 command_runner.go:130] >   Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	I0318 13:11:05.031048    2404 command_runner.go:130] >   ----             ------    -----------------                 ------------------                ------              -------
	I0318 13:11:05.031048    2404 command_runner.go:130] >   MemoryPressure   Unknown   Mon, 18 Mar 2024 13:05:34 +0000   Mon, 18 Mar 2024 13:07:11 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0318 13:11:05.031048    2404 command_runner.go:130] >   DiskPressure     Unknown   Mon, 18 Mar 2024 13:05:34 +0000   Mon, 18 Mar 2024 13:07:11 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0318 13:11:05.031048    2404 command_runner.go:130] >   PIDPressure      Unknown   Mon, 18 Mar 2024 13:05:34 +0000   Mon, 18 Mar 2024 13:07:11 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0318 13:11:05.031048    2404 command_runner.go:130] >   Ready            Unknown   Mon, 18 Mar 2024 13:05:34 +0000   Mon, 18 Mar 2024 13:07:11 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0318 13:11:05.031048    2404 command_runner.go:130] > Addresses:
	I0318 13:11:05.031048    2404 command_runner.go:130] >   InternalIP:  172.30.137.140
	I0318 13:11:05.031299    2404 command_runner.go:130] >   Hostname:    multinode-894400-m03
	I0318 13:11:05.031299    2404 command_runner.go:130] > Capacity:
	I0318 13:11:05.031299    2404 command_runner.go:130] >   cpu:                2
	I0318 13:11:05.031299    2404 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0318 13:11:05.031299    2404 command_runner.go:130] >   hugepages-2Mi:      0
	I0318 13:11:05.031299    2404 command_runner.go:130] >   memory:             2164268Ki
	I0318 13:11:05.031299    2404 command_runner.go:130] >   pods:               110
	I0318 13:11:05.031299    2404 command_runner.go:130] > Allocatable:
	I0318 13:11:05.031299    2404 command_runner.go:130] >   cpu:                2
	I0318 13:11:05.031299    2404 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0318 13:11:05.031299    2404 command_runner.go:130] >   hugepages-2Mi:      0
	I0318 13:11:05.031299    2404 command_runner.go:130] >   memory:             2164268Ki
	I0318 13:11:05.031410    2404 command_runner.go:130] >   pods:               110
	I0318 13:11:05.031410    2404 command_runner.go:130] > System Info:
	I0318 13:11:05.031410    2404 command_runner.go:130] >   Machine ID:                 f96e7421441b46c0a5836e2d53b26708
	I0318 13:11:05.031410    2404 command_runner.go:130] >   System UUID:                7dae14c5-92ae-d842-8ce6-c446c0352eb2
	I0318 13:11:05.031410    2404 command_runner.go:130] >   Boot ID:                    7ef4b157-1893-48d2-9b87-d5f210c11477
	I0318 13:11:05.031410    2404 command_runner.go:130] >   Kernel Version:             5.10.207
	I0318 13:11:05.031410    2404 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0318 13:11:05.031495    2404 command_runner.go:130] >   Operating System:           linux
	I0318 13:11:05.031495    2404 command_runner.go:130] >   Architecture:               amd64
	I0318 13:11:05.031495    2404 command_runner.go:130] >   Container Runtime Version:  docker://25.0.4
	I0318 13:11:05.031495    2404 command_runner.go:130] >   Kubelet Version:            v1.28.4
	I0318 13:11:05.031495    2404 command_runner.go:130] >   Kube-Proxy Version:         v1.28.4
	I0318 13:11:05.031495    2404 command_runner.go:130] > PodCIDR:                      10.244.3.0/24
	I0318 13:11:05.031495    2404 command_runner.go:130] > PodCIDRs:                     10.244.3.0/24
	I0318 13:11:05.031495    2404 command_runner.go:130] > Non-terminated Pods:          (2 in total)
	I0318 13:11:05.031495    2404 command_runner.go:130] >   Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0318 13:11:05.031605    2404 command_runner.go:130] >   ---------                   ----                ------------  ----------  ---------------  -------------  ---
	I0318 13:11:05.031605    2404 command_runner.go:130] >   kube-system                 kindnet-zv9tv       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      16m
	I0318 13:11:05.031605    2404 command_runner.go:130] >   kube-system                 kube-proxy-745w9    0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	I0318 13:11:05.031605    2404 command_runner.go:130] > Allocated resources:
	I0318 13:11:05.031605    2404 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0318 13:11:05.031605    2404 command_runner.go:130] >   Resource           Requests   Limits
	I0318 13:11:05.031605    2404 command_runner.go:130] >   --------           --------   ------
	I0318 13:11:05.031605    2404 command_runner.go:130] >   cpu                100m (5%)  100m (5%)
	I0318 13:11:05.031605    2404 command_runner.go:130] >   memory             50Mi (2%)  50Mi (2%)
	I0318 13:11:05.031605    2404 command_runner.go:130] >   ephemeral-storage  0 (0%)     0 (0%)
	I0318 13:11:05.031739    2404 command_runner.go:130] >   hugepages-2Mi      0 (0%)     0 (0%)
	I0318 13:11:05.031739    2404 command_runner.go:130] > Events:
	I0318 13:11:05.031739    2404 command_runner.go:130] >   Type    Reason                   Age                    From             Message
	I0318 13:11:05.031739    2404 command_runner.go:130] >   ----    ------                   ----                   ----             -------
	I0318 13:11:05.031739    2404 command_runner.go:130] >   Normal  Starting                 15m                    kube-proxy       
	I0318 13:11:05.031739    2404 command_runner.go:130] >   Normal  Starting                 5m36s                  kube-proxy       
	I0318 13:11:05.031856    2404 command_runner.go:130] >   Normal  NodeHasSufficientMemory  16m (x5 over 16m)      kubelet          Node multinode-894400-m03 status is now: NodeHasSufficientMemory
	I0318 13:11:05.031856    2404 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    16m (x5 over 16m)      kubelet          Node multinode-894400-m03 status is now: NodeHasNoDiskPressure
	I0318 13:11:05.031856    2404 command_runner.go:130] >   Normal  NodeHasSufficientPID     16m (x5 over 16m)      kubelet          Node multinode-894400-m03 status is now: NodeHasSufficientPID
	I0318 13:11:05.031856    2404 command_runner.go:130] >   Normal  NodeReady                15m                    kubelet          Node multinode-894400-m03 status is now: NodeReady
	I0318 13:11:05.031856    2404 command_runner.go:130] >   Normal  Starting                 5m40s                  kubelet          Starting kubelet.
	I0318 13:11:05.031856    2404 command_runner.go:130] >   Normal  NodeHasSufficientMemory  5m40s (x2 over 5m40s)  kubelet          Node multinode-894400-m03 status is now: NodeHasSufficientMemory
	I0318 13:11:05.031856    2404 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    5m40s (x2 over 5m40s)  kubelet          Node multinode-894400-m03 status is now: NodeHasNoDiskPressure
	I0318 13:11:05.031958    2404 command_runner.go:130] >   Normal  NodeHasSufficientPID     5m40s (x2 over 5m40s)  kubelet          Node multinode-894400-m03 status is now: NodeHasSufficientPID
	I0318 13:11:05.031958    2404 command_runner.go:130] >   Normal  NodeAllocatableEnforced  5m40s                  kubelet          Updated Node Allocatable limit across pods
	I0318 13:11:05.031958    2404 command_runner.go:130] >   Normal  RegisteredNode           5m39s                  node-controller  Node multinode-894400-m03 event: Registered Node multinode-894400-m03 in Controller
	I0318 13:11:05.031958    2404 command_runner.go:130] >   Normal  NodeReady                5m31s                  kubelet          Node multinode-894400-m03 status is now: NodeReady
	I0318 13:11:05.031958    2404 command_runner.go:130] >   Normal  NodeNotReady             3m54s                  node-controller  Node multinode-894400-m03 status is now: NodeNotReady
	I0318 13:11:05.031958    2404 command_runner.go:130] >   Normal  RegisteredNode           64s                    node-controller  Node multinode-894400-m03 event: Registered Node multinode-894400-m03 in Controller
	I0318 13:11:05.041887    2404 logs.go:123] Gathering logs for etcd [5f0887d1e691] ...
	I0318 13:11:05.042817    2404 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f0887d1e691"
	I0318 13:11:05.069165    2404 command_runner.go:130] ! {"level":"warn","ts":"2024-03-18T13:09:44.778754Z","caller":"embed/config.go:673","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I0318 13:11:05.069606    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:44.779618Z","caller":"etcdmain/etcd.go:73","msg":"Running: ","args":["etcd","--advertise-client-urls=https://172.30.130.156:2379","--cert-file=/var/lib/minikube/certs/etcd/server.crt","--client-cert-auth=true","--data-dir=/var/lib/minikube/etcd","--experimental-initial-corrupt-check=true","--experimental-watch-progress-notify-interval=5s","--initial-advertise-peer-urls=https://172.30.130.156:2380","--initial-cluster=multinode-894400=https://172.30.130.156:2380","--key-file=/var/lib/minikube/certs/etcd/server.key","--listen-client-urls=https://127.0.0.1:2379,https://172.30.130.156:2379","--listen-metrics-urls=http://127.0.0.1:2381","--listen-peer-urls=https://172.30.130.156:2380","--name=multinode-894400","--peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt","--peer-client-cert-auth=true","--peer-key-file=/var/lib/minikube/certs/etcd/peer.key","--peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--proxy-refresh-interval=70000","--snapshot-count=10000","--trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt"]}
	I0318 13:11:05.074954    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:44.780287Z","caller":"etcdmain/etcd.go:116","msg":"server has been already initialized","data-dir":"/var/lib/minikube/etcd","dir-type":"member"}
	I0318 13:11:05.074988    2404 command_runner.go:130] ! {"level":"warn","ts":"2024-03-18T13:09:44.780316Z","caller":"embed/config.go:673","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I0318 13:11:05.074988    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:44.780326Z","caller":"embed/etcd.go:127","msg":"configuring peer listeners","listen-peer-urls":["https://172.30.130.156:2380"]}
	I0318 13:11:05.075133    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:44.780518Z","caller":"embed/etcd.go:495","msg":"starting with peer TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I0318 13:11:05.075133    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:44.782775Z","caller":"embed/etcd.go:135","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://172.30.130.156:2379"]}
	I0318 13:11:05.075133    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:44.785511Z","caller":"embed/etcd.go:309","msg":"starting an etcd server","etcd-version":"3.5.9","git-sha":"bdbbde998","go-version":"go1.19.9","go-os":"linux","go-arch":"amd64","max-cpu-set":2,"max-cpu-available":2,"member-initialized":true,"name":"multinode-894400","data-dir":"/var/lib/minikube/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/minikube/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://172.30.130.156:2380"],"listen-peer-urls":["https://172.30.130.156:2380"],"advertise-client-urls":["https://172.30.130.156:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.30.130.156:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"cors":["*"],"host-whitelist":["*"],"initial-cluster":"","initial-cluster-state":"new","initial-cluster-token":"","quota-backend-bytes":2147483648,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"initial-corrupt-check":true,"corrupt-check-time-interval":"0s","compact-check-time-enabled":false,"compact-check-time-interval":"1m0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","downgrade-check-interval":"5s"}
	I0318 13:11:05.075133    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:44.809621Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"22.951578ms"}
	I0318 13:11:05.075133    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:44.849189Z","caller":"etcdserver/server.go:530","msg":"No snapshot found. Recovering WAL from scratch!"}
	I0318 13:11:05.075133    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:44.872854Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"2db881e830cc2153","local-member-id":"c2557cd98fa8d31a","commit-index":1981}
	I0318 13:11:05.075133    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:44.87358Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c2557cd98fa8d31a switched to configuration voters=()"}
	I0318 13:11:05.075133    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:44.873736Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c2557cd98fa8d31a became follower at term 2"}
	I0318 13:11:05.075133    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:44.873929Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft c2557cd98fa8d31a [peers: [], term: 2, commit: 1981, applied: 0, lastindex: 1981, lastterm: 2]"}
	I0318 13:11:05.075133    2404 command_runner.go:130] ! {"level":"warn","ts":"2024-03-18T13:09:44.887865Z","caller":"auth/store.go:1238","msg":"simple token is not cryptographically signed"}
	I0318 13:11:05.075133    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:44.892732Z","caller":"mvcc/kvstore.go:323","msg":"restored last compact revision","meta-bucket-name":"meta","meta-bucket-name-key":"finishedCompactRev","restored-compact-revision":1376}
	I0318 13:11:05.075133    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:44.89955Z","caller":"mvcc/kvstore.go:393","msg":"kvstore restored","current-rev":1715}
	I0318 13:11:05.075133    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:44.914592Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	I0318 13:11:05.075133    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:44.926835Z","caller":"etcdserver/corrupt.go:95","msg":"starting initial corruption check","local-member-id":"c2557cd98fa8d31a","timeout":"7s"}
	I0318 13:11:05.075133    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:44.928545Z","caller":"etcdserver/corrupt.go:165","msg":"initial corruption checking passed; no corruption","local-member-id":"c2557cd98fa8d31a"}
	I0318 13:11:05.075133    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:44.930225Z","caller":"etcdserver/server.go:854","msg":"starting etcd server","local-member-id":"c2557cd98fa8d31a","local-server-version":"3.5.9","cluster-version":"to_be_decided"}
	I0318 13:11:05.075133    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:44.930859Z","caller":"etcdserver/server.go:754","msg":"starting initial election tick advance","election-ticks":10}
	I0318 13:11:05.075133    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:44.931762Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c2557cd98fa8d31a switched to configuration voters=(14003235890238378778)"}
	I0318 13:11:05.075133    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:44.932128Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"2db881e830cc2153","local-member-id":"c2557cd98fa8d31a","added-peer-id":"c2557cd98fa8d31a","added-peer-peer-urls":["https://172.30.129.141:2380"]}
	I0318 13:11:05.075728    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:44.933388Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"2db881e830cc2153","local-member-id":"c2557cd98fa8d31a","cluster-version":"3.5"}
	I0318 13:11:05.075728    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:44.933717Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	I0318 13:11:05.075804    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:44.946226Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	I0318 13:11:05.075804    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:44.947818Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	I0318 13:11:05.075907    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:44.948803Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	I0318 13:11:05.075947    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:44.954567Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I0318 13:11:05.076032    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:44.954988Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"c2557cd98fa8d31a","initial-advertise-peer-urls":["https://172.30.130.156:2380"],"listen-peer-urls":["https://172.30.130.156:2380"],"advertise-client-urls":["https://172.30.130.156:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.30.130.156:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	I0318 13:11:05.076032    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:44.955173Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	I0318 13:11:05.076032    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:44.954599Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"172.30.130.156:2380"}
	I0318 13:11:05.076032    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:44.956126Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"172.30.130.156:2380"}
	I0318 13:11:05.076032    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:46.775466Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c2557cd98fa8d31a is starting a new election at term 2"}
	I0318 13:11:05.076032    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:46.775581Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c2557cd98fa8d31a became pre-candidate at term 2"}
	I0318 13:11:05.076032    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:46.775704Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c2557cd98fa8d31a received MsgPreVoteResp from c2557cd98fa8d31a at term 2"}
	I0318 13:11:05.076032    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:46.775731Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c2557cd98fa8d31a became candidate at term 3"}
	I0318 13:11:05.076032    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:46.77574Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c2557cd98fa8d31a received MsgVoteResp from c2557cd98fa8d31a at term 3"}
	I0318 13:11:05.076032    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:46.775752Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c2557cd98fa8d31a became leader at term 3"}
	I0318 13:11:05.076032    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:46.775764Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: c2557cd98fa8d31a elected leader c2557cd98fa8d31a at term 3"}
	I0318 13:11:05.076032    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:46.782683Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"c2557cd98fa8d31a","local-member-attributes":"{Name:multinode-894400 ClientURLs:[https://172.30.130.156:2379]}","request-path":"/0/members/c2557cd98fa8d31a/attributes","cluster-id":"2db881e830cc2153","publish-timeout":"7s"}
	I0318 13:11:05.076032    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:46.78269Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	I0318 13:11:05.076032    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:46.782706Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	I0318 13:11:05.076032    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:46.783976Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.30.130.156:2379"}
	I0318 13:11:05.076032    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:46.783993Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	I0318 13:11:05.076032    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:46.788664Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	I0318 13:11:05.076032    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:46.788817Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	I0318 13:11:05.081940    2404 logs.go:123] Gathering logs for kube-scheduler [e4d42739ce0e] ...
	I0318 13:11:05.081940    2404 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4d42739ce0e"
	I0318 13:11:05.108082    2404 command_runner.go:130] ! I0318 12:47:23.427784       1 serving.go:348] Generated self-signed cert in-memory
	I0318 13:11:05.108386    2404 command_runner.go:130] ! W0318 12:47:24.381993       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	I0318 13:11:05.108386    2404 command_runner.go:130] ! W0318 12:47:24.382186       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	I0318 13:11:05.108699    2404 command_runner.go:130] ! W0318 12:47:24.382237       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	I0318 13:11:05.109472    2404 command_runner.go:130] ! W0318 12:47:24.382251       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0318 13:11:05.109653    2404 command_runner.go:130] ! I0318 12:47:24.461225       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.4"
	I0318 13:11:05.109721    2404 command_runner.go:130] ! I0318 12:47:24.461511       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 13:11:05.110472    2404 command_runner.go:130] ! I0318 12:47:24.465946       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0318 13:11:05.110621    2404 command_runner.go:130] ! I0318 12:47:24.466246       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0318 13:11:05.110621    2404 command_runner.go:130] ! I0318 12:47:24.466280       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0318 13:11:05.110655    2404 command_runner.go:130] ! I0318 12:47:24.473793       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0318 13:11:05.110655    2404 command_runner.go:130] ! W0318 12:47:24.487135       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0318 13:11:05.110741    2404 command_runner.go:130] ! E0318 12:47:24.487240       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0318 13:11:05.110741    2404 command_runner.go:130] ! W0318 12:47:24.519325       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0318 13:11:05.110799    2404 command_runner.go:130] ! E0318 12:47:24.519853       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0318 13:11:05.110799    2404 command_runner.go:130] ! W0318 12:47:24.520361       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0318 13:11:05.110859    2404 command_runner.go:130] ! E0318 12:47:24.520484       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0318 13:11:05.110859    2404 command_runner.go:130] ! W0318 12:47:24.520711       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0318 13:11:05.110924    2404 command_runner.go:130] ! E0318 12:47:24.522735       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0318 13:11:05.110988    2404 command_runner.go:130] ! W0318 12:47:24.523312       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0318 13:11:05.110988    2404 command_runner.go:130] ! E0318 12:47:24.523462       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0318 13:11:05.111044    2404 command_runner.go:130] ! W0318 12:47:24.523710       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0318 13:11:05.111044    2404 command_runner.go:130] ! E0318 12:47:24.523900       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0318 13:11:05.111044    2404 command_runner.go:130] ! W0318 12:47:24.524226       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0318 13:11:05.111117    2404 command_runner.go:130] ! E0318 12:47:24.524422       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0318 13:11:05.111117    2404 command_runner.go:130] ! W0318 12:47:24.524710       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0318 13:11:05.111172    2404 command_runner.go:130] ! E0318 12:47:24.525125       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0318 13:11:05.111235    2404 command_runner.go:130] ! W0318 12:47:24.525523       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0318 13:11:05.111235    2404 command_runner.go:130] ! E0318 12:47:24.525746       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0318 13:11:05.111289    2404 command_runner.go:130] ! W0318 12:47:24.526240       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0318 13:11:05.111289    2404 command_runner.go:130] ! E0318 12:47:24.526443       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0318 13:11:05.111377    2404 command_runner.go:130] ! W0318 12:47:24.526703       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0318 13:11:05.111377    2404 command_runner.go:130] ! E0318 12:47:24.526852       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0318 13:11:05.111432    2404 command_runner.go:130] ! W0318 12:47:24.527382       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0318 13:11:05.111535    2404 command_runner.go:130] ! E0318 12:47:24.527873       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0318 13:11:05.111535    2404 command_runner.go:130] ! W0318 12:47:24.528117       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0318 13:11:05.111606    2404 command_runner.go:130] ! E0318 12:47:24.528748       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0318 13:11:05.111606    2404 command_runner.go:130] ! W0318 12:47:24.529179       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0318 13:11:05.111668    2404 command_runner.go:130] ! E0318 12:47:24.529832       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0318 13:11:05.111668    2404 command_runner.go:130] ! W0318 12:47:24.530406       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0318 13:11:05.111739    2404 command_runner.go:130] ! E0318 12:47:24.532696       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0318 13:11:05.111739    2404 command_runner.go:130] ! W0318 12:47:25.371082       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0318 13:11:05.111739    2404 command_runner.go:130] ! E0318 12:47:25.371130       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0318 13:11:05.111801    2404 command_runner.go:130] ! W0318 12:47:25.374605       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0318 13:11:05.111855    2404 command_runner.go:130] ! E0318 12:47:25.374678       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0318 13:11:05.111855    2404 command_runner.go:130] ! W0318 12:47:25.400777       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0318 13:11:05.111916    2404 command_runner.go:130] ! E0318 12:47:25.400820       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0318 13:11:05.111916    2404 command_runner.go:130] ! W0318 12:47:25.434442       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0318 13:11:05.111977    2404 command_runner.go:130] ! E0318 12:47:25.434526       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0318 13:11:05.111977    2404 command_runner.go:130] ! W0318 12:47:25.456878       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0318 13:11:05.111977    2404 command_runner.go:130] ! E0318 12:47:25.457121       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0318 13:11:05.112053    2404 command_runner.go:130] ! W0318 12:47:25.744652       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0318 13:11:05.112107    2404 command_runner.go:130] ! E0318 12:47:25.744733       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0318 13:11:05.112107    2404 command_runner.go:130] ! W0318 12:47:25.777073       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0318 13:11:05.112185    2404 command_runner.go:130] ! E0318 12:47:25.777145       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0318 13:11:05.112185    2404 command_runner.go:130] ! W0318 12:47:25.850949       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0318 13:11:05.112241    2404 command_runner.go:130] ! E0318 12:47:25.850985       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0318 13:11:05.112241    2404 command_runner.go:130] ! W0318 12:47:25.876908       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0318 13:11:05.112300    2404 command_runner.go:130] ! E0318 12:47:25.877170       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0318 13:11:05.112300    2404 command_runner.go:130] ! W0318 12:47:25.892072       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0318 13:11:05.112372    2404 command_runner.go:130] ! E0318 12:47:25.892099       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0318 13:11:05.112432    2404 command_runner.go:130] ! W0318 12:47:25.988864       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0318 13:11:05.112503    2404 command_runner.go:130] ! E0318 12:47:25.988912       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0318 13:11:05.112503    2404 command_runner.go:130] ! W0318 12:47:26.044749       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0318 13:11:05.112563    2404 command_runner.go:130] ! E0318 12:47:26.044834       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0318 13:11:05.112563    2404 command_runner.go:130] ! W0318 12:47:26.067659       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0318 13:11:05.112625    2404 command_runner.go:130] ! E0318 12:47:26.068250       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0318 13:11:05.112625    2404 command_runner.go:130] ! I0318 12:47:28.178584       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0318 13:11:05.112625    2404 command_runner.go:130] ! I0318 13:07:24.107367       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0318 13:11:05.112686    2404 command_runner.go:130] ! I0318 13:07:24.107975       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0318 13:11:05.112686    2404 command_runner.go:130] ! E0318 13:07:24.108193       1 run.go:74] "command failed" err="finished without leader elect"
	I0318 13:11:05.123795    2404 logs.go:123] Gathering logs for kube-controller-manager [7aa5cf4ec378] ...
	I0318 13:11:05.123795    2404 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7aa5cf4ec378"
	I0318 13:11:05.151527    2404 command_runner.go:130] ! I0318 12:47:22.447675       1 serving.go:348] Generated self-signed cert in-memory
	I0318 13:11:05.152291    2404 command_runner.go:130] ! I0318 12:47:22.964394       1 controllermanager.go:189] "Starting" version="v1.28.4"
	I0318 13:11:05.152291    2404 command_runner.go:130] ! I0318 12:47:22.964509       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 13:11:05.152291    2404 command_runner.go:130] ! I0318 12:47:22.966671       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0318 13:11:05.152291    2404 command_runner.go:130] ! I0318 12:47:22.967091       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0318 13:11:05.152427    2404 command_runner.go:130] ! I0318 12:47:22.968348       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0318 13:11:05.152427    2404 command_runner.go:130] ! I0318 12:47:22.969286       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0318 13:11:05.152427    2404 command_runner.go:130] ! I0318 12:47:27.391471       1 shared_informer.go:311] Waiting for caches to sync for tokens
	I0318 13:11:05.152472    2404 command_runner.go:130] ! I0318 12:47:27.423488       1 controllermanager.go:642] "Started controller" controller="garbage-collector-controller"
	I0318 13:11:05.152472    2404 command_runner.go:130] ! I0318 12:47:27.424256       1 garbagecollector.go:155] "Starting controller" controller="garbagecollector"
	I0318 13:11:05.152502    2404 command_runner.go:130] ! I0318 12:47:27.424289       1 shared_informer.go:311] Waiting for caches to sync for garbage collector
	I0318 13:11:05.152502    2404 command_runner.go:130] ! I0318 12:47:27.424374       1 graph_builder.go:294] "Running" component="GraphBuilder"
	I0318 13:11:05.152502    2404 command_runner.go:130] ! I0318 12:47:27.451725       1 controllermanager.go:642] "Started controller" controller="ephemeral-volume-controller"
	I0318 13:11:05.152502    2404 command_runner.go:130] ! I0318 12:47:27.451967       1 controller.go:169] "Starting ephemeral volume controller"
	I0318 13:11:05.152502    2404 command_runner.go:130] ! I0318 12:47:27.452425       1 shared_informer.go:311] Waiting for caches to sync for ephemeral
	I0318 13:11:05.152502    2404 command_runner.go:130] ! I0318 12:47:27.464873       1 controllermanager.go:642] "Started controller" controller="serviceaccount-controller"
	I0318 13:11:05.152502    2404 command_runner.go:130] ! I0318 12:47:27.465150       1 serviceaccounts_controller.go:111] "Starting service account controller"
	I0318 13:11:05.152502    2404 command_runner.go:130] ! I0318 12:47:27.465172       1 shared_informer.go:311] Waiting for caches to sync for service account
	I0318 13:11:05.152502    2404 command_runner.go:130] ! I0318 12:47:27.491949       1 shared_informer.go:318] Caches are synced for tokens
	I0318 13:11:05.152502    2404 command_runner.go:130] ! I0318 12:47:37.491900       1 range_allocator.go:111] "No Secondary Service CIDR provided. Skipping filtering out secondary service addresses"
	I0318 13:11:05.152502    2404 command_runner.go:130] ! I0318 12:47:37.492009       1 controllermanager.go:642] "Started controller" controller="node-ipam-controller"
	I0318 13:11:05.152502    2404 command_runner.go:130] ! I0318 12:47:37.492602       1 node_ipam_controller.go:162] "Starting ipam controller"
	I0318 13:11:05.152502    2404 command_runner.go:130] ! I0318 12:47:37.492659       1 shared_informer.go:311] Waiting for caches to sync for node
	I0318 13:11:05.152502    2404 command_runner.go:130] ! E0318 12:47:37.494780       1 core.go:213] "Failed to start cloud node lifecycle controller" err="no cloud provider provided"
	I0318 13:11:05.152502    2404 command_runner.go:130] ! I0318 12:47:37.494859       1 controllermanager.go:620] "Warning: skipping controller" controller="cloud-node-lifecycle-controller"
	I0318 13:11:05.152502    2404 command_runner.go:130] ! I0318 12:47:37.511992       1 controllermanager.go:642] "Started controller" controller="persistentvolume-binder-controller"
	I0318 13:11:05.152502    2404 command_runner.go:130] ! I0318 12:47:37.512162       1 pv_controller_base.go:319] "Starting persistent volume controller"
	I0318 13:11:05.152502    2404 command_runner.go:130] ! I0318 12:47:37.512576       1 shared_informer.go:311] Waiting for caches to sync for persistent volume
	I0318 13:11:05.152502    2404 command_runner.go:130] ! I0318 12:47:37.525022       1 controllermanager.go:642] "Started controller" controller="persistentvolume-protection-controller"
	I0318 13:11:05.152502    2404 command_runner.go:130] ! I0318 12:47:37.525273       1 pv_protection_controller.go:78] "Starting PV protection controller"
	I0318 13:11:05.152502    2404 command_runner.go:130] ! I0318 12:47:37.525287       1 shared_informer.go:311] Waiting for caches to sync for PV protection
	I0318 13:11:05.152502    2404 command_runner.go:130] ! I0318 12:47:37.540701       1 controllermanager.go:642] "Started controller" controller="endpointslice-controller"
	I0318 13:11:05.152502    2404 command_runner.go:130] ! I0318 12:47:37.540905       1 endpointslice_controller.go:264] "Starting endpoint slice controller"
	I0318 13:11:05.152502    2404 command_runner.go:130] ! I0318 12:47:37.540914       1 shared_informer.go:311] Waiting for caches to sync for endpoint_slice
	I0318 13:11:05.153456    2404 command_runner.go:130] ! I0318 12:47:37.562000       1 controllermanager.go:642] "Started controller" controller="replicaset-controller"
	I0318 13:11:05.153747    2404 command_runner.go:130] ! I0318 12:47:37.562256       1 replica_set.go:214] "Starting controller" name="replicaset"
	I0318 13:11:05.153747    2404 command_runner.go:130] ! I0318 12:47:37.562286       1 shared_informer.go:311] Waiting for caches to sync for ReplicaSet
	I0318 13:11:05.153956    2404 command_runner.go:130] ! I0318 12:47:37.574397       1 controllermanager.go:642] "Started controller" controller="persistentvolume-expander-controller"
	I0318 13:11:05.154090    2404 command_runner.go:130] ! I0318 12:47:37.574869       1 expand_controller.go:328] "Starting expand controller"
	I0318 13:11:05.154090    2404 command_runner.go:130] ! I0318 12:47:37.574937       1 shared_informer.go:311] Waiting for caches to sync for expand
	I0318 13:11:05.154090    2404 command_runner.go:130] ! I0318 12:47:37.587914       1 controllermanager.go:642] "Started controller" controller="clusterrole-aggregation-controller"
	I0318 13:11:05.154090    2404 command_runner.go:130] ! I0318 12:47:37.588166       1 clusterroleaggregation_controller.go:189] "Starting ClusterRoleAggregator controller"
	I0318 13:11:05.154090    2404 command_runner.go:130] ! I0318 12:47:37.588199       1 shared_informer.go:311] Waiting for caches to sync for ClusterRoleAggregator
	I0318 13:11:05.154184    2404 command_runner.go:130] ! I0318 12:47:37.609721       1 controllermanager.go:642] "Started controller" controller="daemonset-controller"
	I0318 13:11:05.154184    2404 command_runner.go:130] ! I0318 12:47:37.615354       1 daemon_controller.go:291] "Starting daemon sets controller"
	I0318 13:11:05.154184    2404 command_runner.go:130] ! I0318 12:47:37.615371       1 shared_informer.go:311] Waiting for caches to sync for daemon sets
	I0318 13:11:05.154184    2404 command_runner.go:130] ! I0318 12:47:37.624660       1 controllermanager.go:642] "Started controller" controller="ttl-controller"
	I0318 13:11:05.154184    2404 command_runner.go:130] ! I0318 12:47:37.624898       1 ttl_controller.go:124] "Starting TTL controller"
	I0318 13:11:05.154184    2404 command_runner.go:130] ! I0318 12:47:37.625063       1 shared_informer.go:311] Waiting for caches to sync for TTL
	I0318 13:11:05.154184    2404 command_runner.go:130] ! I0318 12:47:37.637461       1 controllermanager.go:642] "Started controller" controller="ttl-after-finished-controller"
	I0318 13:11:05.154343    2404 command_runner.go:130] ! I0318 12:47:37.637588       1 ttlafterfinished_controller.go:109] "Starting TTL after finished controller"
	I0318 13:11:05.154423    2404 command_runner.go:130] ! I0318 12:47:37.637699       1 shared_informer.go:311] Waiting for caches to sync for TTL after finished
	I0318 13:11:05.154523    2404 command_runner.go:130] ! I0318 12:47:37.649314       1 controllermanager.go:642] "Started controller" controller="deployment-controller"
	I0318 13:11:05.154523    2404 command_runner.go:130] ! I0318 12:47:37.650380       1 deployment_controller.go:168] "Starting controller" controller="deployment"
	I0318 13:11:05.154523    2404 command_runner.go:130] ! I0318 12:47:37.650462       1 shared_informer.go:311] Waiting for caches to sync for deployment
	I0318 13:11:05.154523    2404 command_runner.go:130] ! I0318 12:47:37.830447       1 controllermanager.go:642] "Started controller" controller="disruption-controller"
	I0318 13:11:05.154523    2404 command_runner.go:130] ! I0318 12:47:37.830565       1 disruption.go:433] "Sending events to api server."
	I0318 13:11:05.154636    2404 command_runner.go:130] ! I0318 12:47:37.830686       1 disruption.go:444] "Starting disruption controller"
	I0318 13:11:05.154636    2404 command_runner.go:130] ! I0318 12:47:37.830725       1 shared_informer.go:311] Waiting for caches to sync for disruption
	I0318 13:11:05.154636    2404 command_runner.go:130] ! I0318 12:47:37.985254       1 controllermanager.go:642] "Started controller" controller="endpoints-controller"
	I0318 13:11:05.154636    2404 command_runner.go:130] ! I0318 12:47:37.985453       1 endpoints_controller.go:174] "Starting endpoint controller"
	I0318 13:11:05.154636    2404 command_runner.go:130] ! I0318 12:47:37.985784       1 shared_informer.go:311] Waiting for caches to sync for endpoint
	I0318 13:11:05.154636    2404 command_runner.go:130] ! I0318 12:47:38.288543       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="endpoints"
	I0318 13:11:05.154636    2404 command_runner.go:130] ! I0318 12:47:38.289132       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="poddisruptionbudgets.policy"
	I0318 13:11:05.154722    2404 command_runner.go:130] ! I0318 12:47:38.289248       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="roles.rbac.authorization.k8s.io"
	I0318 13:11:05.154747    2404 command_runner.go:130] ! I0318 12:47:38.289520       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="statefulsets.apps"
	I0318 13:11:05.154747    2404 command_runner.go:130] ! I0318 12:47:38.289722       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="leases.coordination.k8s.io"
	I0318 13:11:05.154747    2404 command_runner.go:130] ! I0318 12:47:38.289927       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="limitranges"
	I0318 13:11:05.154857    2404 command_runner.go:130] ! I0318 12:47:38.290240       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="serviceaccounts"
	I0318 13:11:05.154882    2404 command_runner.go:130] ! I0318 12:47:38.290340       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="daemonsets.apps"
	I0318 13:11:05.154934    2404 command_runner.go:130] ! I0318 12:47:38.290418       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="jobs.batch"
	I0318 13:11:05.154934    2404 command_runner.go:130] ! I0318 12:47:38.290502       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="ingresses.networking.k8s.io"
	I0318 13:11:05.154967    2404 command_runner.go:130] ! I0318 12:47:38.290550       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="rolebindings.rbac.authorization.k8s.io"
	I0318 13:11:05.154967    2404 command_runner.go:130] ! I0318 12:47:38.290591       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="endpointslices.discovery.k8s.io"
	I0318 13:11:05.155005    2404 command_runner.go:130] ! I0318 12:47:38.290851       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="podtemplates"
	I0318 13:11:05.155045    2404 command_runner.go:130] ! I0318 12:47:38.291026       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="controllerrevisions.apps"
	I0318 13:11:05.155045    2404 command_runner.go:130] ! I0318 12:47:38.291117       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="replicasets.apps"
	I0318 13:11:05.155073    2404 command_runner.go:130] ! I0318 12:47:38.291149       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="cronjobs.batch"
	I0318 13:11:05.155073    2404 command_runner.go:130] ! I0318 12:47:38.291277       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="horizontalpodautoscalers.autoscaling"
	I0318 13:11:05.155073    2404 command_runner.go:130] ! I0318 12:47:38.291315       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="deployments.apps"
	I0318 13:11:05.155073    2404 command_runner.go:130] ! I0318 12:47:38.291392       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="csistoragecapacities.storage.k8s.io"
	I0318 13:11:05.155073    2404 command_runner.go:130] ! I0318 12:47:38.291423       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="networkpolicies.networking.k8s.io"
	I0318 13:11:05.155073    2404 command_runner.go:130] ! I0318 12:47:38.291465       1 controllermanager.go:642] "Started controller" controller="resourcequota-controller"
	I0318 13:11:05.155073    2404 command_runner.go:130] ! I0318 12:47:38.291591       1 resource_quota_controller.go:294] "Starting resource quota controller"
	I0318 13:11:05.155073    2404 command_runner.go:130] ! I0318 12:47:38.291607       1 shared_informer.go:311] Waiting for caches to sync for resource quota
	I0318 13:11:05.155073    2404 command_runner.go:130] ! I0318 12:47:38.291720       1 resource_quota_monitor.go:305] "QuotaMonitor running"
	I0318 13:11:05.155073    2404 command_runner.go:130] ! I0318 12:47:38.436018       1 controllermanager.go:642] "Started controller" controller="job-controller"
	I0318 13:11:05.155073    2404 command_runner.go:130] ! I0318 12:47:38.436093       1 job_controller.go:226] "Starting job controller"
	I0318 13:11:05.155073    2404 command_runner.go:130] ! I0318 12:47:38.436112       1 shared_informer.go:311] Waiting for caches to sync for job
	I0318 13:11:05.155073    2404 command_runner.go:130] ! I0318 12:47:38.731490       1 controllermanager.go:642] "Started controller" controller="horizontal-pod-autoscaler-controller"
	I0318 13:11:05.155073    2404 command_runner.go:130] ! I0318 12:47:38.731606       1 horizontal.go:200] "Starting HPA controller"
	I0318 13:11:05.155073    2404 command_runner.go:130] ! I0318 12:47:38.731671       1 shared_informer.go:311] Waiting for caches to sync for HPA
	I0318 13:11:05.155073    2404 command_runner.go:130] ! I0318 12:47:38.886224       1 controllermanager.go:642] "Started controller" controller="certificatesigningrequest-approving-controller"
	I0318 13:11:05.155073    2404 command_runner.go:130] ! I0318 12:47:38.886401       1 certificate_controller.go:115] "Starting certificate controller" name="csrapproving"
	I0318 13:11:05.155073    2404 command_runner.go:130] ! I0318 12:47:38.886705       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrapproving
	I0318 13:11:05.155073    2404 command_runner.go:130] ! I0318 12:47:38.930325       1 controllermanager.go:642] "Started controller" controller="certificatesigningrequest-cleaner-controller"
	I0318 13:11:05.155073    2404 command_runner.go:130] ! I0318 12:47:38.930354       1 core.go:228] "Warning: configure-cloud-routes is set, but no cloud provider specified. Will not configure cloud provider routes."
	I0318 13:11:05.155073    2404 command_runner.go:130] ! I0318 12:47:38.930362       1 controllermanager.go:620] "Warning: skipping controller" controller="node-route-controller"
	I0318 13:11:05.155073    2404 command_runner.go:130] ! I0318 12:47:38.930398       1 cleaner.go:83] "Starting CSR cleaner controller"
	I0318 13:11:05.155073    2404 command_runner.go:130] ! I0318 12:47:39.085782       1 controllermanager.go:642] "Started controller" controller="persistentvolume-attach-detach-controller"
	I0318 13:11:05.155073    2404 command_runner.go:130] ! I0318 12:47:39.085905       1 attach_detach_controller.go:337] "Starting attach detach controller"
	I0318 13:11:05.155073    2404 command_runner.go:130] ! I0318 12:47:39.085920       1 shared_informer.go:311] Waiting for caches to sync for attach detach
	I0318 13:11:05.155616    2404 command_runner.go:130] ! I0318 12:47:39.236755       1 controllermanager.go:642] "Started controller" controller="endpointslice-mirroring-controller"
	I0318 13:11:05.155616    2404 command_runner.go:130] ! I0318 12:47:39.237434       1 endpointslicemirroring_controller.go:223] "Starting EndpointSliceMirroring controller"
	I0318 13:11:05.155616    2404 command_runner.go:130] ! I0318 12:47:39.237522       1 shared_informer.go:311] Waiting for caches to sync for endpoint_slice_mirroring
	I0318 13:11:05.155616    2404 command_runner.go:130] ! I0318 12:47:39.390953       1 controllermanager.go:642] "Started controller" controller="replicationcontroller-controller"
	I0318 13:11:05.155837    2404 command_runner.go:130] ! I0318 12:47:39.391480       1 replica_set.go:214] "Starting controller" name="replicationcontroller"
	I0318 13:11:05.155837    2404 command_runner.go:130] ! I0318 12:47:39.391646       1 shared_informer.go:311] Waiting for caches to sync for ReplicationController
	I0318 13:11:05.155889    2404 command_runner.go:130] ! I0318 12:47:39.535570       1 controllermanager.go:642] "Started controller" controller="cronjob-controller"
	I0318 13:11:05.155889    2404 command_runner.go:130] ! I0318 12:47:39.536071       1 cronjob_controllerv2.go:139] "Starting cronjob controller v2"
	I0318 13:11:05.155889    2404 command_runner.go:130] ! I0318 12:47:39.536172       1 shared_informer.go:311] Waiting for caches to sync for cronjob
	I0318 13:11:05.155889    2404 command_runner.go:130] ! I0318 12:47:39.582776       1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-kubelet-serving"
	I0318 13:11:05.155889    2404 command_runner.go:130] ! I0318 12:47:39.582876       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I0318 13:11:05.155969    2404 command_runner.go:130] ! I0318 12:47:39.582912       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0318 13:11:05.155969    2404 command_runner.go:130] ! I0318 12:47:39.584602       1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-kubelet-client"
	I0318 13:11:05.156009    2404 command_runner.go:130] ! I0318 12:47:39.584677       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I0318 13:11:05.156009    2404 command_runner.go:130] ! I0318 12:47:39.584724       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0318 13:11:05.156009    2404 command_runner.go:130] ! I0318 12:47:39.585974       1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-kube-apiserver-client"
	I0318 13:11:05.156050    2404 command_runner.go:130] ! I0318 12:47:39.585990       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I0318 13:11:05.156080    2404 command_runner.go:130] ! I0318 12:47:39.586012       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0318 13:11:05.156080    2404 command_runner.go:130] ! I0318 12:47:39.586910       1 controllermanager.go:642] "Started controller" controller="certificatesigningrequest-signing-controller"
	I0318 13:11:05.156080    2404 command_runner.go:130] ! I0318 12:47:39.586968       1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-legacy-unknown"
	I0318 13:11:05.156120    2404 command_runner.go:130] ! I0318 12:47:39.586975       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0318 13:11:05.156120    2404 command_runner.go:130] ! I0318 12:47:39.587044       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0318 13:11:05.156154    2404 command_runner.go:130] ! I0318 12:47:39.735265       1 controllermanager.go:642] "Started controller" controller="token-cleaner-controller"
	I0318 13:11:05.156186    2404 command_runner.go:130] ! I0318 12:47:39.735467       1 tokencleaner.go:112] "Starting token cleaner controller"
	I0318 13:11:05.156186    2404 command_runner.go:130] ! I0318 12:47:39.735494       1 shared_informer.go:311] Waiting for caches to sync for token_cleaner
	I0318 13:11:05.156186    2404 command_runner.go:130] ! I0318 12:47:39.735502       1 shared_informer.go:318] Caches are synced for token_cleaner
	I0318 13:11:05.156186    2404 command_runner.go:130] ! I0318 12:47:39.783594       1 node_lifecycle_controller.go:431] "Controller will reconcile labels"
	I0318 13:11:05.156186    2404 command_runner.go:130] ! I0318 12:47:39.783722       1 controllermanager.go:642] "Started controller" controller="node-lifecycle-controller"
	I0318 13:11:05.156186    2404 command_runner.go:130] ! I0318 12:47:39.783841       1 node_lifecycle_controller.go:465] "Sending events to api server"
	I0318 13:11:05.156186    2404 command_runner.go:130] ! I0318 12:47:39.783860       1 node_lifecycle_controller.go:476] "Starting node controller"
	I0318 13:11:05.156186    2404 command_runner.go:130] ! I0318 12:47:39.784031       1 shared_informer.go:311] Waiting for caches to sync for taint
	I0318 13:11:05.156186    2404 command_runner.go:130] ! E0318 12:47:39.937206       1 core.go:92] "Failed to start service controller" err="WARNING: no cloud provider provided, services of type LoadBalancer will fail"
	I0318 13:11:05.156186    2404 command_runner.go:130] ! I0318 12:47:39.937229       1 controllermanager.go:620] "Warning: skipping controller" controller="service-lb-controller"
	I0318 13:11:05.156186    2404 command_runner.go:130] ! I0318 12:47:40.089508       1 controllermanager.go:642] "Started controller" controller="persistentvolumeclaim-protection-controller"
	I0318 13:11:05.156186    2404 command_runner.go:130] ! I0318 12:47:40.089701       1 pvc_protection_controller.go:102] "Starting PVC protection controller"
	I0318 13:11:05.156186    2404 command_runner.go:130] ! I0318 12:47:40.089793       1 shared_informer.go:311] Waiting for caches to sync for PVC protection
	I0318 13:11:05.156186    2404 command_runner.go:130] ! I0318 12:47:40.235860       1 controllermanager.go:642] "Started controller" controller="root-ca-certificate-publisher-controller"
	I0318 13:11:05.156186    2404 command_runner.go:130] ! I0318 12:47:40.235977       1 publisher.go:102] "Starting root CA cert publisher controller"
	I0318 13:11:05.156186    2404 command_runner.go:130] ! I0318 12:47:40.236063       1 shared_informer.go:311] Waiting for caches to sync for crt configmap
	I0318 13:11:05.156186    2404 command_runner.go:130] ! I0318 12:47:40.386545       1 controllermanager.go:642] "Started controller" controller="pod-garbage-collector-controller"
	I0318 13:11:05.156186    2404 command_runner.go:130] ! I0318 12:47:40.386692       1 gc_controller.go:101] "Starting GC controller"
	I0318 13:11:05.156186    2404 command_runner.go:130] ! I0318 12:47:40.386704       1 shared_informer.go:311] Waiting for caches to sync for GC
	I0318 13:11:05.156186    2404 command_runner.go:130] ! I0318 12:47:40.644175       1 controllermanager.go:642] "Started controller" controller="namespace-controller"
	I0318 13:11:05.156186    2404 command_runner.go:130] ! I0318 12:47:40.644284       1 namespace_controller.go:197] "Starting namespace controller"
	I0318 13:11:05.156186    2404 command_runner.go:130] ! I0318 12:47:40.644293       1 shared_informer.go:311] Waiting for caches to sync for namespace
	I0318 13:11:05.156186    2404 command_runner.go:130] ! I0318 12:47:40.784991       1 controllermanager.go:642] "Started controller" controller="statefulset-controller"
	I0318 13:11:05.156853    2404 command_runner.go:130] ! I0318 12:47:40.785464       1 stateful_set.go:161] "Starting stateful set controller"
	I0318 13:11:05.156853    2404 command_runner.go:130] ! I0318 12:47:40.785492       1 shared_informer.go:311] Waiting for caches to sync for stateful set
	I0318 13:11:05.156853    2404 command_runner.go:130] ! I0318 12:47:40.936785       1 controllermanager.go:642] "Started controller" controller="bootstrap-signer-controller"
	I0318 13:11:05.156853    2404 command_runner.go:130] ! I0318 12:47:40.939800       1 shared_informer.go:311] Waiting for caches to sync for bootstrap_signer
	I0318 13:11:05.156853    2404 command_runner.go:130] ! I0318 12:47:40.947184       1 shared_informer.go:311] Waiting for caches to sync for resource quota
	I0318 13:11:05.156853    2404 command_runner.go:130] ! I0318 12:47:40.968017       1 shared_informer.go:318] Caches are synced for service account
	I0318 13:11:05.156853    2404 command_runner.go:130] ! I0318 12:47:40.971773       1 shared_informer.go:311] Waiting for caches to sync for garbage collector
	I0318 13:11:05.156853    2404 command_runner.go:130] ! I0318 12:47:40.976691       1 shared_informer.go:318] Caches are synced for expand
	I0318 13:11:05.156853    2404 command_runner.go:130] ! I0318 12:47:40.986014       1 shared_informer.go:318] Caches are synced for endpoint
	I0318 13:11:05.156853    2404 command_runner.go:130] ! I0318 12:47:40.995675       1 shared_informer.go:318] Caches are synced for PVC protection
	I0318 13:11:05.156853    2404 command_runner.go:130] ! I0318 12:47:41.009015       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-894400\" does not exist"
	I0318 13:11:05.156853    2404 command_runner.go:130] ! I0318 12:47:41.012612       1 shared_informer.go:318] Caches are synced for persistent volume
	I0318 13:11:05.156853    2404 command_runner.go:130] ! I0318 12:47:41.016383       1 shared_informer.go:318] Caches are synced for daemon sets
	I0318 13:11:05.156853    2404 command_runner.go:130] ! I0318 12:47:41.025198       1 shared_informer.go:318] Caches are synced for TTL
	I0318 13:11:05.156853    2404 command_runner.go:130] ! I0318 12:47:41.025462       1 shared_informer.go:318] Caches are synced for PV protection
	I0318 13:11:05.156853    2404 command_runner.go:130] ! I0318 12:47:41.032086       1 shared_informer.go:318] Caches are synced for HPA
	I0318 13:11:05.156853    2404 command_runner.go:130] ! I0318 12:47:41.036463       1 shared_informer.go:318] Caches are synced for job
	I0318 13:11:05.157395    2404 command_runner.go:130] ! I0318 12:47:41.036622       1 shared_informer.go:318] Caches are synced for cronjob
	I0318 13:11:05.157451    2404 command_runner.go:130] ! I0318 12:47:41.036726       1 shared_informer.go:318] Caches are synced for crt configmap
	I0318 13:11:05.157451    2404 command_runner.go:130] ! I0318 12:47:41.037735       1 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring
	I0318 13:11:05.157489    2404 command_runner.go:130] ! I0318 12:47:41.037818       1 shared_informer.go:318] Caches are synced for TTL after finished
	I0318 13:11:05.157489    2404 command_runner.go:130] ! I0318 12:47:41.040360       1 shared_informer.go:318] Caches are synced for bootstrap_signer
	I0318 13:11:05.157489    2404 command_runner.go:130] ! I0318 12:47:41.041850       1 shared_informer.go:318] Caches are synced for endpoint_slice
	I0318 13:11:05.157489    2404 command_runner.go:130] ! I0318 12:47:41.045379       1 shared_informer.go:318] Caches are synced for namespace
	I0318 13:11:05.157489    2404 command_runner.go:130] ! I0318 12:47:41.051530       1 shared_informer.go:318] Caches are synced for deployment
	I0318 13:11:05.157489    2404 command_runner.go:130] ! I0318 12:47:41.053151       1 shared_informer.go:318] Caches are synced for ephemeral
	I0318 13:11:05.157489    2404 command_runner.go:130] ! I0318 12:47:41.063027       1 shared_informer.go:318] Caches are synced for ReplicaSet
	I0318 13:11:05.157489    2404 command_runner.go:130] ! I0318 12:47:41.084212       1 shared_informer.go:318] Caches are synced for taint
	I0318 13:11:05.157489    2404 command_runner.go:130] ! I0318 12:47:41.084612       1 node_lifecycle_controller.go:1225] "Initializing eviction metric for zone" zone=""
	I0318 13:11:05.157489    2404 command_runner.go:130] ! I0318 12:47:41.087983       1 taint_manager.go:205] "Starting NoExecuteTaintManager"
	I0318 13:11:05.157489    2404 command_runner.go:130] ! I0318 12:47:41.088464       1 taint_manager.go:210] "Sending events to api server"
	I0318 13:11:05.157489    2404 command_runner.go:130] ! I0318 12:47:41.089485       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-894400"
	I0318 13:11:05.157489    2404 command_runner.go:130] ! I0318 12:47:41.089526       1 node_lifecycle_controller.go:1029] "Controller detected that all Nodes are not-Ready. Entering master disruption mode"
	I0318 13:11:05.157489    2404 command_runner.go:130] ! I0318 12:47:41.089552       1 shared_informer.go:318] Caches are synced for attach detach
	I0318 13:11:05.157489    2404 command_runner.go:130] ! I0318 12:47:41.089942       1 shared_informer.go:318] Caches are synced for GC
	I0318 13:11:05.157489    2404 command_runner.go:130] ! I0318 12:47:41.090031       1 event.go:307] "Event occurred" object="multinode-894400" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-894400 event: Registered Node multinode-894400 in Controller"
	I0318 13:11:05.157489    2404 command_runner.go:130] ! I0318 12:47:41.090167       1 shared_informer.go:318] Caches are synced for ClusterRoleAggregator
	I0318 13:11:05.157489    2404 command_runner.go:130] ! I0318 12:47:41.090848       1 shared_informer.go:318] Caches are synced for stateful set
	I0318 13:11:05.157489    2404 command_runner.go:130] ! I0318 12:47:41.092093       1 shared_informer.go:318] Caches are synced for ReplicationController
	I0318 13:11:05.157489    2404 command_runner.go:130] ! I0318 12:47:41.092684       1 shared_informer.go:318] Caches are synced for node
	I0318 13:11:05.157489    2404 command_runner.go:130] ! I0318 12:47:41.093255       1 range_allocator.go:174] "Sending events to api server"
	I0318 13:11:05.157489    2404 command_runner.go:130] ! I0318 12:47:41.093537       1 range_allocator.go:178] "Starting range CIDR allocator"
	I0318 13:11:05.157489    2404 command_runner.go:130] ! I0318 12:47:41.093851       1 shared_informer.go:311] Waiting for caches to sync for cidrallocator
	I0318 13:11:05.157489    2404 command_runner.go:130] ! I0318 12:47:41.093958       1 shared_informer.go:318] Caches are synced for cidrallocator
	I0318 13:11:05.157489    2404 command_runner.go:130] ! I0318 12:47:41.119414       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-894400" podCIDRs=["10.244.0.0/24"]
	I0318 13:11:05.157489    2404 command_runner.go:130] ! I0318 12:47:41.148134       1 shared_informer.go:318] Caches are synced for resource quota
	I0318 13:11:05.157489    2404 command_runner.go:130] ! I0318 12:47:41.183853       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-serving
	I0318 13:11:05.157489    2404 command_runner.go:130] ! I0318 12:47:41.184949       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-client
	I0318 13:11:05.157489    2404 command_runner.go:130] ! I0318 12:47:41.186043       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0318 13:11:05.157489    2404 command_runner.go:130] ! I0318 12:47:41.187192       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-legacy-unknown
	I0318 13:11:05.157489    2404 command_runner.go:130] ! I0318 12:47:41.187229       1 shared_informer.go:318] Caches are synced for certificate-csrapproving
	I0318 13:11:05.157489    2404 command_runner.go:130] ! I0318 12:47:41.192066       1 shared_informer.go:318] Caches are synced for resource quota
	I0318 13:11:05.157489    2404 command_runner.go:130] ! I0318 12:47:41.233781       1 shared_informer.go:318] Caches are synced for disruption
	I0318 13:11:05.158014    2404 command_runner.go:130] ! I0318 12:47:41.572914       1 shared_informer.go:318] Caches are synced for garbage collector
	I0318 13:11:05.158014    2404 command_runner.go:130] ! I0318 12:47:41.612936       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-mc5tv"
	I0318 13:11:05.158084    2404 command_runner.go:130] ! I0318 12:47:41.615780       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-hhsxh"
	I0318 13:11:05.158084    2404 command_runner.go:130] ! I0318 12:47:41.625871       1 shared_informer.go:318] Caches are synced for garbage collector
	I0318 13:11:05.158130    2404 command_runner.go:130] ! I0318 12:47:41.626335       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0318 13:11:05.158130    2404 command_runner.go:130] ! I0318 12:47:41.893141       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5dd5756b68 to 2"
	I0318 13:11:05.158130    2404 command_runner.go:130] ! I0318 12:47:42.112244       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-vl6jr"
	I0318 13:11:05.158130    2404 command_runner.go:130] ! I0318 12:47:42.148022       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-456tm"
	I0318 13:11:05.158130    2404 command_runner.go:130] ! I0318 12:47:42.181940       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="289.6659ms"
	I0318 13:11:05.158130    2404 command_runner.go:130] ! I0318 12:47:42.245823       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="63.840303ms"
	I0318 13:11:05.158130    2404 command_runner.go:130] ! I0318 12:47:42.246151       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="107.996µs"
	I0318 13:11:05.158130    2404 command_runner.go:130] ! I0318 12:47:42.470958       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I0318 13:11:05.158130    2404 command_runner.go:130] ! I0318 12:47:42.530265       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-vl6jr"
	I0318 13:11:05.158130    2404 command_runner.go:130] ! I0318 12:47:42.551794       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="82.491503ms"
	I0318 13:11:05.158130    2404 command_runner.go:130] ! I0318 12:47:42.587026       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="35.184179ms"
	I0318 13:11:05.158130    2404 command_runner.go:130] ! I0318 12:47:42.587126       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="64.497µs"
	I0318 13:11:05.158130    2404 command_runner.go:130] ! I0318 12:47:52.958102       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="163.297µs"
	I0318 13:11:05.158130    2404 command_runner.go:130] ! I0318 12:47:52.991751       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="32.399µs"
	I0318 13:11:05.158130    2404 command_runner.go:130] ! I0318 12:47:54.194916       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="59.289µs"
	I0318 13:11:05.158130    2404 command_runner.go:130] ! I0318 12:47:55.238088       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="27.595936ms"
	I0318 13:11:05.158130    2404 command_runner.go:130] ! I0318 12:47:55.238222       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="45.592µs"
	I0318 13:11:05.158130    2404 command_runner.go:130] ! I0318 12:47:56.090728       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I0318 13:11:05.158130    2404 command_runner.go:130] ! I0318 12:50:34.419485       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-894400-m02\" does not exist"
	I0318 13:11:05.158130    2404 command_runner.go:130] ! I0318 12:50:34.437576       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-894400-m02" podCIDRs=["10.244.1.0/24"]
	I0318 13:11:05.158130    2404 command_runner.go:130] ! I0318 12:50:34.454919       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-8bdmn"
	I0318 13:11:05.158130    2404 command_runner.go:130] ! I0318 12:50:34.479103       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-k5lpg"
	I0318 13:11:05.158130    2404 command_runner.go:130] ! I0318 12:50:36.121925       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-894400-m02"
	I0318 13:11:05.158650    2404 command_runner.go:130] ! I0318 12:50:36.122368       1 event.go:307] "Event occurred" object="multinode-894400-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-894400-m02 event: Registered Node multinode-894400-m02 in Controller"
	I0318 13:11:05.158650    2404 command_runner.go:130] ! I0318 12:50:52.539955       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-894400-m02"
	I0318 13:11:05.158725    2404 command_runner.go:130] ! I0318 12:51:17.964827       1 event.go:307] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-5b5d89c9d6 to 2"
	I0318 13:11:05.158725    2404 command_runner.go:130] ! I0318 12:51:17.986964       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5b5d89c9d6-8btgf"
	I0318 13:11:05.158797    2404 command_runner.go:130] ! I0318 12:51:18.004592       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5b5d89c9d6-c2997"
	I0318 13:11:05.158797    2404 command_runner.go:130] ! I0318 12:51:18.026894       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="62.79508ms"
	I0318 13:11:05.158857    2404 command_runner.go:130] ! I0318 12:51:18.045074       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="17.513513ms"
	I0318 13:11:05.158857    2404 command_runner.go:130] ! I0318 12:51:18.046404       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="36.101µs"
	I0318 13:11:05.158884    2404 command_runner.go:130] ! I0318 12:51:18.054157       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="337.914µs"
	I0318 13:11:05.158884    2404 command_runner.go:130] ! I0318 12:51:18.060516       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="26.701µs"
	I0318 13:11:05.158884    2404 command_runner.go:130] ! I0318 12:51:20.804047       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="10.125602ms"
	I0318 13:11:05.158884    2404 command_runner.go:130] ! I0318 12:51:20.804333       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="159.502µs"
	I0318 13:11:05.158884    2404 command_runner.go:130] ! I0318 12:51:21.064706       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="11.788417ms"
	I0318 13:11:05.158884    2404 command_runner.go:130] ! I0318 12:51:21.065229       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="82.401µs"
	I0318 13:11:05.158884    2404 command_runner.go:130] ! I0318 12:55:05.793350       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-894400-m03\" does not exist"
	I0318 13:11:05.158884    2404 command_runner.go:130] ! I0318 12:55:05.797095       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-894400-m02"
	I0318 13:11:05.158884    2404 command_runner.go:130] ! I0318 12:55:05.823205       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-zv9tv"
	I0318 13:11:05.158884    2404 command_runner.go:130] ! I0318 12:55:05.835101       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-894400-m03" podCIDRs=["10.244.2.0/24"]
	I0318 13:11:05.158884    2404 command_runner.go:130] ! I0318 12:55:05.835149       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-745w9"
	I0318 13:11:05.158884    2404 command_runner.go:130] ! I0318 12:55:06.188986       1 event.go:307] "Event occurred" object="multinode-894400-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-894400-m03 event: Registered Node multinode-894400-m03 in Controller"
	I0318 13:11:05.158884    2404 command_runner.go:130] ! I0318 12:55:06.188988       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-894400-m03"
	I0318 13:11:05.158884    2404 command_runner.go:130] ! I0318 12:55:23.671742       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-894400-m02"
	I0318 13:11:05.158884    2404 command_runner.go:130] ! I0318 13:02:46.325539       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-894400-m02"
	I0318 13:11:05.158884    2404 command_runner.go:130] ! I0318 13:02:46.325935       1 event.go:307] "Event occurred" object="multinode-894400-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-894400-m03 status is now: NodeNotReady"
	I0318 13:11:05.158884    2404 command_runner.go:130] ! I0318 13:02:46.344510       1 event.go:307] "Event occurred" object="kube-system/kindnet-zv9tv" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0318 13:11:05.158884    2404 command_runner.go:130] ! I0318 13:02:46.368811       1 event.go:307] "Event occurred" object="kube-system/kube-proxy-745w9" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0318 13:11:05.158884    2404 command_runner.go:130] ! I0318 13:05:19.649225       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-894400-m02"
	I0318 13:11:05.158884    2404 command_runner.go:130] ! I0318 13:05:21.403124       1 event.go:307] "Event occurred" object="multinode-894400-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RemovingNode" message="Node multinode-894400-m03 event: Removing Node multinode-894400-m03 from Controller"
	I0318 13:11:05.158884    2404 command_runner.go:130] ! I0318 13:05:25.832056       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-894400-m03\" does not exist"
	I0318 13:11:05.158884    2404 command_runner.go:130] ! I0318 13:05:25.832348       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-894400-m02"
	I0318 13:11:05.158884    2404 command_runner.go:130] ! I0318 13:05:25.841443       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-894400-m03" podCIDRs=["10.244.3.0/24"]
	I0318 13:11:05.158884    2404 command_runner.go:130] ! I0318 13:05:26.404299       1 event.go:307] "Event occurred" object="multinode-894400-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-894400-m03 event: Registered Node multinode-894400-m03 in Controller"
	I0318 13:11:05.159614    2404 command_runner.go:130] ! I0318 13:05:34.080951       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-894400-m02"
	I0318 13:11:05.159743    2404 command_runner.go:130] ! I0318 13:07:11.961036       1 event.go:307] "Event occurred" object="multinode-894400-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-894400-m03 status is now: NodeNotReady"
	I0318 13:11:05.159743    2404 command_runner.go:130] ! I0318 13:07:11.961077       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-894400-m02"
	I0318 13:11:05.159807    2404 command_runner.go:130] ! I0318 13:07:12.051526       1 event.go:307] "Event occurred" object="kube-system/kindnet-zv9tv" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0318 13:11:05.159807    2404 command_runner.go:130] ! I0318 13:07:12.098168       1 event.go:307] "Event occurred" object="kube-system/kube-proxy-745w9" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0318 13:11:05.175954    2404 logs.go:123] Gathering logs for kindnet [c4d7018ad23a] ...
	I0318 13:11:05.175954    2404 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4d7018ad23a"
	I0318 13:11:05.208668    2404 command_runner.go:130] ! I0318 12:56:20.031595       1 main.go:227] handling current node
	I0318 13:11:05.208668    2404 command_runner.go:130] ! I0318 12:56:20.031610       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:05.208668    2404 command_runner.go:130] ! I0318 12:56:20.031618       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:05.208668    2404 command_runner.go:130] ! I0318 12:56:20.031800       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:11:05.208668    2404 command_runner.go:130] ! I0318 12:56:20.031837       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:11:05.208668    2404 command_runner.go:130] ! I0318 12:56:30.038705       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:05.209637    2404 command_runner.go:130] ! I0318 12:56:30.038812       1 main.go:227] handling current node
	I0318 13:11:05.209637    2404 command_runner.go:130] ! I0318 12:56:30.038826       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:05.209637    2404 command_runner.go:130] ! I0318 12:56:30.038833       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:05.209637    2404 command_runner.go:130] ! I0318 12:56:30.039027       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:11:05.209637    2404 command_runner.go:130] ! I0318 12:56:30.039347       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:11:05.209637    2404 command_runner.go:130] ! I0318 12:56:40.051950       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:05.209637    2404 command_runner.go:130] ! I0318 12:56:40.052053       1 main.go:227] handling current node
	I0318 13:11:05.209637    2404 command_runner.go:130] ! I0318 12:56:40.052086       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:05.209637    2404 command_runner.go:130] ! I0318 12:56:40.052204       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:05.209637    2404 command_runner.go:130] ! I0318 12:56:40.052568       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:11:05.210751    2404 command_runner.go:130] ! I0318 12:56:40.052681       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:11:05.210831    2404 command_runner.go:130] ! I0318 12:56:50.074059       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:05.210831    2404 command_runner.go:130] ! I0318 12:56:50.074164       1 main.go:227] handling current node
	I0318 13:11:05.210859    2404 command_runner.go:130] ! I0318 12:56:50.074183       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:05.210885    2404 command_runner.go:130] ! I0318 12:56:50.074192       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:05.210885    2404 command_runner.go:130] ! I0318 12:56:50.075009       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:11:05.210923    2404 command_runner.go:130] ! I0318 12:56:50.075306       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:11:05.210923    2404 command_runner.go:130] ! I0318 12:57:00.089286       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:05.210973    2404 command_runner.go:130] ! I0318 12:57:00.089382       1 main.go:227] handling current node
	I0318 13:11:05.210973    2404 command_runner.go:130] ! I0318 12:57:00.089397       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:05.211011    2404 command_runner.go:130] ! I0318 12:57:00.089405       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:05.211011    2404 command_runner.go:130] ! I0318 12:57:00.089918       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:11:05.211011    2404 command_runner.go:130] ! I0318 12:57:00.089934       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:11:05.211055    2404 command_runner.go:130] ! I0318 12:57:10.103457       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:05.211055    2404 command_runner.go:130] ! I0318 12:57:10.103575       1 main.go:227] handling current node
	I0318 13:11:05.211055    2404 command_runner.go:130] ! I0318 12:57:10.103607       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:05.211108    2404 command_runner.go:130] ! I0318 12:57:10.103704       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:05.211140    2404 command_runner.go:130] ! I0318 12:57:10.104106       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:11:05.211140    2404 command_runner.go:130] ! I0318 12:57:10.104144       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:11:05.211140    2404 command_runner.go:130] ! I0318 12:57:20.111225       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:05.211176    2404 command_runner.go:130] ! I0318 12:57:20.111346       1 main.go:227] handling current node
	I0318 13:11:05.211176    2404 command_runner.go:130] ! I0318 12:57:20.111360       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:05.212568    2404 command_runner.go:130] ! I0318 12:57:20.111367       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:05.212783    2404 command_runner.go:130] ! I0318 12:57:20.111695       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:11:05.213072    2404 command_runner.go:130] ! I0318 12:57:20.111775       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:11:05.213072    2404 command_runner.go:130] ! I0318 12:57:30.124283       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:05.213072    2404 command_runner.go:130] ! I0318 12:57:30.124477       1 main.go:227] handling current node
	I0318 13:11:05.213072    2404 command_runner.go:130] ! I0318 12:57:30.124495       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:05.213072    2404 command_runner.go:130] ! I0318 12:57:30.124505       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:05.213072    2404 command_runner.go:130] ! I0318 12:57:30.125279       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:11:05.213072    2404 command_runner.go:130] ! I0318 12:57:30.125393       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:11:05.213072    2404 command_runner.go:130] ! I0318 12:57:40.137523       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:05.213072    2404 command_runner.go:130] ! I0318 12:57:40.137766       1 main.go:227] handling current node
	I0318 13:11:05.213072    2404 command_runner.go:130] ! I0318 12:57:40.137807       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:05.213666    2404 command_runner.go:130] ! I0318 12:57:40.137833       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:05.213666    2404 command_runner.go:130] ! I0318 12:57:40.137998       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:11:05.213666    2404 command_runner.go:130] ! I0318 12:57:40.138087       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:11:05.213666    2404 command_runner.go:130] ! I0318 12:57:50.149548       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:05.213666    2404 command_runner.go:130] ! I0318 12:57:50.149697       1 main.go:227] handling current node
	I0318 13:11:05.213666    2404 command_runner.go:130] ! I0318 12:57:50.149712       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:05.213666    2404 command_runner.go:130] ! I0318 12:57:50.149720       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:05.213666    2404 command_runner.go:130] ! I0318 12:57:50.150251       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:11:05.213666    2404 command_runner.go:130] ! I0318 12:57:50.150344       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:11:05.213784    2404 command_runner.go:130] ! I0318 12:58:00.159094       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:05.213784    2404 command_runner.go:130] ! I0318 12:58:00.159284       1 main.go:227] handling current node
	I0318 13:11:05.213784    2404 command_runner.go:130] ! I0318 12:58:00.159340       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:05.213784    2404 command_runner.go:130] ! I0318 12:58:00.159700       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:05.213784    2404 command_runner.go:130] ! I0318 12:58:00.160303       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:11:05.213784    2404 command_runner.go:130] ! I0318 12:58:00.160346       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:11:05.213784    2404 command_runner.go:130] ! I0318 12:58:10.177603       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:05.213784    2404 command_runner.go:130] ! I0318 12:58:10.177780       1 main.go:227] handling current node
	I0318 13:11:05.213873    2404 command_runner.go:130] ! I0318 12:58:10.178122       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:05.213873    2404 command_runner.go:130] ! I0318 12:58:10.178166       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:05.213873    2404 command_runner.go:130] ! I0318 12:58:10.178455       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:11:05.213873    2404 command_runner.go:130] ! I0318 12:58:10.178497       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:11:05.213873    2404 command_runner.go:130] ! I0318 12:58:20.196110       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:05.213873    2404 command_runner.go:130] ! I0318 12:58:20.196144       1 main.go:227] handling current node
	I0318 13:11:05.213957    2404 command_runner.go:130] ! I0318 12:58:20.196236       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:05.213957    2404 command_runner.go:130] ! I0318 12:58:20.196542       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:05.213957    2404 command_runner.go:130] ! I0318 12:58:20.196774       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:11:05.213957    2404 command_runner.go:130] ! I0318 12:58:20.196867       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:11:05.213957    2404 command_runner.go:130] ! I0318 12:58:30.204485       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:05.213957    2404 command_runner.go:130] ! I0318 12:58:30.204515       1 main.go:227] handling current node
	I0318 13:11:05.213957    2404 command_runner.go:130] ! I0318 12:58:30.204528       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:05.213957    2404 command_runner.go:130] ! I0318 12:58:30.204556       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:05.213957    2404 command_runner.go:130] ! I0318 12:58:30.204856       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:11:05.213957    2404 command_runner.go:130] ! I0318 12:58:30.205022       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:11:05.214066    2404 command_runner.go:130] ! I0318 12:58:40.221076       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:05.214066    2404 command_runner.go:130] ! I0318 12:58:40.221184       1 main.go:227] handling current node
	I0318 13:11:05.214066    2404 command_runner.go:130] ! I0318 12:58:40.221201       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:05.214066    2404 command_runner.go:130] ! I0318 12:58:40.221210       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:05.214066    2404 command_runner.go:130] ! I0318 12:58:40.221741       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:11:05.214136    2404 command_runner.go:130] ! I0318 12:58:40.221769       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:11:05.214136    2404 command_runner.go:130] ! I0318 12:58:50.229210       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:05.214136    2404 command_runner.go:130] ! I0318 12:58:50.229302       1 main.go:227] handling current node
	I0318 13:11:05.214173    2404 command_runner.go:130] ! I0318 12:58:50.229317       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:05.214198    2404 command_runner.go:130] ! I0318 12:58:50.229324       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:05.214198    2404 command_runner.go:130] ! I0318 12:58:50.229703       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:11:05.214198    2404 command_runner.go:130] ! I0318 12:58:50.229807       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:11:05.214198    2404 command_runner.go:130] ! I0318 12:59:00.244905       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:05.214198    2404 command_runner.go:130] ! I0318 12:59:00.244992       1 main.go:227] handling current node
	I0318 13:11:05.214287    2404 command_runner.go:130] ! I0318 12:59:00.245007       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:05.214287    2404 command_runner.go:130] ! I0318 12:59:00.245033       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:05.214287    2404 command_runner.go:130] ! I0318 12:59:00.245480       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:11:05.214287    2404 command_runner.go:130] ! I0318 12:59:00.245600       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:11:05.214287    2404 command_runner.go:130] ! I0318 12:59:10.253460       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:05.214287    2404 command_runner.go:130] ! I0318 12:59:10.253563       1 main.go:227] handling current node
	I0318 13:11:05.214287    2404 command_runner.go:130] ! I0318 12:59:10.253579       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:05.214363    2404 command_runner.go:130] ! I0318 12:59:10.253605       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:05.214363    2404 command_runner.go:130] ! I0318 12:59:10.254199       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:11:05.214363    2404 command_runner.go:130] ! I0318 12:59:10.254310       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:11:05.214363    2404 command_runner.go:130] ! I0318 12:59:20.270774       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:05.214415    2404 command_runner.go:130] ! I0318 12:59:20.270870       1 main.go:227] handling current node
	I0318 13:11:05.214415    2404 command_runner.go:130] ! I0318 12:59:20.270886       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:05.214415    2404 command_runner.go:130] ! I0318 12:59:20.270894       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:05.214415    2404 command_runner.go:130] ! I0318 12:59:20.271275       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:11:05.214415    2404 command_runner.go:130] ! I0318 12:59:20.271367       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:11:05.214516    2404 command_runner.go:130] ! I0318 12:59:30.281784       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:05.214516    2404 command_runner.go:130] ! I0318 12:59:30.281809       1 main.go:227] handling current node
	I0318 13:11:05.214516    2404 command_runner.go:130] ! I0318 12:59:30.281819       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:05.214516    2404 command_runner.go:130] ! I0318 12:59:30.281824       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:05.214611    2404 command_runner.go:130] ! I0318 12:59:30.282361       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:11:05.214611    2404 command_runner.go:130] ! I0318 12:59:30.282392       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:11:05.214611    2404 command_runner.go:130] ! I0318 12:59:40.291176       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:05.214611    2404 command_runner.go:130] ! I0318 12:59:40.291304       1 main.go:227] handling current node
	I0318 13:11:05.214611    2404 command_runner.go:130] ! I0318 12:59:40.291321       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:05.214611    2404 command_runner.go:130] ! I0318 12:59:40.291328       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:05.214611    2404 command_runner.go:130] ! I0318 12:59:40.291827       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:11:05.214611    2404 command_runner.go:130] ! I0318 12:59:40.291857       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:11:05.214713    2404 command_runner.go:130] ! I0318 12:59:50.303374       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:05.214713    2404 command_runner.go:130] ! I0318 12:59:50.303454       1 main.go:227] handling current node
	I0318 13:11:05.214713    2404 command_runner.go:130] ! I0318 12:59:50.303468       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:05.214713    2404 command_runner.go:130] ! I0318 12:59:50.303476       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:05.214713    2404 command_runner.go:130] ! I0318 12:59:50.303974       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:11:05.214713    2404 command_runner.go:130] ! I0318 12:59:50.304002       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:11:05.214713    2404 command_runner.go:130] ! I0318 13:00:00.311317       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:05.214713    2404 command_runner.go:130] ! I0318 13:00:00.311423       1 main.go:227] handling current node
	I0318 13:11:05.214713    2404 command_runner.go:130] ! I0318 13:00:00.311441       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:05.214897    2404 command_runner.go:130] ! I0318 13:00:00.311449       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:05.214897    2404 command_runner.go:130] ! I0318 13:00:00.312039       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:11:05.214897    2404 command_runner.go:130] ! I0318 13:00:00.312135       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:11:05.214897    2404 command_runner.go:130] ! I0318 13:00:10.324823       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:05.214897    2404 command_runner.go:130] ! I0318 13:00:10.324902       1 main.go:227] handling current node
	I0318 13:11:05.214897    2404 command_runner.go:130] ! I0318 13:00:10.324915       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:05.214897    2404 command_runner.go:130] ! I0318 13:00:10.324926       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:05.214897    2404 command_runner.go:130] ! I0318 13:00:10.325084       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:11:05.214897    2404 command_runner.go:130] ! I0318 13:00:10.325108       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:11:05.214897    2404 command_runner.go:130] ! I0318 13:00:20.338195       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:05.214897    2404 command_runner.go:130] ! I0318 13:00:20.338297       1 main.go:227] handling current node
	I0318 13:11:05.214897    2404 command_runner.go:130] ! I0318 13:00:20.338312       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:05.214897    2404 command_runner.go:130] ! I0318 13:00:20.338320       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:05.214897    2404 command_runner.go:130] ! I0318 13:00:20.338525       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:11:05.215128    2404 command_runner.go:130] ! I0318 13:00:20.338601       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:11:05.215128    2404 command_runner.go:130] ! I0318 13:00:30.345095       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:05.215229    2404 command_runner.go:130] ! I0318 13:00:30.345184       1 main.go:227] handling current node
	I0318 13:11:05.215349    2404 command_runner.go:130] ! I0318 13:00:30.345198       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:05.215349    2404 command_runner.go:130] ! I0318 13:00:30.345205       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:05.215349    2404 command_runner.go:130] ! I0318 13:00:30.346074       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:11:05.215349    2404 command_runner.go:130] ! I0318 13:00:30.346194       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:11:05.215349    2404 command_runner.go:130] ! I0318 13:00:40.357007       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:05.215349    2404 command_runner.go:130] ! I0318 13:00:40.357386       1 main.go:227] handling current node
	I0318 13:11:05.215349    2404 command_runner.go:130] ! I0318 13:00:40.357485       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:05.215349    2404 command_runner.go:130] ! I0318 13:00:40.357513       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:05.215349    2404 command_runner.go:130] ! I0318 13:00:40.357737       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:11:05.215349    2404 command_runner.go:130] ! I0318 13:00:40.357766       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:11:05.215349    2404 command_runner.go:130] ! I0318 13:00:50.372182       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:05.215349    2404 command_runner.go:130] ! I0318 13:00:50.372221       1 main.go:227] handling current node
	I0318 13:11:05.215349    2404 command_runner.go:130] ! I0318 13:00:50.372235       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:05.215349    2404 command_runner.go:130] ! I0318 13:00:50.372242       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:05.215349    2404 command_runner.go:130] ! I0318 13:00:50.372608       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:11:05.215349    2404 command_runner.go:130] ! I0318 13:00:50.372772       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:11:05.215349    2404 command_runner.go:130] ! I0318 13:01:00.386990       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:05.215602    2404 command_runner.go:130] ! I0318 13:01:00.387036       1 main.go:227] handling current node
	I0318 13:11:05.215602    2404 command_runner.go:130] ! I0318 13:01:00.387050       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:05.215602    2404 command_runner.go:130] ! I0318 13:01:00.387058       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:05.215602    2404 command_runner.go:130] ! I0318 13:01:00.387182       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:11:05.215716    2404 command_runner.go:130] ! I0318 13:01:00.387191       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:11:05.215716    2404 command_runner.go:130] ! I0318 13:01:10.396889       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:05.215716    2404 command_runner.go:130] ! I0318 13:01:10.396930       1 main.go:227] handling current node
	I0318 13:11:05.215749    2404 command_runner.go:130] ! I0318 13:01:10.396942       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:05.215781    2404 command_runner.go:130] ! I0318 13:01:10.396948       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:05.215781    2404 command_runner.go:130] ! I0318 13:01:10.397250       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:11:05.215781    2404 command_runner.go:130] ! I0318 13:01:10.397343       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:11:05.215781    2404 command_runner.go:130] ! I0318 13:01:20.413272       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:05.215781    2404 command_runner.go:130] ! I0318 13:01:20.413371       1 main.go:227] handling current node
	I0318 13:11:05.215781    2404 command_runner.go:130] ! I0318 13:01:20.413386       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:05.215848    2404 command_runner.go:130] ! I0318 13:01:20.413395       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:05.215848    2404 command_runner.go:130] ! I0318 13:01:20.413968       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:11:05.215848    2404 command_runner.go:130] ! I0318 13:01:20.413999       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:11:05.215848    2404 command_runner.go:130] ! I0318 13:01:30.429160       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:05.215848    2404 command_runner.go:130] ! I0318 13:01:30.429478       1 main.go:227] handling current node
	I0318 13:11:05.215922    2404 command_runner.go:130] ! I0318 13:01:30.429549       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:05.215922    2404 command_runner.go:130] ! I0318 13:01:30.429678       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:05.215999    2404 command_runner.go:130] ! I0318 13:01:30.429960       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:11:05.215999    2404 command_runner.go:130] ! I0318 13:01:30.430034       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:11:05.215999    2404 command_runner.go:130] ! I0318 13:01:40.436733       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:05.215999    2404 command_runner.go:130] ! I0318 13:01:40.436839       1 main.go:227] handling current node
	I0318 13:11:05.215999    2404 command_runner.go:130] ! I0318 13:01:40.436901       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:05.215999    2404 command_runner.go:130] ! I0318 13:01:40.436930       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:05.215999    2404 command_runner.go:130] ! I0318 13:01:40.437399       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:11:05.216090    2404 command_runner.go:130] ! I0318 13:01:40.437431       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:11:05.216090    2404 command_runner.go:130] ! I0318 13:01:50.451622       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:05.216121    2404 command_runner.go:130] ! I0318 13:01:50.451802       1 main.go:227] handling current node
	I0318 13:11:05.216121    2404 command_runner.go:130] ! I0318 13:01:50.451849       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:05.216148    2404 command_runner.go:130] ! I0318 13:01:50.451860       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:05.216148    2404 command_runner.go:130] ! I0318 13:01:50.452021       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:11:05.216148    2404 command_runner.go:130] ! I0318 13:01:50.452171       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:11:05.216148    2404 command_runner.go:130] ! I0318 13:02:00.460452       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:05.216148    2404 command_runner.go:130] ! I0318 13:02:00.460548       1 main.go:227] handling current node
	I0318 13:11:05.216148    2404 command_runner.go:130] ! I0318 13:02:00.460563       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:05.216148    2404 command_runner.go:130] ! I0318 13:02:00.460571       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:05.216148    2404 command_runner.go:130] ! I0318 13:02:00.461181       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:11:05.216148    2404 command_runner.go:130] ! I0318 13:02:00.461333       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:11:05.216148    2404 command_runner.go:130] ! I0318 13:02:10.474274       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:05.216148    2404 command_runner.go:130] ! I0318 13:02:10.474396       1 main.go:227] handling current node
	I0318 13:11:05.216148    2404 command_runner.go:130] ! I0318 13:02:10.474427       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:05.216148    2404 command_runner.go:130] ! I0318 13:02:10.474436       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:05.216148    2404 command_runner.go:130] ! I0318 13:02:10.475019       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:11:05.216148    2404 command_runner.go:130] ! I0318 13:02:10.475159       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:11:05.216148    2404 command_runner.go:130] ! I0318 13:02:20.489442       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:05.216148    2404 command_runner.go:130] ! I0318 13:02:20.489616       1 main.go:227] handling current node
	I0318 13:11:05.216148    2404 command_runner.go:130] ! I0318 13:02:20.489699       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:05.216148    2404 command_runner.go:130] ! I0318 13:02:20.489752       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:05.216148    2404 command_runner.go:130] ! I0318 13:02:20.490046       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:11:05.216148    2404 command_runner.go:130] ! I0318 13:02:20.490082       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:11:05.216148    2404 command_runner.go:130] ! I0318 13:02:30.497474       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:05.216148    2404 command_runner.go:130] ! I0318 13:02:30.497574       1 main.go:227] handling current node
	I0318 13:11:05.216148    2404 command_runner.go:130] ! I0318 13:02:30.497589       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:05.216148    2404 command_runner.go:130] ! I0318 13:02:30.497597       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:05.216148    2404 command_runner.go:130] ! I0318 13:02:30.498279       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:11:05.216148    2404 command_runner.go:130] ! I0318 13:02:30.498361       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:11:05.216148    2404 command_runner.go:130] ! I0318 13:02:40.512026       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:05.216148    2404 command_runner.go:130] ! I0318 13:02:40.512345       1 main.go:227] handling current node
	I0318 13:11:05.216148    2404 command_runner.go:130] ! I0318 13:02:40.512385       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:05.216148    2404 command_runner.go:130] ! I0318 13:02:40.512477       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:05.216148    2404 command_runner.go:130] ! I0318 13:02:40.512786       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:11:05.216148    2404 command_runner.go:130] ! I0318 13:02:40.512873       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:11:05.216148    2404 command_runner.go:130] ! I0318 13:02:50.520110       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:05.216148    2404 command_runner.go:130] ! I0318 13:02:50.520239       1 main.go:227] handling current node
	I0318 13:11:05.216148    2404 command_runner.go:130] ! I0318 13:02:50.520254       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:05.216148    2404 command_runner.go:130] ! I0318 13:02:50.520263       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:05.216148    2404 command_runner.go:130] ! I0318 13:02:50.520784       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:11:05.216148    2404 command_runner.go:130] ! I0318 13:02:50.520861       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:11:05.216148    2404 command_runner.go:130] ! I0318 13:03:00.531866       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:05.216148    2404 command_runner.go:130] ! I0318 13:03:00.531958       1 main.go:227] handling current node
	I0318 13:11:05.216731    2404 command_runner.go:130] ! I0318 13:03:00.531972       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:05.216731    2404 command_runner.go:130] ! I0318 13:03:00.531979       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:05.216731    2404 command_runner.go:130] ! I0318 13:03:00.532211       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:11:05.216731    2404 command_runner.go:130] ! I0318 13:03:00.532293       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:11:05.216731    2404 command_runner.go:130] ! I0318 13:03:10.543869       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:05.216731    2404 command_runner.go:130] ! I0318 13:03:10.543913       1 main.go:227] handling current node
	I0318 13:11:05.216731    2404 command_runner.go:130] ! I0318 13:03:10.543926       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:05.216731    2404 command_runner.go:130] ! I0318 13:03:10.543933       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:05.216823    2404 command_runner.go:130] ! I0318 13:03:10.544294       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:11:05.216823    2404 command_runner.go:130] ! I0318 13:03:10.544430       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:11:05.216823    2404 command_runner.go:130] ! I0318 13:03:20.558742       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:05.216823    2404 command_runner.go:130] ! I0318 13:03:20.558782       1 main.go:227] handling current node
	I0318 13:11:05.216823    2404 command_runner.go:130] ! I0318 13:03:20.558795       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:05.216878    2404 command_runner.go:130] ! I0318 13:03:20.558802       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:05.216878    2404 command_runner.go:130] ! I0318 13:03:20.558992       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:11:05.216878    2404 command_runner.go:130] ! I0318 13:03:20.559009       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:11:05.216878    2404 command_runner.go:130] ! I0318 13:03:30.568771       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:05.216878    2404 command_runner.go:130] ! I0318 13:03:30.568872       1 main.go:227] handling current node
	I0318 13:11:05.216878    2404 command_runner.go:130] ! I0318 13:03:30.568905       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:05.216878    2404 command_runner.go:130] ! I0318 13:03:30.568996       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:05.216878    2404 command_runner.go:130] ! I0318 13:03:30.569367       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:11:05.216878    2404 command_runner.go:130] ! I0318 13:03:30.569450       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:11:05.216878    2404 command_runner.go:130] ! I0318 13:03:40.587554       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:05.216878    2404 command_runner.go:130] ! I0318 13:03:40.587674       1 main.go:227] handling current node
	I0318 13:11:05.216878    2404 command_runner.go:130] ! I0318 13:03:40.588337       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:05.216878    2404 command_runner.go:130] ! I0318 13:03:40.588356       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:05.216878    2404 command_runner.go:130] ! I0318 13:03:40.588758       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:11:05.216878    2404 command_runner.go:130] ! I0318 13:03:40.588836       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:11:05.216878    2404 command_runner.go:130] ! I0318 13:03:50.596331       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:05.216878    2404 command_runner.go:130] ! I0318 13:03:50.596438       1 main.go:227] handling current node
	I0318 13:11:05.216878    2404 command_runner.go:130] ! I0318 13:03:50.596453       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:05.216878    2404 command_runner.go:130] ! I0318 13:03:50.596462       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:05.216878    2404 command_runner.go:130] ! I0318 13:03:50.596942       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:11:05.216878    2404 command_runner.go:130] ! I0318 13:03:50.597079       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:11:05.216878    2404 command_runner.go:130] ! I0318 13:04:00.611242       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:05.216878    2404 command_runner.go:130] ! I0318 13:04:00.611383       1 main.go:227] handling current node
	I0318 13:11:05.216878    2404 command_runner.go:130] ! I0318 13:04:00.611397       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:05.216878    2404 command_runner.go:130] ! I0318 13:04:00.611405       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:05.216878    2404 command_runner.go:130] ! I0318 13:04:00.611541       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:11:05.216878    2404 command_runner.go:130] ! I0318 13:04:00.611572       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:11:05.216878    2404 command_runner.go:130] ! I0318 13:04:10.624814       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:05.216878    2404 command_runner.go:130] ! I0318 13:04:10.624904       1 main.go:227] handling current node
	I0318 13:11:05.216878    2404 command_runner.go:130] ! I0318 13:04:10.624920       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:05.216878    2404 command_runner.go:130] ! I0318 13:04:10.624927       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:05.216878    2404 command_runner.go:130] ! I0318 13:04:10.625504       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:11:05.216878    2404 command_runner.go:130] ! I0318 13:04:10.625547       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:11:05.216878    2404 command_runner.go:130] ! I0318 13:04:20.640319       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:05.216878    2404 command_runner.go:130] ! I0318 13:04:20.640364       1 main.go:227] handling current node
	I0318 13:11:05.216878    2404 command_runner.go:130] ! I0318 13:04:20.640379       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:05.216878    2404 command_runner.go:130] ! I0318 13:04:20.640386       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:05.216878    2404 command_runner.go:130] ! I0318 13:04:20.640865       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:11:05.216878    2404 command_runner.go:130] ! I0318 13:04:20.640901       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:11:05.216878    2404 command_runner.go:130] ! I0318 13:04:30.648021       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:05.216878    2404 command_runner.go:130] ! I0318 13:04:30.648134       1 main.go:227] handling current node
	I0318 13:11:05.216878    2404 command_runner.go:130] ! I0318 13:04:30.648148       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:05.216878    2404 command_runner.go:130] ! I0318 13:04:30.648156       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:05.216878    2404 command_runner.go:130] ! I0318 13:04:30.648313       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:11:05.216878    2404 command_runner.go:130] ! I0318 13:04:30.648344       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:11:05.216878    2404 command_runner.go:130] ! I0318 13:04:40.663577       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:05.216878    2404 command_runner.go:130] ! I0318 13:04:40.663749       1 main.go:227] handling current node
	I0318 13:11:05.216878    2404 command_runner.go:130] ! I0318 13:04:40.663765       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:05.216878    2404 command_runner.go:130] ! I0318 13:04:40.663774       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:05.216878    2404 command_runner.go:130] ! I0318 13:04:40.663896       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:11:05.216878    2404 command_runner.go:130] ! I0318 13:04:40.663929       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:11:05.216878    2404 command_runner.go:130] ! I0318 13:04:50.669717       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:05.217467    2404 command_runner.go:130] ! I0318 13:04:50.669791       1 main.go:227] handling current node
	I0318 13:11:05.217467    2404 command_runner.go:130] ! I0318 13:04:50.669805       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:05.217467    2404 command_runner.go:130] ! I0318 13:04:50.669812       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:05.217467    2404 command_runner.go:130] ! I0318 13:04:50.670128       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:11:05.217467    2404 command_runner.go:130] ! I0318 13:04:50.670230       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:11:05.217467    2404 command_runner.go:130] ! I0318 13:05:00.686596       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:05.217467    2404 command_runner.go:130] ! I0318 13:05:00.686809       1 main.go:227] handling current node
	I0318 13:11:05.217467    2404 command_runner.go:130] ! I0318 13:05:00.686942       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:05.217580    2404 command_runner.go:130] ! I0318 13:05:00.687116       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:05.217614    2404 command_runner.go:130] ! I0318 13:05:00.687370       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:11:05.217614    2404 command_runner.go:130] ! I0318 13:05:00.687441       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:11:05.217614    2404 command_runner.go:130] ! I0318 13:05:10.704297       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:05.217614    2404 command_runner.go:130] ! I0318 13:05:10.704404       1 main.go:227] handling current node
	I0318 13:11:05.217614    2404 command_runner.go:130] ! I0318 13:05:10.704426       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:05.217614    2404 command_runner.go:130] ! I0318 13:05:10.704555       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:05.217614    2404 command_runner.go:130] ! I0318 13:05:10.704810       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:11:05.217614    2404 command_runner.go:130] ! I0318 13:05:10.704878       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:11:05.217614    2404 command_runner.go:130] ! I0318 13:05:20.722958       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:05.217614    2404 command_runner.go:130] ! I0318 13:05:20.723127       1 main.go:227] handling current node
	I0318 13:11:05.217614    2404 command_runner.go:130] ! I0318 13:05:20.723145       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:05.217614    2404 command_runner.go:130] ! I0318 13:05:20.723159       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:05.217614    2404 command_runner.go:130] ! I0318 13:05:30.731764       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:05.217614    2404 command_runner.go:130] ! I0318 13:05:30.731841       1 main.go:227] handling current node
	I0318 13:11:05.217614    2404 command_runner.go:130] ! I0318 13:05:30.731854       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:05.217614    2404 command_runner.go:130] ! I0318 13:05:30.731861       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:05.217614    2404 command_runner.go:130] ! I0318 13:05:30.732029       1 main.go:223] Handling node with IPs: map[172.30.137.140:{}]
	I0318 13:11:05.217614    2404 command_runner.go:130] ! I0318 13:05:30.732163       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.3.0/24] 
	I0318 13:11:05.217614    2404 command_runner.go:130] ! I0318 13:05:30.732544       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 172.30.137.140 Flags: [] Table: 0} 
	I0318 13:11:05.217614    2404 command_runner.go:130] ! I0318 13:05:40.739849       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:05.217614    2404 command_runner.go:130] ! I0318 13:05:40.739939       1 main.go:227] handling current node
	I0318 13:11:05.217614    2404 command_runner.go:130] ! I0318 13:05:40.739953       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:05.217614    2404 command_runner.go:130] ! I0318 13:05:40.739960       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:05.217614    2404 command_runner.go:130] ! I0318 13:05:40.740081       1 main.go:223] Handling node with IPs: map[172.30.137.140:{}]
	I0318 13:11:05.217614    2404 command_runner.go:130] ! I0318 13:05:40.740151       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.3.0/24] 
	I0318 13:11:05.217614    2404 command_runner.go:130] ! I0318 13:05:50.748036       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:05.217614    2404 command_runner.go:130] ! I0318 13:05:50.748465       1 main.go:227] handling current node
	I0318 13:11:05.217614    2404 command_runner.go:130] ! I0318 13:05:50.748942       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:05.217614    2404 command_runner.go:130] ! I0318 13:05:50.749055       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:05.217614    2404 command_runner.go:130] ! I0318 13:05:50.749287       1 main.go:223] Handling node with IPs: map[172.30.137.140:{}]
	I0318 13:11:05.217614    2404 command_runner.go:130] ! I0318 13:05:50.749413       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.3.0/24] 
	I0318 13:11:05.217614    2404 command_runner.go:130] ! I0318 13:06:00.757350       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:05.217614    2404 command_runner.go:130] ! I0318 13:06:00.757434       1 main.go:227] handling current node
	I0318 13:11:05.217614    2404 command_runner.go:130] ! I0318 13:06:00.757452       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:05.217614    2404 command_runner.go:130] ! I0318 13:06:00.757460       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:05.217614    2404 command_runner.go:130] ! I0318 13:06:00.757853       1 main.go:223] Handling node with IPs: map[172.30.137.140:{}]
	I0318 13:11:05.217614    2404 command_runner.go:130] ! I0318 13:06:00.758194       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.3.0/24] 
	I0318 13:11:05.217614    2404 command_runner.go:130] ! I0318 13:06:10.766768       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:05.217614    2404 command_runner.go:130] ! I0318 13:06:10.766886       1 main.go:227] handling current node
	I0318 13:11:05.217614    2404 command_runner.go:130] ! I0318 13:06:10.766901       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:05.217614    2404 command_runner.go:130] ! I0318 13:06:10.766910       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:05.217614    2404 command_runner.go:130] ! I0318 13:06:10.767143       1 main.go:223] Handling node with IPs: map[172.30.137.140:{}]
	I0318 13:11:05.217614    2404 command_runner.go:130] ! I0318 13:06:10.767175       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.3.0/24] 
	I0318 13:11:05.217614    2404 command_runner.go:130] ! I0318 13:06:20.773530       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:05.217614    2404 command_runner.go:130] ! I0318 13:06:20.773656       1 main.go:227] handling current node
	I0318 13:11:05.217614    2404 command_runner.go:130] ! I0318 13:06:20.773729       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:05.217614    2404 command_runner.go:130] ! I0318 13:06:20.773741       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:05.218212    2404 command_runner.go:130] ! I0318 13:06:20.774155       1 main.go:223] Handling node with IPs: map[172.30.137.140:{}]
	I0318 13:11:05.218212    2404 command_runner.go:130] ! I0318 13:06:20.774478       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.3.0/24] 
	I0318 13:11:05.218212    2404 command_runner.go:130] ! I0318 13:06:30.792219       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:05.218212    2404 command_runner.go:130] ! I0318 13:06:30.792349       1 main.go:227] handling current node
	I0318 13:11:05.218274    2404 command_runner.go:130] ! I0318 13:06:30.792364       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:05.218274    2404 command_runner.go:130] ! I0318 13:06:30.792373       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:05.218274    2404 command_runner.go:130] ! I0318 13:06:30.792864       1 main.go:223] Handling node with IPs: map[172.30.137.140:{}]
	I0318 13:11:05.218274    2404 command_runner.go:130] ! I0318 13:06:30.792901       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.3.0/24] 
	I0318 13:11:05.218274    2404 command_runner.go:130] ! I0318 13:06:40.809219       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:05.218274    2404 command_runner.go:130] ! I0318 13:06:40.809451       1 main.go:227] handling current node
	I0318 13:11:05.218374    2404 command_runner.go:130] ! I0318 13:06:40.809484       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:05.218374    2404 command_runner.go:130] ! I0318 13:06:40.809508       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:05.218450    2404 command_runner.go:130] ! I0318 13:06:40.809841       1 main.go:223] Handling node with IPs: map[172.30.137.140:{}]
	I0318 13:11:05.218478    2404 command_runner.go:130] ! I0318 13:06:40.810075       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.3.0/24] 
	I0318 13:11:05.218478    2404 command_runner.go:130] ! I0318 13:06:50.822556       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:05.218478    2404 command_runner.go:130] ! I0318 13:06:50.822612       1 main.go:227] handling current node
	I0318 13:11:05.218478    2404 command_runner.go:130] ! I0318 13:06:50.822667       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:05.218478    2404 command_runner.go:130] ! I0318 13:06:50.822680       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:05.218478    2404 command_runner.go:130] ! I0318 13:06:50.822925       1 main.go:223] Handling node with IPs: map[172.30.137.140:{}]
	I0318 13:11:05.218478    2404 command_runner.go:130] ! I0318 13:06:50.823171       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.3.0/24] 
	I0318 13:11:05.218478    2404 command_runner.go:130] ! I0318 13:07:00.837923       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:05.218478    2404 command_runner.go:130] ! I0318 13:07:00.838008       1 main.go:227] handling current node
	I0318 13:11:05.218478    2404 command_runner.go:130] ! I0318 13:07:00.838022       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:05.218478    2404 command_runner.go:130] ! I0318 13:07:00.838030       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:05.218478    2404 command_runner.go:130] ! I0318 13:07:00.838429       1 main.go:223] Handling node with IPs: map[172.30.137.140:{}]
	I0318 13:11:05.218478    2404 command_runner.go:130] ! I0318 13:07:00.838666       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.3.0/24] 
	I0318 13:11:05.218478    2404 command_runner.go:130] ! I0318 13:07:10.854207       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:05.218478    2404 command_runner.go:130] ! I0318 13:07:10.854411       1 main.go:227] handling current node
	I0318 13:11:05.218478    2404 command_runner.go:130] ! I0318 13:07:10.854444       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:05.218478    2404 command_runner.go:130] ! I0318 13:07:10.854469       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:05.218478    2404 command_runner.go:130] ! I0318 13:07:10.854879       1 main.go:223] Handling node with IPs: map[172.30.137.140:{}]
	I0318 13:11:05.218478    2404 command_runner.go:130] ! I0318 13:07:10.855094       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.3.0/24] 
	I0318 13:11:05.218478    2404 command_runner.go:130] ! I0318 13:07:20.861534       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:05.218478    2404 command_runner.go:130] ! I0318 13:07:20.861671       1 main.go:227] handling current node
	I0318 13:11:05.218478    2404 command_runner.go:130] ! I0318 13:07:20.861685       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:05.218478    2404 command_runner.go:130] ! I0318 13:07:20.861692       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:05.218478    2404 command_runner.go:130] ! I0318 13:07:20.861818       1 main.go:223] Handling node with IPs: map[172.30.137.140:{}]
	I0318 13:11:05.218478    2404 command_runner.go:130] ! I0318 13:07:20.861845       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.3.0/24] 
	I0318 13:11:05.234629    2404 logs.go:123] Gathering logs for container status ...
	I0318 13:11:05.234629    2404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:11:05.334578    2404 command_runner.go:130] > CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	I0318 13:11:05.334636    2404 command_runner.go:130] > c5d2074be239f       8c811b4aec35f                                                                                         12 seconds ago       Running             busybox                   1                   e20878b8092c2       busybox-5b5d89c9d6-c2997
	I0318 13:11:05.334636    2404 command_runner.go:130] > 3c3bc988c74cd       ead0a4a53df89                                                                                         12 seconds ago       Running             coredns                   1                   97583cc14f115       coredns-5dd5756b68-456tm
	I0318 13:11:05.334731    2404 command_runner.go:130] > eadcf41dad509       6e38f40d628db                                                                                         30 seconds ago       Running             storage-provisioner       2                   41035eff3b7db       storage-provisioner
	I0318 13:11:05.334765    2404 command_runner.go:130] > c8e5ec25e910e       4950bb10b3f87                                                                                         About a minute ago   Running             kindnet-cni               1                   86d74dec812cf       kindnet-hhsxh
	I0318 13:11:05.334765    2404 command_runner.go:130] > 46c0cf90d385f       6e38f40d628db                                                                                         About a minute ago   Exited              storage-provisioner       1                   41035eff3b7db       storage-provisioner
	I0318 13:11:05.334765    2404 command_runner.go:130] > 163ccabc3882a       83f6cc407eed8                                                                                         About a minute ago   Running             kube-proxy                1                   a9f21749669fe       kube-proxy-mc5tv
	I0318 13:11:05.334821    2404 command_runner.go:130] > 5f0887d1e6913       73deb9a3f7025                                                                                         About a minute ago   Running             etcd                      0                   354f3c44a34fc       etcd-multinode-894400
	I0318 13:11:05.334821    2404 command_runner.go:130] > 66ee8be9fada7       e3db313c6dbc0                                                                                         About a minute ago   Running             kube-scheduler            1                   6fb3325d3c100       kube-scheduler-multinode-894400
	I0318 13:11:05.334873    2404 command_runner.go:130] > fc4430c7fa204       7fe0e6f37db33                                                                                         About a minute ago   Running             kube-apiserver            0                   bc7236a19957e       kube-apiserver-multinode-894400
	I0318 13:11:05.334873    2404 command_runner.go:130] > 4ad6784a187d6       d058aa5ab969c                                                                                         About a minute ago   Running             kube-controller-manager   1                   066206d4c52cb       kube-controller-manager-multinode-894400
	I0318 13:11:05.334873    2404 command_runner.go:130] > dd031b5cb1e85       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   19 minutes ago       Exited              busybox                   0                   a23c1189be7c3       busybox-5b5d89c9d6-c2997
	I0318 13:11:05.334873    2404 command_runner.go:130] > 693a64f7472fd       ead0a4a53df89                                                                                         23 minutes ago       Exited              coredns                   0                   d001e299e996b       coredns-5dd5756b68-456tm
	I0318 13:11:05.334873    2404 command_runner.go:130] > c4d7018ad23a7       kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988              23 minutes ago       Exited              kindnet-cni               0                   a47b1fb60692c       kindnet-hhsxh
	I0318 13:11:05.334979    2404 command_runner.go:130] > 9335855aab63d       83f6cc407eed8                                                                                         23 minutes ago       Exited              kube-proxy                0                   60e9cd749c8f6       kube-proxy-mc5tv
	I0318 13:11:05.334979    2404 command_runner.go:130] > e4d42739ce0e9       e3db313c6dbc0                                                                                         23 minutes ago       Exited              kube-scheduler            0                   82710777e700c       kube-scheduler-multinode-894400
	I0318 13:11:05.335046    2404 command_runner.go:130] > 7aa5cf4ec378e       d058aa5ab969c                                                                                         23 minutes ago       Exited              kube-controller-manager   0                   5485f509825d9       kube-controller-manager-multinode-894400
	I0318 13:11:07.839606    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/namespaces/kube-system/pods
	I0318 13:11:07.839606    2404 round_trippers.go:469] Request Headers:
	I0318 13:11:07.839606    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:11:07.839606    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:11:07.845393    2404 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 13:11:07.845510    2404 round_trippers.go:577] Response Headers:
	I0318 13:11:07.845510    2404 round_trippers.go:580]     Audit-Id: d0e9f04a-9114-4987-87c5-0da78416c885
	I0318 13:11:07.845510    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:11:07.845510    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:11:07.845510    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:11:07.845510    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:11:07.845510    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:11:07 GMT
	I0318 13:11:07.847055    2404 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1931"},"items":[{"metadata":{"name":"coredns-5dd5756b68-456tm","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"1a018c55-846b-4dc2-992c-dc8fd82a6c67","resourceVersion":"1918","creationTimestamp":"2024-03-18T12:47:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"24e9a67e-3fda-47e7-8dcb-794b1920d0f1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:47:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"24e9a67e-3fda-47e7-8dcb-794b1920d0f1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 83055 chars]
	I0318 13:11:07.850800    2404 system_pods.go:59] 12 kube-system pods found
	I0318 13:11:07.850800    2404 system_pods.go:61] "coredns-5dd5756b68-456tm" [1a018c55-846b-4dc2-992c-dc8fd82a6c67] Running
	I0318 13:11:07.850800    2404 system_pods.go:61] "etcd-multinode-894400" [d4c040b9-a604-4a0d-80ee-7436541af60c] Running
	I0318 13:11:07.850800    2404 system_pods.go:61] "kindnet-hhsxh" [0161d239-2d85-4246-b2fa-6c7374f2ecd6] Running
	I0318 13:11:07.850800    2404 system_pods.go:61] "kindnet-k5lpg" [c5e4099b-0611-4ebd-a7a5-ecdbeb168c5b] Running
	I0318 13:11:07.850800    2404 system_pods.go:61] "kindnet-zv9tv" [c4d70517-d7fb-4344-b2a4-20e40c13ab53] Running
	I0318 13:11:07.850800    2404 system_pods.go:61] "kube-apiserver-multinode-894400" [46152b8e-0bda-427e-a1ad-c79506b56763] Running
	I0318 13:11:07.850800    2404 system_pods.go:61] "kube-controller-manager-multinode-894400" [4ad5fc15-53ba-4ebb-9a63-b8572cd9c834] Running
	I0318 13:11:07.850800    2404 system_pods.go:61] "kube-proxy-745w9" [d385fe06-f516-440d-b9ed-37c2d4a81050] Running
	I0318 13:11:07.850800    2404 system_pods.go:61] "kube-proxy-8bdmn" [5c266b8a-9665-4365-93c6-2b5f1699d3ef] Running
	I0318 13:11:07.850800    2404 system_pods.go:61] "kube-proxy-mc5tv" [0afe25f8-cbd6-412b-8698-7b547d1d49ca] Running
	I0318 13:11:07.850800    2404 system_pods.go:61] "kube-scheduler-multinode-894400" [f47703ce-5a82-466e-ac8e-ef6b8cc07e6c] Running
	I0318 13:11:07.850800    2404 system_pods.go:61] "storage-provisioner" [219bafbc-d807-44cf-9927-e4957f36ad70] Running
	I0318 13:11:07.850800    2404 system_pods.go:74] duration metric: took 3.7252527s to wait for pod list to return data ...
	I0318 13:11:07.850800    2404 default_sa.go:34] waiting for default service account to be created ...
	I0318 13:11:07.850991    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/namespaces/default/serviceaccounts
	I0318 13:11:07.851068    2404 round_trippers.go:469] Request Headers:
	I0318 13:11:07.851068    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:11:07.851068    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:11:07.857488    2404 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0318 13:11:07.857488    2404 round_trippers.go:577] Response Headers:
	I0318 13:11:07.857488    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:11:07.857488    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:11:07.857488    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:11:07.857488    2404 round_trippers.go:580]     Content-Length: 262
	I0318 13:11:07.857488    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:11:07 GMT
	I0318 13:11:07.857488    2404 round_trippers.go:580]     Audit-Id: b40916a2-07b4-4244-b553-de14255fe242
	I0318 13:11:07.857488    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:11:07.857488    2404 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"1931"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"17315183-b28f-4dc0-9fbf-c6e55ed5b7f0","resourceVersion":"330","creationTimestamp":"2024-03-18T12:47:41Z"}}]}
	I0318 13:11:07.857488    2404 default_sa.go:45] found service account: "default"
	I0318 13:11:07.857488    2404 default_sa.go:55] duration metric: took 6.6882ms for default service account to be created ...
	I0318 13:11:07.857488    2404 system_pods.go:116] waiting for k8s-apps to be running ...
	I0318 13:11:07.858086    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/namespaces/kube-system/pods
	I0318 13:11:07.858086    2404 round_trippers.go:469] Request Headers:
	I0318 13:11:07.858086    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:11:07.858086    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:11:07.863358    2404 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 13:11:07.863825    2404 round_trippers.go:577] Response Headers:
	I0318 13:11:07.863825    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:11:07 GMT
	I0318 13:11:07.863825    2404 round_trippers.go:580]     Audit-Id: c094dc2d-1736-42a9-99fa-0e52616eb725
	I0318 13:11:07.863825    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:11:07.863825    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:11:07.863825    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:11:07.863825    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:11:07.865577    2404 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1931"},"items":[{"metadata":{"name":"coredns-5dd5756b68-456tm","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"1a018c55-846b-4dc2-992c-dc8fd82a6c67","resourceVersion":"1918","creationTimestamp":"2024-03-18T12:47:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"24e9a67e-3fda-47e7-8dcb-794b1920d0f1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:47:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"24e9a67e-3fda-47e7-8dcb-794b1920d0f1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 83055 chars]
	I0318 13:11:07.868850    2404 system_pods.go:86] 12 kube-system pods found
	I0318 13:11:07.868872    2404 system_pods.go:89] "coredns-5dd5756b68-456tm" [1a018c55-846b-4dc2-992c-dc8fd82a6c67] Running
	I0318 13:11:07.868872    2404 system_pods.go:89] "etcd-multinode-894400" [d4c040b9-a604-4a0d-80ee-7436541af60c] Running
	I0318 13:11:07.868872    2404 system_pods.go:89] "kindnet-hhsxh" [0161d239-2d85-4246-b2fa-6c7374f2ecd6] Running
	I0318 13:11:07.868872    2404 system_pods.go:89] "kindnet-k5lpg" [c5e4099b-0611-4ebd-a7a5-ecdbeb168c5b] Running
	I0318 13:11:07.868872    2404 system_pods.go:89] "kindnet-zv9tv" [c4d70517-d7fb-4344-b2a4-20e40c13ab53] Running
	I0318 13:11:07.868872    2404 system_pods.go:89] "kube-apiserver-multinode-894400" [46152b8e-0bda-427e-a1ad-c79506b56763] Running
	I0318 13:11:07.868918    2404 system_pods.go:89] "kube-controller-manager-multinode-894400" [4ad5fc15-53ba-4ebb-9a63-b8572cd9c834] Running
	I0318 13:11:07.868918    2404 system_pods.go:89] "kube-proxy-745w9" [d385fe06-f516-440d-b9ed-37c2d4a81050] Running
	I0318 13:11:07.868918    2404 system_pods.go:89] "kube-proxy-8bdmn" [5c266b8a-9665-4365-93c6-2b5f1699d3ef] Running
	I0318 13:11:07.868918    2404 system_pods.go:89] "kube-proxy-mc5tv" [0afe25f8-cbd6-412b-8698-7b547d1d49ca] Running
	I0318 13:11:07.868918    2404 system_pods.go:89] "kube-scheduler-multinode-894400" [f47703ce-5a82-466e-ac8e-ef6b8cc07e6c] Running
	I0318 13:11:07.868918    2404 system_pods.go:89] "storage-provisioner" [219bafbc-d807-44cf-9927-e4957f36ad70] Running
	I0318 13:11:07.869050    2404 system_pods.go:126] duration metric: took 11.4975ms to wait for k8s-apps to be running ...
	I0318 13:11:07.869050    2404 system_svc.go:44] waiting for kubelet service to be running ....
	I0318 13:11:07.881488    2404 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 13:11:07.904911    2404 system_svc.go:56] duration metric: took 35.587ms WaitForService to wait for kubelet
	I0318 13:11:07.904911    2404 kubeadm.go:576] duration metric: took 1m14.2461794s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 13:11:07.904975    2404 node_conditions.go:102] verifying NodePressure condition ...
	I0318 13:11:07.905063    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes
	I0318 13:11:07.905063    2404 round_trippers.go:469] Request Headers:
	I0318 13:11:07.905120    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:11:07.905120    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:11:07.908249    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:11:07.908249    2404 round_trippers.go:577] Response Headers:
	I0318 13:11:07.908249    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:11:07.908249    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:11:07.908249    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:11:07.908249    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:11:07 GMT
	I0318 13:11:07.908249    2404 round_trippers.go:580]     Audit-Id: 5d546d88-7103-4952-a0ed-39d5975946b7
	I0318 13:11:07.908249    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:11:07.909398    2404 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1931"},"items":[{"metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1877","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFi
elds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 16258 chars]
	I0318 13:11:07.910360    2404 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0318 13:11:07.910360    2404 node_conditions.go:123] node cpu capacity is 2
	I0318 13:11:07.910430    2404 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0318 13:11:07.910430    2404 node_conditions.go:123] node cpu capacity is 2
	I0318 13:11:07.910430    2404 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0318 13:11:07.910430    2404 node_conditions.go:123] node cpu capacity is 2
	I0318 13:11:07.910430    2404 node_conditions.go:105] duration metric: took 5.455ms to run NodePressure ...
	I0318 13:11:07.910430    2404 start.go:240] waiting for startup goroutines ...
	I0318 13:11:07.910430    2404 start.go:245] waiting for cluster config update ...
	I0318 13:11:07.910491    2404 start.go:254] writing updated cluster config ...
	I0318 13:11:07.915142    2404 out.go:177] 
	I0318 13:11:07.918608    2404 config.go:182] Loaded profile config "ha-747000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 13:11:07.925512    2404 config.go:182] Loaded profile config "multinode-894400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 13:11:07.925512    2404 profile.go:142] Saving config to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-894400\config.json ...
	I0318 13:11:07.931516    2404 out.go:177] * Starting "multinode-894400-m02" worker node in "multinode-894400" cluster
	I0318 13:11:07.934610    2404 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0318 13:11:07.934610    2404 cache.go:56] Caching tarball of preloaded images
	I0318 13:11:07.934610    2404 preload.go:173] Found C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0318 13:11:07.934610    2404 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0318 13:11:07.934610    2404 profile.go:142] Saving config to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-894400\config.json ...
	I0318 13:11:07.937263    2404 start.go:360] acquireMachinesLock for multinode-894400-m02: {Name:mk88ace50ad3bf72786f3a589a5328076247f3a1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 13:11:07.937840    2404 start.go:364] duration metric: took 577.7µs to acquireMachinesLock for "multinode-894400-m02"
	I0318 13:11:07.937999    2404 start.go:96] Skipping create...Using existing machine configuration
	I0318 13:11:07.938044    2404 fix.go:54] fixHost starting: m02
	I0318 13:11:07.938202    2404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-894400-m02 ).state
	I0318 13:11:09.994288    2404 main.go:141] libmachine: [stdout =====>] : Off
	
	I0318 13:11:09.994384    2404 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:11:09.994384    2404 fix.go:112] recreateIfNeeded on multinode-894400-m02: state=Stopped err=<nil>
	W0318 13:11:09.994384    2404 fix.go:138] unexpected machine state, will restart: <nil>
	I0318 13:11:09.997399    2404 out.go:177] * Restarting existing hyperv VM for "multinode-894400-m02" ...
	I0318 13:11:10.002328    2404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-894400-m02
	I0318 13:11:12.983495    2404 main.go:141] libmachine: [stdout =====>] : 
	I0318 13:11:12.983495    2404 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:11:12.983723    2404 main.go:141] libmachine: Waiting for host to start...
	I0318 13:11:12.983723    2404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-894400-m02 ).state
	I0318 13:11:15.174567    2404 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 13:11:15.174567    2404 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:11:15.174647    2404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-894400-m02 ).networkadapters[0]).ipaddresses[0]
	I0318 13:11:17.575407    2404 main.go:141] libmachine: [stdout =====>] : 
	I0318 13:11:17.576434    2404 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:11:18.590965    2404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-894400-m02 ).state
	I0318 13:11:20.787759    2404 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 13:11:20.787759    2404 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:11:20.787759    2404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-894400-m02 ).networkadapters[0]).ipaddresses[0]
	I0318 13:11:23.262491    2404 main.go:141] libmachine: [stdout =====>] : 
	I0318 13:11:23.262491    2404 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:11:24.268327    2404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-894400-m02 ).state
	I0318 13:11:26.368298    2404 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 13:11:26.369394    2404 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:11:26.369394    2404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-894400-m02 ).networkadapters[0]).ipaddresses[0]
	I0318 13:11:28.866699    2404 main.go:141] libmachine: [stdout =====>] : 
	I0318 13:11:28.866699    2404 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:11:29.869672    2404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-894400-m02 ).state
	I0318 13:11:31.986818    2404 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 13:11:31.987538    2404 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:11:31.987538    2404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-894400-m02 ).networkadapters[0]).ipaddresses[0]
	I0318 13:11:34.398016    2404 main.go:141] libmachine: [stdout =====>] : 
	I0318 13:11:34.398977    2404 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:11:35.407415    2404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-894400-m02 ).state
	I0318 13:11:37.472632    2404 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 13:11:37.472632    2404 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:11:37.472736    2404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-894400-m02 ).networkadapters[0]).ipaddresses[0]
	I0318 13:11:39.878741    2404 main.go:141] libmachine: [stdout =====>] : 172.30.130.185
	
	I0318 13:11:39.879447    2404 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:11:39.883040    2404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-894400-m02 ).state
	I0318 13:11:41.925033    2404 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 13:11:41.925033    2404 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:11:41.925756    2404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-894400-m02 ).networkadapters[0]).ipaddresses[0]
	I0318 13:11:44.357885    2404 main.go:141] libmachine: [stdout =====>] : 172.30.130.185
	
	I0318 13:11:44.357932    2404 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:11:44.358091    2404 profile.go:142] Saving config to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-894400\config.json ...
	I0318 13:11:44.360661    2404 machine.go:94] provisionDockerMachine start ...
	I0318 13:11:44.360661    2404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-894400-m02 ).state
	I0318 13:11:46.391305    2404 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 13:11:46.391305    2404 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:11:46.392143    2404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-894400-m02 ).networkadapters[0]).ipaddresses[0]
	I0318 13:11:48.854189    2404 main.go:141] libmachine: [stdout =====>] : 172.30.130.185
	
	I0318 13:11:48.855236    2404 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:11:48.861171    2404 main.go:141] libmachine: Using SSH client type: native
	I0318 13:11:48.861812    2404 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa89f80] 0xa8cb60 <nil>  [] 0s} 172.30.130.185 22 <nil> <nil>}
	I0318 13:11:48.861812    2404 main.go:141] libmachine: About to run SSH command:
	hostname
	I0318 13:11:48.982669    2404 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0318 13:11:48.982748    2404 buildroot.go:166] provisioning hostname "multinode-894400-m02"
	I0318 13:11:48.982748    2404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-894400-m02 ).state
	I0318 13:11:50.983773    2404 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 13:11:50.984659    2404 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:11:50.984659    2404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-894400-m02 ).networkadapters[0]).ipaddresses[0]
	I0318 13:11:53.385199    2404 main.go:141] libmachine: [stdout =====>] : 172.30.130.185
	
	I0318 13:11:53.385199    2404 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:11:53.391565    2404 main.go:141] libmachine: Using SSH client type: native
	I0318 13:11:53.391702    2404 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa89f80] 0xa8cb60 <nil>  [] 0s} 172.30.130.185 22 <nil> <nil>}
	I0318 13:11:53.391702    2404 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-894400-m02 && echo "multinode-894400-m02" | sudo tee /etc/hostname
	I0318 13:11:53.537642    2404 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-894400-m02
	
	I0318 13:11:53.537642    2404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-894400-m02 ).state
	I0318 13:11:55.549653    2404 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 13:11:55.550118    2404 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:11:55.550196    2404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-894400-m02 ).networkadapters[0]).ipaddresses[0]
	I0318 13:11:57.999231    2404 main.go:141] libmachine: [stdout =====>] : 172.30.130.185
	
	I0318 13:11:58.000224    2404 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:11:58.007457    2404 main.go:141] libmachine: Using SSH client type: native
	I0318 13:11:58.009541    2404 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa89f80] 0xa8cb60 <nil>  [] 0s} 172.30.130.185 22 <nil> <nil>}
	I0318 13:11:58.009541    2404 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-894400-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-894400-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-894400-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0318 13:11:58.159423    2404 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0318 13:11:58.159423    2404 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube3\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube3\minikube-integration\.minikube}
	I0318 13:11:58.159423    2404 buildroot.go:174] setting up certificates
	I0318 13:11:58.159423    2404 provision.go:84] configureAuth start
	I0318 13:11:58.159423    2404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-894400-m02 ).state
	I0318 13:12:00.199540    2404 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 13:12:00.199540    2404 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:12:00.199540    2404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-894400-m02 ).networkadapters[0]).ipaddresses[0]
	I0318 13:12:02.652920    2404 main.go:141] libmachine: [stdout =====>] : 172.30.130.185
	
	I0318 13:12:02.652920    2404 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:12:02.653115    2404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-894400-m02 ).state
	I0318 13:12:04.734456    2404 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 13:12:04.734456    2404 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:12:04.734456    2404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-894400-m02 ).networkadapters[0]).ipaddresses[0]
	I0318 13:12:07.180833    2404 main.go:141] libmachine: [stdout =====>] : 172.30.130.185
	
	I0318 13:12:07.180833    2404 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:12:07.181808    2404 provision.go:143] copyHostCerts
	I0318 13:12:07.181961    2404 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube3\minikube-integration\.minikube/ca.pem
	I0318 13:12:07.182303    2404 exec_runner.go:144] found C:\Users\jenkins.minikube3\minikube-integration\.minikube/ca.pem, removing ...
	I0318 13:12:07.182303    2404 exec_runner.go:203] rm: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.pem
	I0318 13:12:07.182726    2404 exec_runner.go:151] cp: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube3\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0318 13:12:07.183786    2404 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube3\minikube-integration\.minikube/cert.pem
	I0318 13:12:07.183852    2404 exec_runner.go:144] found C:\Users\jenkins.minikube3\minikube-integration\.minikube/cert.pem, removing ...
	I0318 13:12:07.183852    2404 exec_runner.go:203] rm: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cert.pem
	I0318 13:12:07.183852    2404 exec_runner.go:151] cp: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube3\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0318 13:12:07.185196    2404 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube3\minikube-integration\.minikube/key.pem
	I0318 13:12:07.185534    2404 exec_runner.go:144] found C:\Users\jenkins.minikube3\minikube-integration\.minikube/key.pem, removing ...
	I0318 13:12:07.185615    2404 exec_runner.go:203] rm: C:\Users\jenkins.minikube3\minikube-integration\.minikube\key.pem
	I0318 13:12:07.185841    2404 exec_runner.go:151] cp: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube3\minikube-integration\.minikube/key.pem (1679 bytes)
	I0318 13:12:07.186897    2404 provision.go:117] generating server cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-894400-m02 san=[127.0.0.1 172.30.130.185 localhost minikube multinode-894400-m02]
	I0318 13:12:07.408633    2404 provision.go:177] copyRemoteCerts
	I0318 13:12:07.422089    2404 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0318 13:12:07.422166    2404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-894400-m02 ).state
	I0318 13:12:09.529122    2404 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 13:12:09.529158    2404 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:12:09.529227    2404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-894400-m02 ).networkadapters[0]).ipaddresses[0]
	I0318 13:12:12.079818    2404 main.go:141] libmachine: [stdout =====>] : 172.30.130.185
	
	I0318 13:12:12.079818    2404 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:12:12.080951    2404 sshutil.go:53] new ssh client: &{IP:172.30.130.185 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\multinode-894400-m02\id_rsa Username:docker}
	I0318 13:12:12.189055    2404 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.7668847s)
	I0318 13:12:12.189055    2404 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0318 13:12:12.189055    2404 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0318 13:12:12.229946    2404 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0318 13:12:12.230004    2404 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1229 bytes)
	I0318 13:12:12.272060    2404 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0318 13:12:12.272060    2404 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0318 13:12:12.313905    2404 provision.go:87] duration metric: took 14.1543784s to configureAuth
	I0318 13:12:12.313905    2404 buildroot.go:189] setting minikube options for container-runtime
	I0318 13:12:12.314732    2404 config.go:182] Loaded profile config "multinode-894400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 13:12:12.314928    2404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-894400-m02 ).state
	I0318 13:12:14.336233    2404 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 13:12:14.336813    2404 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:12:14.336813    2404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-894400-m02 ).networkadapters[0]).ipaddresses[0]
	I0318 13:12:16.732010    2404 main.go:141] libmachine: [stdout =====>] : 172.30.130.185
	
	I0318 13:12:16.732010    2404 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:12:16.738261    2404 main.go:141] libmachine: Using SSH client type: native
	I0318 13:12:16.738870    2404 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa89f80] 0xa8cb60 <nil>  [] 0s} 172.30.130.185 22 <nil> <nil>}
	I0318 13:12:16.738870    2404 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0318 13:12:16.868485    2404 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0318 13:12:16.868544    2404 buildroot.go:70] root file system type: tmpfs
	I0318 13:12:16.868784    2404 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0318 13:12:16.868784    2404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-894400-m02 ).state
	I0318 13:12:18.909963    2404 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 13:12:18.909963    2404 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:12:18.910779    2404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-894400-m02 ).networkadapters[0]).ipaddresses[0]
	I0318 13:12:21.361089    2404 main.go:141] libmachine: [stdout =====>] : 172.30.130.185
	
	I0318 13:12:21.361594    2404 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:12:21.367439    2404 main.go:141] libmachine: Using SSH client type: native
	I0318 13:12:21.367817    2404 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa89f80] 0xa8cb60 <nil>  [] 0s} 172.30.130.185 22 <nil> <nil>}
	I0318 13:12:21.368023    2404 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.30.130.156"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0318 13:12:21.525431    2404 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.30.130.156
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0318 13:12:21.525577    2404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-894400-m02 ).state
	I0318 13:12:23.541908    2404 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 13:12:23.542838    2404 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:12:23.542838    2404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-894400-m02 ).networkadapters[0]).ipaddresses[0]
	I0318 13:12:25.954793    2404 main.go:141] libmachine: [stdout =====>] : 172.30.130.185
	
	I0318 13:12:25.955133    2404 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:12:25.960269    2404 main.go:141] libmachine: Using SSH client type: native
	I0318 13:12:25.961002    2404 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa89f80] 0xa8cb60 <nil>  [] 0s} 172.30.130.185 22 <nil> <nil>}
	I0318 13:12:25.961002    2404 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0318 13:12:28.191547    2404 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0318 13:12:28.191547    2404 machine.go:97] duration metric: took 43.8305662s to provisionDockerMachine
	I0318 13:12:28.191547    2404 start.go:293] postStartSetup for "multinode-894400-m02" (driver="hyperv")
	I0318 13:12:28.191547    2404 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0318 13:12:28.206808    2404 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0318 13:12:28.206808    2404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-894400-m02 ).state
	I0318 13:12:30.253637    2404 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 13:12:30.253773    2404 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:12:30.253911    2404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-894400-m02 ).networkadapters[0]).ipaddresses[0]
	I0318 13:12:32.698451    2404 main.go:141] libmachine: [stdout =====>] : 172.30.130.185
	
	I0318 13:12:32.699512    2404 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:12:32.699680    2404 sshutil.go:53] new ssh client: &{IP:172.30.130.185 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\multinode-894400-m02\id_rsa Username:docker}
	I0318 13:12:32.797860    2404 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.5909651s)
	I0318 13:12:32.808275    2404 ssh_runner.go:195] Run: cat /etc/os-release
	I0318 13:12:32.815517    2404 command_runner.go:130] > NAME=Buildroot
	I0318 13:12:32.815517    2404 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0318 13:12:32.815517    2404 command_runner.go:130] > ID=buildroot
	I0318 13:12:32.815517    2404 command_runner.go:130] > VERSION_ID=2023.02.9
	I0318 13:12:32.815517    2404 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0318 13:12:32.815517    2404 info.go:137] Remote host: Buildroot 2023.02.9
	I0318 13:12:32.815517    2404 filesync.go:126] Scanning C:\Users\jenkins.minikube3\minikube-integration\.minikube\addons for local assets ...
	I0318 13:12:32.815517    2404 filesync.go:126] Scanning C:\Users\jenkins.minikube3\minikube-integration\.minikube\files for local assets ...
	I0318 13:12:32.817088    2404 filesync.go:149] local asset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\134242.pem -> 134242.pem in /etc/ssl/certs
	I0318 13:12:32.817161    2404 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\134242.pem -> /etc/ssl/certs/134242.pem
	I0318 13:12:32.829332    2404 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0318 13:12:32.846367    2404 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\134242.pem --> /etc/ssl/certs/134242.pem (1708 bytes)
	I0318 13:12:32.888554    2404 start.go:296] duration metric: took 4.6969725s for postStartSetup
	I0318 13:12:32.888554    2404 fix.go:56] duration metric: took 1m24.9498895s for fixHost
	I0318 13:12:32.888554    2404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-894400-m02 ).state
	I0318 13:12:34.957144    2404 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 13:12:34.957338    2404 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:12:34.957338    2404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-894400-m02 ).networkadapters[0]).ipaddresses[0]
	I0318 13:12:37.357640    2404 main.go:141] libmachine: [stdout =====>] : 172.30.130.185
	
	I0318 13:12:37.357640    2404 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:12:37.362966    2404 main.go:141] libmachine: Using SSH client type: native
	I0318 13:12:37.363611    2404 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa89f80] 0xa8cb60 <nil>  [] 0s} 172.30.130.185 22 <nil> <nil>}
	I0318 13:12:37.363611    2404 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0318 13:12:37.491789    2404 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710767557.485432172
	
	I0318 13:12:37.491863    2404 fix.go:216] guest clock: 1710767557.485432172
	I0318 13:12:37.491863    2404 fix.go:229] Guest: 2024-03-18 13:12:37.485432172 +0000 UTC Remote: 2024-03-18 13:12:32.8885546 +0000 UTC m=+287.993265201 (delta=4.596877572s)
	I0318 13:12:37.491954    2404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-894400-m02 ).state
	I0318 13:12:39.563095    2404 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 13:12:39.563908    2404 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:12:39.564112    2404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-894400-m02 ).networkadapters[0]).ipaddresses[0]
	I0318 13:12:41.983974    2404 main.go:141] libmachine: [stdout =====>] : 172.30.130.185
	
	I0318 13:12:41.983974    2404 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:12:41.990077    2404 main.go:141] libmachine: Using SSH client type: native
	I0318 13:12:41.990077    2404 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa89f80] 0xa8cb60 <nil>  [] 0s} 172.30.130.185 22 <nil> <nil>}
	I0318 13:12:41.990664    2404 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1710767557
	I0318 13:12:42.127988    2404 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Mar 18 13:12:37 UTC 2024
	
	I0318 13:12:42.128107    2404 fix.go:236] clock set: Mon Mar 18 13:12:37 UTC 2024
	 (err=<nil>)
	I0318 13:12:42.128107    2404 start.go:83] releasing machines lock for "multinode-894400-m02", held for 1m34.1895247s
	I0318 13:12:42.128359    2404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-894400-m02 ).state
	I0318 13:12:44.139865    2404 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 13:12:44.139865    2404 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:12:44.140535    2404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-894400-m02 ).networkadapters[0]).ipaddresses[0]
	I0318 13:12:46.529473    2404 main.go:141] libmachine: [stdout =====>] : 172.30.130.185
	
	I0318 13:12:46.529721    2404 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:12:46.532836    2404 out.go:177] * Found network options:
	I0318 13:12:46.535816    2404 out.go:177]   - NO_PROXY=172.30.130.156
	W0318 13:12:46.538163    2404 proxy.go:119] fail to check proxy env: Error ip not in block
	I0318 13:12:46.540959    2404 out.go:177]   - NO_PROXY=172.30.130.156
	W0318 13:12:46.543625    2404 proxy.go:119] fail to check proxy env: Error ip not in block
	W0318 13:12:46.544879    2404 proxy.go:119] fail to check proxy env: Error ip not in block
	I0318 13:12:46.547270    2404 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0318 13:12:46.547270    2404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-894400-m02 ).state
	I0318 13:12:46.557261    2404 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0318 13:12:46.557261    2404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-894400-m02 ).state
	I0318 13:12:48.658111    2404 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 13:12:48.658111    2404 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:12:48.658111    2404 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 13:12:48.658111    2404 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:12:48.658111    2404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-894400-m02 ).networkadapters[0]).ipaddresses[0]
	I0318 13:12:48.658111    2404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-894400-m02 ).networkadapters[0]).ipaddresses[0]
	I0318 13:12:51.199167    2404 main.go:141] libmachine: [stdout =====>] : 172.30.130.185
	
	I0318 13:12:51.199928    2404 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:12:51.200019    2404 sshutil.go:53] new ssh client: &{IP:172.30.130.185 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\multinode-894400-m02\id_rsa Username:docker}
	I0318 13:12:51.218781    2404 main.go:141] libmachine: [stdout =====>] : 172.30.130.185
	
	I0318 13:12:51.218819    2404 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:12:51.218819    2404 sshutil.go:53] new ssh client: &{IP:172.30.130.185 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\multinode-894400-m02\id_rsa Username:docker}
	I0318 13:12:51.362037    2404 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0318 13:12:51.362037    2404 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.8147309s)
	I0318 13:12:51.362037    2404 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	I0318 13:12:51.362037    2404 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (4.8047402s)
	W0318 13:12:51.362037    2404 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0318 13:12:51.374074    2404 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0318 13:12:51.400152    2404 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0318 13:12:51.400548    2404 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
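The `find … -exec mv` invocation above renames any bridge/podman CNI configs to `*.mk_disabled` so they are skipped. A minimal sketch of the same pattern against a scratch directory (paths hypothetical, GNU find assumed for `-printf`):

```shell
# Sketch: disable matching CNI configs by renaming them, as the log does.
d=$(mktemp -d)
touch "$d/87-podman-bridge.conflist" "$d/10-kindnet.conflist"
# Rename bridge/podman configs not already disabled, printing each path:
find "$d" -maxdepth 1 -type f \( \( -name '*bridge*' -o -name '*podman*' \) -a ! -name '*.mk_disabled' \) \
  -printf '%p, ' -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;
echo
ls "$d"
rm -rf "$d"
```

The `! -name '*.mk_disabled'` guard makes the rename idempotent across reruns.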
	I0318 13:12:51.400548    2404 start.go:494] detecting cgroup driver to use...
	I0318 13:12:51.400802    2404 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0318 13:12:51.433233    2404 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0318 13:12:51.444222    2404 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0318 13:12:51.474714    2404 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0318 13:12:51.493971    2404 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0318 13:12:51.505462    2404 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0318 13:12:51.535156    2404 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0318 13:12:51.564076    2404 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0318 13:12:51.593370    2404 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0318 13:12:51.625124    2404 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0318 13:12:51.656333    2404 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0318 13:12:51.686821    2404 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0318 13:12:51.703903    2404 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0318 13:12:51.715363    2404 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0318 13:12:51.744976    2404 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 13:12:51.925119    2404 ssh_runner.go:195] Run: sudo systemctl restart containerd
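The `sed -i -r` edits above rewrite `/etc/containerd/config.toml` in place to select the cgroupfs driver. A self-contained sketch of the key substitution against a scratch copy (file contents hypothetical, GNU sed assumed):

```shell
# Sketch: flip SystemdCgroup off in a throwaway config, preserving indentation.
cfg=$(mktemp)
printf '%s\n' '[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]' \
  '  SystemdCgroup = true' > "$cfg"
# Same capture-group pattern the log runs over /etc/containerd/config.toml:
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$cfg"
grep -q 'SystemdCgroup = false' "$cfg" && echo "cgroupfs driver set"
rm -f "$cfg"
```

The `\1` backreference keeps the original leading whitespace so the TOML structure is untouched.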
	I0318 13:12:51.954154    2404 start.go:494] detecting cgroup driver to use...
	I0318 13:12:51.965104    2404 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0318 13:12:51.988950    2404 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0318 13:12:51.988950    2404 command_runner.go:130] > [Unit]
	I0318 13:12:51.988950    2404 command_runner.go:130] > Description=Docker Application Container Engine
	I0318 13:12:51.988950    2404 command_runner.go:130] > Documentation=https://docs.docker.com
	I0318 13:12:51.988950    2404 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0318 13:12:51.988950    2404 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0318 13:12:51.988950    2404 command_runner.go:130] > StartLimitBurst=3
	I0318 13:12:51.988950    2404 command_runner.go:130] > StartLimitIntervalSec=60
	I0318 13:12:51.988950    2404 command_runner.go:130] > [Service]
	I0318 13:12:51.988950    2404 command_runner.go:130] > Type=notify
	I0318 13:12:51.988950    2404 command_runner.go:130] > Restart=on-failure
	I0318 13:12:51.988950    2404 command_runner.go:130] > Environment=NO_PROXY=172.30.130.156
	I0318 13:12:51.988950    2404 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0318 13:12:51.988950    2404 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0318 13:12:51.988950    2404 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0318 13:12:51.988950    2404 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0318 13:12:51.988950    2404 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0318 13:12:51.988950    2404 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0318 13:12:51.988950    2404 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0318 13:12:51.988950    2404 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0318 13:12:51.988950    2404 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0318 13:12:51.988950    2404 command_runner.go:130] > ExecStart=
	I0318 13:12:51.988950    2404 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0318 13:12:51.988950    2404 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0318 13:12:51.988950    2404 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0318 13:12:51.988950    2404 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0318 13:12:51.988950    2404 command_runner.go:130] > LimitNOFILE=infinity
	I0318 13:12:51.988950    2404 command_runner.go:130] > LimitNPROC=infinity
	I0318 13:12:51.989810    2404 command_runner.go:130] > LimitCORE=infinity
	I0318 13:12:51.989810    2404 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0318 13:12:51.989810    2404 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0318 13:12:51.989810    2404 command_runner.go:130] > TasksMax=infinity
	I0318 13:12:51.989810    2404 command_runner.go:130] > TimeoutStartSec=0
	I0318 13:12:51.989810    2404 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0318 13:12:51.989810    2404 command_runner.go:130] > Delegate=yes
	I0318 13:12:51.989810    2404 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0318 13:12:51.989810    2404 command_runner.go:130] > KillMode=process
	I0318 13:12:51.989810    2404 command_runner.go:130] > [Install]
	I0318 13:12:51.989810    2404 command_runner.go:130] > WantedBy=multi-user.target
	I0318 13:12:52.004160    2404 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0318 13:12:52.038451    2404 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0318 13:12:52.078410    2404 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0318 13:12:52.117147    2404 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0318 13:12:52.155960    2404 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0318 13:12:52.224473    2404 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0318 13:12:52.246925    2404 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0318 13:12:52.280923    2404 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0318 13:12:52.296152    2404 ssh_runner.go:195] Run: which cri-dockerd
	I0318 13:12:52.301950    2404 command_runner.go:130] > /usr/bin/cri-dockerd
	I0318 13:12:52.316847    2404 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0318 13:12:52.336256    2404 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0318 13:12:52.378754    2404 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0318 13:12:52.577690    2404 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0318 13:12:52.751724    2404 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0318 13:12:52.751883    2404 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0318 13:12:52.796897    2404 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 13:12:52.993289    2404 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0318 13:12:55.546613    2404 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5527863s)
	I0318 13:12:55.557642    2404 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0318 13:12:55.596056    2404 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0318 13:12:55.627123    2404 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0318 13:12:55.814803    2404 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0318 13:12:56.000980    2404 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 13:12:56.175890    2404 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0318 13:12:56.214338    2404 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0318 13:12:56.251700    2404 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 13:12:56.427833    2404 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0318 13:12:56.523350    2404 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0318 13:12:56.534195    2404 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0318 13:12:56.542185    2404 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0318 13:12:56.542185    2404 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0318 13:12:56.542185    2404 command_runner.go:130] > Device: 0,22	Inode: 846         Links: 1
	I0318 13:12:56.542185    2404 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0318 13:12:56.542185    2404 command_runner.go:130] > Access: 2024-03-18 13:12:56.431716269 +0000
	I0318 13:12:56.542185    2404 command_runner.go:130] > Modify: 2024-03-18 13:12:56.431716269 +0000
	I0318 13:12:56.542185    2404 command_runner.go:130] > Change: 2024-03-18 13:12:56.435716249 +0000
	I0318 13:12:56.542185    2404 command_runner.go:130] >  Birth: -
	I0318 13:12:56.542185    2404 start.go:562] Will wait 60s for crictl version
	I0318 13:12:56.552172    2404 ssh_runner.go:195] Run: which crictl
	I0318 13:12:56.557810    2404 command_runner.go:130] > /usr/bin/crictl
	I0318 13:12:56.569418    2404 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0318 13:12:56.636628    2404 command_runner.go:130] > Version:  0.1.0
	I0318 13:12:56.636628    2404 command_runner.go:130] > RuntimeName:  docker
	I0318 13:12:56.636628    2404 command_runner.go:130] > RuntimeVersion:  25.0.4
	I0318 13:12:56.636628    2404 command_runner.go:130] > RuntimeApiVersion:  v1
	I0318 13:12:56.636628    2404 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  25.0.4
	RuntimeApiVersion:  v1
	I0318 13:12:56.646742    2404 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0318 13:12:56.677837    2404 command_runner.go:130] > 25.0.4
	I0318 13:12:56.685878    2404 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0318 13:12:56.714384    2404 command_runner.go:130] > 25.0.4
	I0318 13:12:56.719399    2404 out.go:204] * Preparing Kubernetes v1.28.4 on Docker 25.0.4 ...
	I0318 13:12:56.721414    2404 out.go:177]   - env NO_PROXY=172.30.130.156
	I0318 13:12:56.724427    2404 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0318 13:12:56.728385    2404 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0318 13:12:56.728385    2404 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0318 13:12:56.728385    2404 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0318 13:12:56.728385    2404 ip.go:207] Found interface: {Index:9 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:93:df:b5 Flags:up|broadcast|multicast|running}
	I0318 13:12:56.731375    2404 ip.go:210] interface addr: fe80::572b:6b1d:9130:e88b/64
	I0318 13:12:56.731375    2404 ip.go:210] interface addr: 172.30.128.1/20
	I0318 13:12:56.741373    2404 ssh_runner.go:195] Run: grep 172.30.128.1	host.minikube.internal$ /etc/hosts
	I0318 13:12:56.747824    2404 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.30.128.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
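The hosts-file update above follows a drop-then-append pattern: filter out any stale `host.minikube.internal` line, append the fresh one, and copy the result back over `/etc/hosts`. A sketch against a scratch file (addresses hypothetical, bash `$'…'` quoting assumed):

```shell
# Sketch of the hosts-entry refresh the log performs (scratch file, not /etc/hosts).
hosts=$(mktemp)
printf '127.0.0.1\tlocalhost\n172.30.128.1\thost.minikube.internal\n' > "$hosts"
# Drop the stale tab-delimited entry, append the current one, swap files in:
{ grep -v $'\thost.minikube.internal$' "$hosts"; printf '172.30.128.1\thost.minikube.internal\n'; } > "$hosts.new"
mv "$hosts.new" "$hosts"
grep -c 'host.minikube.internal' "$hosts"   # one entry, no duplicates
rm -f "$hosts"
```

Anchoring the pattern with a leading tab and trailing `$` avoids clobbering unrelated lines that merely mention the hostname.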
	I0318 13:12:56.767982    2404 mustload.go:65] Loading cluster: multinode-894400
	I0318 13:12:56.768521    2404 config.go:182] Loaded profile config "multinode-894400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 13:12:56.769261    2404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-894400 ).state
	I0318 13:12:58.828481    2404 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 13:12:58.829214    2404 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:12:58.829214    2404 host.go:66] Checking if "multinode-894400" exists ...
	I0318 13:12:58.829874    2404 certs.go:68] Setting up C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-894400 for IP: 172.30.130.185
	I0318 13:12:58.829874    2404 certs.go:194] generating shared ca certs ...
	I0318 13:12:58.829874    2404 certs.go:226] acquiring lock for ca certs: {Name:mk09ff4ada22228900e1815c250154c7d8d76854 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 13:12:58.830607    2404 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.key
	I0318 13:12:58.830607    2404 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.key
	I0318 13:12:58.831177    2404 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0318 13:12:58.831484    2404 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0318 13:12:58.831484    2404 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0318 13:12:58.831484    2404 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0318 13:12:58.832235    2404 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\13424.pem (1338 bytes)
	W0318 13:12:58.832235    2404 certs.go:480] ignoring C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\13424_empty.pem, impossibly tiny 0 bytes
	I0318 13:12:58.832235    2404 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0318 13:12:58.832966    2404 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0318 13:12:58.833273    2404 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0318 13:12:58.833573    2404 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0318 13:12:58.834032    2404 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\134242.pem (1708 bytes)
	I0318 13:12:58.834310    2404 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0318 13:12:58.834474    2404 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\13424.pem -> /usr/share/ca-certificates/13424.pem
	I0318 13:12:58.834674    2404 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\134242.pem -> /usr/share/ca-certificates/134242.pem
	I0318 13:12:58.834929    2404 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0318 13:12:58.880571    2404 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0318 13:12:58.923341    2404 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0318 13:12:58.964738    2404 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0318 13:12:59.007898    2404 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0318 13:12:59.049852    2404 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\13424.pem --> /usr/share/ca-certificates/13424.pem (1338 bytes)
	I0318 13:12:59.094314    2404 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\134242.pem --> /usr/share/ca-certificates/134242.pem (1708 bytes)
	I0318 13:12:59.150670    2404 ssh_runner.go:195] Run: openssl version
	I0318 13:12:59.159202    2404 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0318 13:12:59.170544    2404 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0318 13:12:59.206253    2404 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0318 13:12:59.213194    2404 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Mar 18 11:07 /usr/share/ca-certificates/minikubeCA.pem
	I0318 13:12:59.213194    2404 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 18 11:07 /usr/share/ca-certificates/minikubeCA.pem
	I0318 13:12:59.224821    2404 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0318 13:12:59.232869    2404 command_runner.go:130] > b5213941
	I0318 13:12:59.245647    2404 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0318 13:12:59.274388    2404 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13424.pem && ln -fs /usr/share/ca-certificates/13424.pem /etc/ssl/certs/13424.pem"
	I0318 13:12:59.303443    2404 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13424.pem
	I0318 13:12:59.310133    2404 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Mar 18 11:25 /usr/share/ca-certificates/13424.pem
	I0318 13:12:59.310225    2404 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 18 11:25 /usr/share/ca-certificates/13424.pem
	I0318 13:12:59.320396    2404 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13424.pem
	I0318 13:12:59.328062    2404 command_runner.go:130] > 51391683
	I0318 13:12:59.340345    2404 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13424.pem /etc/ssl/certs/51391683.0"
	I0318 13:12:59.371208    2404 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/134242.pem && ln -fs /usr/share/ca-certificates/134242.pem /etc/ssl/certs/134242.pem"
	I0318 13:12:59.406284    2404 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/134242.pem
	I0318 13:12:59.411851    2404 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Mar 18 11:25 /usr/share/ca-certificates/134242.pem
	I0318 13:12:59.412491    2404 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 18 11:25 /usr/share/ca-certificates/134242.pem
	I0318 13:12:59.423279    2404 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/134242.pem
	I0318 13:12:59.431301    2404 command_runner.go:130] > 3ec20f2e
	I0318 13:12:59.442654    2404 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/134242.pem /etc/ssl/certs/3ec20f2e.0"
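The `openssl x509 -hash` / `ln -fs …/<hash>.0` pairs above build the symlink layout OpenSSL uses to look up CA certificates by subject hash. A sketch with a throwaway self-signed cert (all paths and the CN are hypothetical):

```shell
# Sketch: derive the subject hash and create the <hash>.0 lookup symlink.
dir=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -keyout "$dir/ca.key" \
  -out "$dir/ca.pem" -days 1 -subj "/CN=sketchCA" 2>/dev/null
hash=$(openssl x509 -hash -noout -in "$dir/ca.pem")
# OpenSSL resolves trust lookups via <subject-hash>.0 in the certs dir:
ln -fs "$dir/ca.pem" "$dir/$hash.0"
openssl x509 -noout -subject -in "$dir/$hash.0"
rm -rf "$dir"
```

This mirrors what `c_rehash` automates; the log does it per-certificate because only three certs are involved.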
	I0318 13:12:59.473181    2404 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0318 13:12:59.478195    2404 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0318 13:12:59.479073    2404 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0318 13:12:59.479073    2404 kubeadm.go:928] updating node {m02 172.30.130.185 8443 v1.28.4 docker false true} ...
	I0318 13:12:59.479615    2404 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-894400-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.30.130.185
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-894400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0318 13:12:59.490457    2404 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0318 13:12:59.506642    2404 command_runner.go:130] > kubeadm
	I0318 13:12:59.506642    2404 command_runner.go:130] > kubectl
	I0318 13:12:59.506642    2404 command_runner.go:130] > kubelet
	I0318 13:12:59.506741    2404 binaries.go:44] Found k8s binaries, skipping transfer
	I0318 13:12:59.518054    2404 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0318 13:12:59.534962    2404 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I0318 13:12:59.565535    2404 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0318 13:12:59.604548    2404 ssh_runner.go:195] Run: grep 172.30.130.156	control-plane.minikube.internal$ /etc/hosts
	I0318 13:12:59.610622    2404 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.30.130.156	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 13:12:59.639384    2404 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 13:12:59.826223    2404 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 13:12:59.855586    2404 host.go:66] Checking if "multinode-894400" exists ...
	I0318 13:12:59.856348    2404 start.go:316] joinCluster: &{Name:multinode-894400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.
4 ClusterName:multinode-894400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.30.130.156 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.30.130.185 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.30.137.140 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-d
ns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror:
DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 13:12:59.856574    2404 start.go:329] removing existing worker node "m02" before attempting to rejoin cluster: &{Name:m02 IP:172.30.130.185 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0318 13:12:59.856574    2404 host.go:66] Checking if "multinode-894400-m02" exists ...
	I0318 13:12:59.857189    2404 mustload.go:65] Loading cluster: multinode-894400
	I0318 13:12:59.857744    2404 config.go:182] Loaded profile config "multinode-894400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 13:12:59.858090    2404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-894400 ).state
	I0318 13:13:01.946469    2404 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 13:13:01.946580    2404 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:13:01.946580    2404 host.go:66] Checking if "multinode-894400" exists ...
	I0318 13:13:01.947273    2404 api_server.go:166] Checking apiserver status ...
	I0318 13:13:01.958213    2404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:13:01.958213    2404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-894400 ).state
	I0318 13:13:04.029455    2404 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 13:13:04.029815    2404 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:13:04.029815    2404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-894400 ).networkadapters[0]).ipaddresses[0]
	I0318 13:13:06.500412    2404 main.go:141] libmachine: [stdout =====>] : 172.30.130.156
	
	I0318 13:13:06.500412    2404 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:13:06.500412    2404 sshutil.go:53] new ssh client: &{IP:172.30.130.156 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\multinode-894400\id_rsa Username:docker}
	I0318 13:13:06.612960    2404 command_runner.go:130] > 1904
	I0318 13:13:06.613322    2404 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (4.6550744s)
	I0318 13:13:06.624008    2404 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1904/cgroup
	W0318 13:13:06.640979    2404 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1904/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0318 13:13:06.652797    2404 ssh_runner.go:195] Run: ls
	I0318 13:13:06.659353    2404 api_server.go:253] Checking apiserver healthz at https://172.30.130.156:8443/healthz ...
	I0318 13:13:06.668845    2404 api_server.go:279] https://172.30.130.156:8443/healthz returned 200:
	ok
	I0318 13:13:06.679968    2404 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl drain multinode-894400-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data
	I0318 13:13:06.804006    2404 command_runner.go:130] ! Warning: ignoring DaemonSet-managed Pods: kube-system/kindnet-k5lpg, kube-system/kube-proxy-8bdmn
	I0318 13:13:09.844606    2404 command_runner.go:130] > node/multinode-894400-m02 cordoned
	I0318 13:13:09.844687    2404 command_runner.go:130] > pod "busybox-5b5d89c9d6-8btgf" has DeletionTimestamp older than 1 seconds, skipping
	I0318 13:13:09.844687    2404 command_runner.go:130] > node/multinode-894400-m02 drained
	I0318 13:13:09.844687    2404 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl drain multinode-894400-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data: (3.1646965s)
	I0318 13:13:09.844687    2404 node.go:128] successfully drained node "multinode-894400-m02"
	I0318 13:13:09.844687    2404 ssh_runner.go:195] Run: /bin/bash -c "KUBECONFIG=/var/lib/minikube/kubeconfig sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --force --ignore-preflight-errors=all --cri-socket=unix:///var/run/cri-dockerd.sock"
	I0318 13:13:09.844687    2404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-894400-m02 ).state
	I0318 13:13:11.932917    2404 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 13:13:11.933133    2404 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:13:11.933309    2404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-894400-m02 ).networkadapters[0]).ipaddresses[0]
	I0318 13:13:14.426217    2404 main.go:141] libmachine: [stdout =====>] : 172.30.130.185
	
	I0318 13:13:14.426931    2404 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:13:14.427222    2404 sshutil.go:53] new ssh client: &{IP:172.30.130.185 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\multinode-894400-m02\id_rsa Username:docker}
	I0318 13:13:14.828886    2404 command_runner.go:130] ! W0318 13:13:14.810383    1551 removeetcdmember.go:106] [reset] No kubeadm config, using etcd pod spec to get data directory
	I0318 13:13:15.423703    2404 command_runner.go:130] ! W0318 13:13:15.404141    1551 cleanupnode.go:99] [reset] Failed to remove containers: failed to stop running pod d31120bfd5cc1a38da24c03574ff5be355cc2afa037bec7fa98bc10c7a2fdb1f: output: E0318 13:13:15.079506    1622 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod \"busybox-5b5d89c9d6-8btgf_default\" network: cni config uninitialized" podSandboxID="d31120bfd5cc1a38da24c03574ff5be355cc2afa037bec7fa98bc10c7a2fdb1f"
	I0318 13:13:15.423703    2404 command_runner.go:130] ! time="2024-03-18T13:13:15Z" level=fatal msg="stopping the pod sandbox \"d31120bfd5cc1a38da24c03574ff5be355cc2afa037bec7fa98bc10c7a2fdb1f\": rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod \"busybox-5b5d89c9d6-8btgf_default\" network: cni config uninitialized"
	I0318 13:13:15.423703    2404 command_runner.go:130] ! : exit status 1
	I0318 13:13:15.446299    2404 command_runner.go:130] > [preflight] Running pre-flight checks
	I0318 13:13:15.446299    2404 command_runner.go:130] > [reset] Deleted contents of the etcd data directory: /var/lib/etcd
	I0318 13:13:15.446299    2404 command_runner.go:130] > [reset] Stopping the kubelet service
	I0318 13:13:15.446299    2404 command_runner.go:130] > [reset] Unmounting mounted directories in "/var/lib/kubelet"
	I0318 13:13:15.446299    2404 command_runner.go:130] > [reset] Deleting contents of directories: [/etc/kubernetes/manifests /var/lib/kubelet /etc/kubernetes/pki]
	I0318 13:13:15.446299    2404 command_runner.go:130] > [reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
	I0318 13:13:15.446299    2404 command_runner.go:130] > The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d
	I0318 13:13:15.446299    2404 command_runner.go:130] > The reset process does not reset or clean up iptables rules or IPVS tables.
	I0318 13:13:15.446299    2404 command_runner.go:130] > If you wish to reset iptables, you must do so manually by using the "iptables" command.
	I0318 13:13:15.446299    2404 command_runner.go:130] > If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
	I0318 13:13:15.446299    2404 command_runner.go:130] > to reset your system's IPVS tables.
	I0318 13:13:15.446299    2404 command_runner.go:130] > The reset process does not clean your kubeconfig files and you must remove them manually.
	I0318 13:13:15.446299    2404 command_runner.go:130] > Please, check the contents of the $HOME/.kube/config file.
	I0318 13:13:15.446299    2404 ssh_runner.go:235] Completed: /bin/bash -c "KUBECONFIG=/var/lib/minikube/kubeconfig sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --force --ignore-preflight-errors=all --cri-socket=unix:///var/run/cri-dockerd.sock": (5.6015704s)
	I0318 13:13:15.446299    2404 node.go:155] successfully reset node "multinode-894400-m02"
	I0318 13:13:15.447860    2404 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	I0318 13:13:15.448598    2404 kapi.go:59] client config for multinode-894400: &rest.Config{Host:"https://172.30.130.156:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\multinode-894400\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\multinode-894400\\client.key", CAFile:"C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1e8b2e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0318 13:13:15.449982    2404 cert_rotation.go:137] Starting client certificate rotation controller
	I0318 13:13:15.449982    2404 request.go:1212] Request Body: {"kind":"DeleteOptions","apiVersion":"v1"}
	I0318 13:13:15.449982    2404 round_trippers.go:463] DELETE https://172.30.130.156:8443/api/v1/nodes/multinode-894400-m02
	I0318 13:13:15.449982    2404 round_trippers.go:469] Request Headers:
	I0318 13:13:15.449982    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:15.449982    2404 round_trippers.go:473]     Content-Type: application/json
	I0318 13:13:15.449982    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:13:15.466653    2404 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I0318 13:13:15.466653    2404 round_trippers.go:577] Response Headers:
	I0318 13:13:15.466653    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:13:15.466653    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:13:15.466653    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:13:15.466653    2404 round_trippers.go:580]     Content-Length: 171
	I0318 13:13:15.466653    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:13:15 GMT
	I0318 13:13:15.466653    2404 round_trippers.go:580]     Audit-Id: 0dfa8ab5-c803-46df-986b-ecc0de7665e3
	I0318 13:13:15.466653    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:13:15.466653    2404 request.go:1212] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"multinode-894400-m02","kind":"nodes","uid":"714c6de4-85c9-4721-aea6-ad8f2be4e14c"}}
	I0318 13:13:15.467476    2404 node.go:180] successfully deleted node "multinode-894400-m02"
	I0318 13:13:15.467476    2404 start.go:333] successfully removed existing worker node "m02" from cluster: &{Name:m02 IP:172.30.130.185 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0318 13:13:15.467476    2404 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0318 13:13:15.467732    2404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-894400 ).state
	I0318 13:13:17.515460    2404 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 13:13:17.516450    2404 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:13:17.516564    2404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-894400 ).networkadapters[0]).ipaddresses[0]
	I0318 13:13:19.936774    2404 main.go:141] libmachine: [stdout =====>] : 172.30.130.156
	
	I0318 13:13:19.936774    2404 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:13:19.937861    2404 sshutil.go:53] new ssh client: &{IP:172.30.130.156 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\multinode-894400\id_rsa Username:docker}
	I0318 13:13:20.132348    2404 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token h3emo0.3od1rtlfoqng84m0 --discovery-token-ca-cert-hash sha256:0e85bffcabbe1ecc87b47edfa4897ded335ea6d7a4ea5ae94c3523fb207c8760 
	I0318 13:13:20.132392    2404 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0": (4.6647415s)
	I0318 13:13:20.132392    2404 start.go:342] trying to join worker node "m02" to cluster: &{Name:m02 IP:172.30.130.185 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0318 13:13:20.132392    2404 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token h3emo0.3od1rtlfoqng84m0 --discovery-token-ca-cert-hash sha256:0e85bffcabbe1ecc87b47edfa4897ded335ea6d7a4ea5ae94c3523fb207c8760 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=multinode-894400-m02"
	I0318 13:13:20.374892    2404 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0318 13:13:23.193670    2404 command_runner.go:130] > [preflight] Running pre-flight checks
	I0318 13:13:23.194487    2404 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0318 13:13:23.194537    2404 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0318 13:13:23.194537    2404 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0318 13:13:23.194537    2404 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0318 13:13:23.194537    2404 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0318 13:13:23.194613    2404 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I0318 13:13:23.194613    2404 command_runner.go:130] > This node has joined the cluster:
	I0318 13:13:23.194613    2404 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0318 13:13:23.194613    2404 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0318 13:13:23.194613    2404 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0318 13:13:23.194697    2404 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token h3emo0.3od1rtlfoqng84m0 --discovery-token-ca-cert-hash sha256:0e85bffcabbe1ecc87b47edfa4897ded335ea6d7a4ea5ae94c3523fb207c8760 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=multinode-894400-m02": (3.0622822s)
	I0318 13:13:23.194733    2404 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0318 13:13:23.415982    2404 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
	I0318 13:13:23.631094    2404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-894400-m02 minikube.k8s.io/updated_at=2024_03_18T13_13_23_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=16bdcbec856cf730004e5bed78d1b7625f13388a minikube.k8s.io/name=multinode-894400 minikube.k8s.io/primary=false
	I0318 13:13:23.787652    2404 command_runner.go:130] > node/multinode-894400-m02 labeled
	I0318 13:13:23.790754    2404 start.go:318] duration metric: took 23.9342309s to joinCluster
	I0318 13:13:23.790754    2404 start.go:234] Will wait 6m0s for node &{Name:m02 IP:172.30.130.185 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0318 13:13:23.796901    2404 out.go:177] * Verifying Kubernetes components...
	I0318 13:13:23.791729    2404 config.go:182] Loaded profile config "multinode-894400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 13:13:23.811059    2404 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 13:13:24.076966    2404 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 13:13:24.106074    2404 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	I0318 13:13:24.107006    2404 kapi.go:59] client config for multinode-894400: &rest.Config{Host:"https://172.30.130.156:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\multinode-894400\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\multinode-894400\\client.key", CAFile:"C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1e8b2e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0318 13:13:24.107807    2404 node_ready.go:35] waiting up to 6m0s for node "multinode-894400-m02" to be "Ready" ...
	I0318 13:13:24.107985    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400-m02
	I0318 13:13:24.107985    2404 round_trippers.go:469] Request Headers:
	I0318 13:13:24.107985    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:24.107985    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:13:24.111791    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:13:24.111917    2404 round_trippers.go:577] Response Headers:
	I0318 13:13:24.111917    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:13:24.111917    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:13:24.111917    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:13:24.111917    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:13:24 GMT
	I0318 13:13:24.111917    2404 round_trippers.go:580]     Audit-Id: bad236c7-32fe-4927-8933-37c6bd2bb3ea
	I0318 13:13:24.111917    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:13:24.112182    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"1487daaa-7c0a-4f25-84c8-3e409d8d04b7","resourceVersion":"2075","creationTimestamp":"2024-03-18T13:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T13_13_23_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T13:13:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 3689 chars]
	I0318 13:13:24.609602    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400-m02
	I0318 13:13:24.609602    2404 round_trippers.go:469] Request Headers:
	I0318 13:13:24.609774    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:24.609774    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:13:24.613323    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:13:24.614345    2404 round_trippers.go:577] Response Headers:
	I0318 13:13:24.614345    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:13:24.614406    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:13:24.614406    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:13:24.614455    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:13:24 GMT
	I0318 13:13:24.614455    2404 round_trippers.go:580]     Audit-Id: c2a8c207-329b-4fa3-b15c-c9cb136f3048
	I0318 13:13:24.614455    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:13:24.614839    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"1487daaa-7c0a-4f25-84c8-3e409d8d04b7","resourceVersion":"2075","creationTimestamp":"2024-03-18T13:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T13_13_23_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T13:13:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 3689 chars]
	I0318 13:13:25.122131    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400-m02
	I0318 13:13:25.122131    2404 round_trippers.go:469] Request Headers:
	I0318 13:13:25.122131    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:13:25.122131    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:25.125348    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:13:25.126523    2404 round_trippers.go:577] Response Headers:
	I0318 13:13:25.126523    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:13:25 GMT
	I0318 13:13:25.126523    2404 round_trippers.go:580]     Audit-Id: 3117168d-ce41-4647-8887-064e01f164f7
	I0318 13:13:25.126523    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:13:25.126523    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:13:25.126641    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:13:25.126641    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:13:25.126764    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"1487daaa-7c0a-4f25-84c8-3e409d8d04b7","resourceVersion":"2075","creationTimestamp":"2024-03-18T13:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T13_13_23_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T13:13:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 3689 chars]
	I0318 13:13:25.621917    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400-m02
	I0318 13:13:25.621917    2404 round_trippers.go:469] Request Headers:
	I0318 13:13:25.621917    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:25.621917    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:13:25.625501    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:13:25.625501    2404 round_trippers.go:577] Response Headers:
	I0318 13:13:25.625501    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:13:25.625501    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:13:25 GMT
	I0318 13:13:25.625501    2404 round_trippers.go:580]     Audit-Id: 0601b344-e90b-4873-b61f-8357cbe3824b
	I0318 13:13:25.625501    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:13:25.625501    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:13:25.625501    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:13:25.626456    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"1487daaa-7c0a-4f25-84c8-3e409d8d04b7","resourceVersion":"2075","creationTimestamp":"2024-03-18T13:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T13_13_23_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T13:13:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 3689 chars]
	I0318 13:13:26.122925    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400-m02
	I0318 13:13:26.122925    2404 round_trippers.go:469] Request Headers:
	I0318 13:13:26.122925    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:26.122925    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:13:26.127852    2404 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:13:26.127852    2404 round_trippers.go:577] Response Headers:
	I0318 13:13:26.127852    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:13:26.127923    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:13:26.127923    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:13:26 GMT
	I0318 13:13:26.127923    2404 round_trippers.go:580]     Audit-Id: 44074f95-811a-4e0c-8124-ef0c0f696af2
	I0318 13:13:26.127923    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:13:26.127923    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:13:26.127923    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"1487daaa-7c0a-4f25-84c8-3e409d8d04b7","resourceVersion":"2075","creationTimestamp":"2024-03-18T13:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T13_13_23_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T13:13:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 3689 chars]
	I0318 13:13:26.127923    2404 node_ready.go:53] node "multinode-894400-m02" has status "Ready":"False"
	I0318 13:13:26.610764    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400-m02
	I0318 13:13:26.610764    2404 round_trippers.go:469] Request Headers:
	I0318 13:13:26.610764    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:26.610764    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:13:26.617844    2404 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0318 13:13:26.617844    2404 round_trippers.go:577] Response Headers:
	I0318 13:13:26.617844    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:13:26.617844    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:13:26 GMT
	I0318 13:13:26.617844    2404 round_trippers.go:580]     Audit-Id: c2d68481-9ab5-4b4a-9c2a-11d1dff8736d
	I0318 13:13:26.617844    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:13:26.617844    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:13:26.617844    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:13:26.617844    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"1487daaa-7c0a-4f25-84c8-3e409d8d04b7","resourceVersion":"2075","creationTimestamp":"2024-03-18T13:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T13_13_23_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T13:13:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 3689 chars]
	I0318 13:13:27.109206    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400-m02
	I0318 13:13:27.109206    2404 round_trippers.go:469] Request Headers:
	I0318 13:13:27.109206    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:27.109206    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:13:27.112801    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:13:27.112801    2404 round_trippers.go:577] Response Headers:
	I0318 13:13:27.112801    2404 round_trippers.go:580]     Audit-Id: 5ae9fe3b-71ae-4a61-9303-eaa1d2a8a680
	I0318 13:13:27.113012    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:13:27.113012    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:13:27.113012    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:13:27.113012    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:13:27.113012    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:13:27 GMT
	I0318 13:13:27.113264    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"1487daaa-7c0a-4f25-84c8-3e409d8d04b7","resourceVersion":"2091","creationTimestamp":"2024-03-18T13:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T13_13_23_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T13:13:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 3798 chars]
	I0318 13:13:27.613475    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400-m02
	I0318 13:13:27.613785    2404 round_trippers.go:469] Request Headers:
	I0318 13:13:27.613785    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:27.613863    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:13:27.618291    2404 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:13:27.618291    2404 round_trippers.go:577] Response Headers:
	I0318 13:13:27.618291    2404 round_trippers.go:580]     Audit-Id: 70ba5047-e870-4a57-b71e-b01cb28d82bd
	I0318 13:13:27.618291    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:13:27.618291    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:13:27.618291    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:13:27.618291    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:13:27.618291    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:13:27 GMT
	I0318 13:13:27.618291    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"1487daaa-7c0a-4f25-84c8-3e409d8d04b7","resourceVersion":"2091","creationTimestamp":"2024-03-18T13:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T13_13_23_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T13:13:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 3798 chars]
	I0318 13:13:28.115677    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400-m02
	I0318 13:13:28.115811    2404 round_trippers.go:469] Request Headers:
	I0318 13:13:28.115811    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:28.115811    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:13:28.119547    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:13:28.120169    2404 round_trippers.go:577] Response Headers:
	I0318 13:13:28.120169    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:13:28.120169    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:13:28 GMT
	I0318 13:13:28.120169    2404 round_trippers.go:580]     Audit-Id: 2b446069-c50e-4406-95f0-3498f32767e0
	I0318 13:13:28.120169    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:13:28.120169    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:13:28.120169    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:13:28.120169    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"1487daaa-7c0a-4f25-84c8-3e409d8d04b7","resourceVersion":"2091","creationTimestamp":"2024-03-18T13:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T13_13_23_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T13:13:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 3798 chars]
	I0318 13:13:28.616315    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400-m02
	I0318 13:13:28.616315    2404 round_trippers.go:469] Request Headers:
	I0318 13:13:28.616315    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:28.616315    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:13:28.621060    2404 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:13:28.621060    2404 round_trippers.go:577] Response Headers:
	I0318 13:13:28.621060    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:13:28.621060    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:13:28 GMT
	I0318 13:13:28.621060    2404 round_trippers.go:580]     Audit-Id: fafb9b4a-fe09-4837-9a0f-d13954a01d59
	I0318 13:13:28.621060    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:13:28.621060    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:13:28.621060    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:13:28.621060    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"1487daaa-7c0a-4f25-84c8-3e409d8d04b7","resourceVersion":"2091","creationTimestamp":"2024-03-18T13:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T13_13_23_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T13:13:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 3798 chars]
	I0318 13:13:28.621835    2404 node_ready.go:53] node "multinode-894400-m02" has status "Ready":"False"
	I0318 13:13:29.113875    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400-m02
	I0318 13:13:29.113875    2404 round_trippers.go:469] Request Headers:
	I0318 13:13:29.113875    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:29.113875    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:13:29.117497    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:13:29.117497    2404 round_trippers.go:577] Response Headers:
	I0318 13:13:29.117497    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:13:29.117497    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:13:29.117497    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:13:29.117497    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:13:29 GMT
	I0318 13:13:29.117497    2404 round_trippers.go:580]     Audit-Id: f5de7786-47fe-4d19-b5dd-52c684437af5
	I0318 13:13:29.117969    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:13:29.118770    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"1487daaa-7c0a-4f25-84c8-3e409d8d04b7","resourceVersion":"2091","creationTimestamp":"2024-03-18T13:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T13_13_23_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T13:13:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 3798 chars]
	I0318 13:13:29.613909    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400-m02
	I0318 13:13:29.613909    2404 round_trippers.go:469] Request Headers:
	I0318 13:13:29.613909    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:29.613909    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:13:29.617534    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:13:29.617534    2404 round_trippers.go:577] Response Headers:
	I0318 13:13:29.617534    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:13:29.617534    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:13:29 GMT
	I0318 13:13:29.617534    2404 round_trippers.go:580]     Audit-Id: 9d413313-522f-47a1-b82f-34955fa29ee3
	I0318 13:13:29.617534    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:13:29.617534    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:13:29.617534    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:13:29.618868    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"1487daaa-7c0a-4f25-84c8-3e409d8d04b7","resourceVersion":"2091","creationTimestamp":"2024-03-18T13:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T13_13_23_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T13:13:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 3798 chars]
	I0318 13:13:30.114768    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400-m02
	I0318 13:13:30.114768    2404 round_trippers.go:469] Request Headers:
	I0318 13:13:30.114768    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:30.114768    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:13:30.118263    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:13:30.119024    2404 round_trippers.go:577] Response Headers:
	I0318 13:13:30.119024    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:13:30.119024    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:13:30 GMT
	I0318 13:13:30.119024    2404 round_trippers.go:580]     Audit-Id: 2482aebf-d42c-41fa-b462-8cc0ef08d122
	I0318 13:13:30.119024    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:13:30.119024    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:13:30.119024    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:13:30.119328    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"1487daaa-7c0a-4f25-84c8-3e409d8d04b7","resourceVersion":"2091","creationTimestamp":"2024-03-18T13:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T13_13_23_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T13:13:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 3798 chars]
	I0318 13:13:30.616342    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400-m02
	I0318 13:13:30.616569    2404 round_trippers.go:469] Request Headers:
	I0318 13:13:30.616569    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:30.616569    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:13:30.622663    2404 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0318 13:13:30.622663    2404 round_trippers.go:577] Response Headers:
	I0318 13:13:30.622663    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:13:30.622663    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:13:30.622663    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:13:30.622663    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:13:30 GMT
	I0318 13:13:30.622663    2404 round_trippers.go:580]     Audit-Id: 05d1fce0-5387-41d5-a701-89f5b95394f7
	I0318 13:13:30.622663    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:13:30.622663    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"1487daaa-7c0a-4f25-84c8-3e409d8d04b7","resourceVersion":"2091","creationTimestamp":"2024-03-18T13:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T13_13_23_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T13:13:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 3798 chars]
	I0318 13:13:30.623316    2404 node_ready.go:53] node "multinode-894400-m02" has status "Ready":"False"
	I0318 13:13:31.119648    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400-m02
	I0318 13:13:31.119710    2404 round_trippers.go:469] Request Headers:
	I0318 13:13:31.119710    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:31.119710    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:13:31.122529    2404 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 13:13:31.122529    2404 round_trippers.go:577] Response Headers:
	I0318 13:13:31.123469    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:13:31.123469    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:13:31.123509    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:13:31.123509    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:13:31.123509    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:13:31 GMT
	I0318 13:13:31.123509    2404 round_trippers.go:580]     Audit-Id: cb031b54-5c56-4d99-a5fd-4b6a1bc58270
	I0318 13:13:31.123649    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"1487daaa-7c0a-4f25-84c8-3e409d8d04b7","resourceVersion":"2091","creationTimestamp":"2024-03-18T13:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T13_13_23_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T13:13:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 3798 chars]
	I0318 13:13:31.619906    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400-m02
	I0318 13:13:31.619906    2404 round_trippers.go:469] Request Headers:
	I0318 13:13:31.619906    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:13:31.619906    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:31.623897    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:13:31.624267    2404 round_trippers.go:577] Response Headers:
	I0318 13:13:31.624331    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:13:31.624355    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:13:31.624355    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:13:31 GMT
	I0318 13:13:31.624355    2404 round_trippers.go:580]     Audit-Id: 67715e89-b45f-486a-9d1b-138cdef26ef1
	I0318 13:13:31.624355    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:13:31.624355    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:13:31.624511    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"1487daaa-7c0a-4f25-84c8-3e409d8d04b7","resourceVersion":"2091","creationTimestamp":"2024-03-18T13:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T13_13_23_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T13:13:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 3798 chars]
	I0318 13:13:32.121948    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400-m02
	I0318 13:13:32.121948    2404 round_trippers.go:469] Request Headers:
	I0318 13:13:32.121948    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:32.121948    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:13:32.127199    2404 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 13:13:32.127199    2404 round_trippers.go:577] Response Headers:
	I0318 13:13:32.127199    2404 round_trippers.go:580]     Audit-Id: 44222899-2f66-4a76-b163-47b47757dd66
	I0318 13:13:32.127199    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:13:32.127199    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:13:32.127199    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:13:32.127199    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:13:32.127199    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:13:32 GMT
	I0318 13:13:32.127743    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"1487daaa-7c0a-4f25-84c8-3e409d8d04b7","resourceVersion":"2091","creationTimestamp":"2024-03-18T13:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T13_13_23_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T13:13:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 3798 chars]
	I0318 13:13:32.622765    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400-m02
	I0318 13:13:32.622956    2404 round_trippers.go:469] Request Headers:
	I0318 13:13:32.622956    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:32.622956    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:13:32.627711    2404 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:13:32.627711    2404 round_trippers.go:577] Response Headers:
	I0318 13:13:32.627711    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:13:32.627711    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:13:32.627711    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:13:32.627711    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:13:32 GMT
	I0318 13:13:32.627711    2404 round_trippers.go:580]     Audit-Id: 514238c1-0e4a-4324-b80b-60e8e1760dfe
	I0318 13:13:32.627711    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:13:32.628400    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"1487daaa-7c0a-4f25-84c8-3e409d8d04b7","resourceVersion":"2091","creationTimestamp":"2024-03-18T13:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T13_13_23_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T13:13:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 3798 chars]
	I0318 13:13:32.628564    2404 node_ready.go:53] node "multinode-894400-m02" has status "Ready":"False"
	I0318 13:13:33.121422    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400-m02
	I0318 13:13:33.121479    2404 round_trippers.go:469] Request Headers:
	I0318 13:13:33.121479    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:13:33.121479    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:33.124951    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:13:33.125216    2404 round_trippers.go:577] Response Headers:
	I0318 13:13:33.125216    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:13:33.125216    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:13:33 GMT
	I0318 13:13:33.125216    2404 round_trippers.go:580]     Audit-Id: 718cd416-cf7d-4086-bf57-da12aaa07725
	I0318 13:13:33.125216    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:13:33.125216    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:13:33.125216    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:13:33.125216    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"1487daaa-7c0a-4f25-84c8-3e409d8d04b7","resourceVersion":"2091","creationTimestamp":"2024-03-18T13:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T13_13_23_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T13:13:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 3798 chars]
	I0318 13:13:33.621412    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400-m02
	I0318 13:13:33.621412    2404 round_trippers.go:469] Request Headers:
	I0318 13:13:33.621412    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:13:33.621412    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:33.625536    2404 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:13:33.625871    2404 round_trippers.go:577] Response Headers:
	I0318 13:13:33.625871    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:13:33.625871    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:13:33.625871    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:13:33.625871    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:13:33 GMT
	I0318 13:13:33.625871    2404 round_trippers.go:580]     Audit-Id: 616d2d35-a081-41af-8e98-11cf91e587d9
	I0318 13:13:33.625871    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:13:33.626140    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"1487daaa-7c0a-4f25-84c8-3e409d8d04b7","resourceVersion":"2100","creationTimestamp":"2024-03-18T13:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T13_13_23_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T13:13:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4067 chars]
	I0318 13:13:34.122880    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400-m02
	I0318 13:13:34.122880    2404 round_trippers.go:469] Request Headers:
	I0318 13:13:34.123229    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:34.123229    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:13:34.127097    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:13:34.128177    2404 round_trippers.go:577] Response Headers:
	I0318 13:13:34.128177    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:13:34.128271    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:13:34.128271    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:13:34 GMT
	I0318 13:13:34.128271    2404 round_trippers.go:580]     Audit-Id: f5dac830-65f6-4cbc-8b5d-d4e4678d11f4
	I0318 13:13:34.128271    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:13:34.128271    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:13:34.128373    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"1487daaa-7c0a-4f25-84c8-3e409d8d04b7","resourceVersion":"2100","creationTimestamp":"2024-03-18T13:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T13_13_23_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T13:13:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4067 chars]
	I0318 13:13:34.609503    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400-m02
	I0318 13:13:34.609609    2404 round_trippers.go:469] Request Headers:
	I0318 13:13:34.609609    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:34.609609    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:13:34.612540    2404 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 13:13:34.613314    2404 round_trippers.go:577] Response Headers:
	I0318 13:13:34.613314    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:13:34.613314    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:13:34.613314    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:13:34.613314    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:13:34 GMT
	I0318 13:13:34.613314    2404 round_trippers.go:580]     Audit-Id: 41cfc544-6c0b-4a52-9224-408ac4574eef
	I0318 13:13:34.613314    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:13:34.613551    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"1487daaa-7c0a-4f25-84c8-3e409d8d04b7","resourceVersion":"2100","creationTimestamp":"2024-03-18T13:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T13_13_23_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T13:13:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4067 chars]
	I0318 13:13:35.110550    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400-m02
	I0318 13:13:35.110550    2404 round_trippers.go:469] Request Headers:
	I0318 13:13:35.110550    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:13:35.110550    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:35.115218    2404 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:13:35.115218    2404 round_trippers.go:577] Response Headers:
	I0318 13:13:35.115218    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:13:35.115990    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:13:35 GMT
	I0318 13:13:35.115990    2404 round_trippers.go:580]     Audit-Id: 19515f85-26c7-45f1-8ec1-21ad333498e2
	I0318 13:13:35.115990    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:13:35.115990    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:13:35.115990    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:13:35.116236    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"1487daaa-7c0a-4f25-84c8-3e409d8d04b7","resourceVersion":"2100","creationTimestamp":"2024-03-18T13:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T13_13_23_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T13:13:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4067 chars]
	I0318 13:13:35.116298    2404 node_ready.go:53] node "multinode-894400-m02" has status "Ready":"False"
	I0318 13:13:35.616647    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400-m02
	I0318 13:13:35.616647    2404 round_trippers.go:469] Request Headers:
	I0318 13:13:35.616727    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:35.616727    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:13:35.621075    2404 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:13:35.621075    2404 round_trippers.go:577] Response Headers:
	I0318 13:13:35.621075    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:13:35.621075    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:13:35.621461    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:13:35.621461    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:13:35.621461    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:13:35 GMT
	I0318 13:13:35.621461    2404 round_trippers.go:580]     Audit-Id: 3eee8fd8-677f-4eaa-9b8a-3fefcc79376e
	I0318 13:13:35.621638    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"1487daaa-7c0a-4f25-84c8-3e409d8d04b7","resourceVersion":"2100","creationTimestamp":"2024-03-18T13:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T13_13_23_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T13:13:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4067 chars]
	I0318 13:13:36.115751    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400-m02
	I0318 13:13:36.115751    2404 round_trippers.go:469] Request Headers:
	I0318 13:13:36.115751    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:36.115751    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:13:36.119366    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:13:36.119366    2404 round_trippers.go:577] Response Headers:
	I0318 13:13:36.119366    2404 round_trippers.go:580]     Audit-Id: 6b7d0bc0-35a8-45ea-b614-03006af9b462
	I0318 13:13:36.120267    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:13:36.120267    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:13:36.120267    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:13:36.120267    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:13:36.120267    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:13:36 GMT
	I0318 13:13:36.120469    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"1487daaa-7c0a-4f25-84c8-3e409d8d04b7","resourceVersion":"2100","creationTimestamp":"2024-03-18T13:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T13_13_23_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T13:13:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4067 chars]
	I0318 13:13:36.614750    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400-m02
	I0318 13:13:36.615098    2404 round_trippers.go:469] Request Headers:
	I0318 13:13:36.615098    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:36.615098    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:13:36.618473    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:13:36.619301    2404 round_trippers.go:577] Response Headers:
	I0318 13:13:36.619301    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:13:36.619301    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:13:36.619301    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:13:36.619301    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:13:36 GMT
	I0318 13:13:36.619301    2404 round_trippers.go:580]     Audit-Id: 862c2b71-f953-4c63-8ab2-9992de76b53c
	I0318 13:13:36.619301    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:13:36.619420    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"1487daaa-7c0a-4f25-84c8-3e409d8d04b7","resourceVersion":"2100","creationTimestamp":"2024-03-18T13:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T13_13_23_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T13:13:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4067 chars]
	I0318 13:13:37.117864    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400-m02
	I0318 13:13:37.118162    2404 round_trippers.go:469] Request Headers:
	I0318 13:13:37.118233    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:37.118233    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:13:37.122044    2404 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 13:13:37.122044    2404 round_trippers.go:577] Response Headers:
	I0318 13:13:37.122044    2404 round_trippers.go:580]     Audit-Id: 83ac0fb8-2102-4e7a-becf-ea9389ef8992
	I0318 13:13:37.122044    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:13:37.122044    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:13:37.122044    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:13:37.122044    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:13:37.122044    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:13:37 GMT
	I0318 13:13:37.122378    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"1487daaa-7c0a-4f25-84c8-3e409d8d04b7","resourceVersion":"2100","creationTimestamp":"2024-03-18T13:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T13_13_23_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T13:13:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4067 chars]
	I0318 13:13:37.122378    2404 node_ready.go:53] node "multinode-894400-m02" has status "Ready":"False"
	I0318 13:13:37.617772    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400-m02
	I0318 13:13:37.617772    2404 round_trippers.go:469] Request Headers:
	I0318 13:13:37.617772    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:37.617772    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:13:37.622386    2404 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:13:37.622443    2404 round_trippers.go:577] Response Headers:
	I0318 13:13:37.622443    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:13:37.622443    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:13:37 GMT
	I0318 13:13:37.622443    2404 round_trippers.go:580]     Audit-Id: c0cf4bf2-6f45-46e7-80d5-3d2028ec8c65
	I0318 13:13:37.622443    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:13:37.622443    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:13:37.622443    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:13:37.622443    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"1487daaa-7c0a-4f25-84c8-3e409d8d04b7","resourceVersion":"2100","creationTimestamp":"2024-03-18T13:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T13_13_23_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T13:13:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4067 chars]
	I0318 13:13:38.119675    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400-m02
	I0318 13:13:38.119985    2404 round_trippers.go:469] Request Headers:
	I0318 13:13:38.119985    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:38.119985    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:13:38.123358    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:13:38.123923    2404 round_trippers.go:577] Response Headers:
	I0318 13:13:38.123923    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:13:38.123923    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:13:38 GMT
	I0318 13:13:38.123923    2404 round_trippers.go:580]     Audit-Id: e739a76b-c25c-464c-a6ca-c75352a3eb07
	I0318 13:13:38.123923    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:13:38.123923    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:13:38.123923    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:13:38.124133    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"1487daaa-7c0a-4f25-84c8-3e409d8d04b7","resourceVersion":"2100","creationTimestamp":"2024-03-18T13:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T13_13_23_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T13:13:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4067 chars]
	I0318 13:13:38.622815    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400-m02
	I0318 13:13:38.622815    2404 round_trippers.go:469] Request Headers:
	I0318 13:13:38.622815    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:38.622815    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:13:38.627163    2404 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:13:38.627163    2404 round_trippers.go:577] Response Headers:
	I0318 13:13:38.627163    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:13:38.627283    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:13:38.627283    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:13:38 GMT
	I0318 13:13:38.627283    2404 round_trippers.go:580]     Audit-Id: 190b9418-ab0d-4758-8239-2e91b0350583
	I0318 13:13:38.627283    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:13:38.627283    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:13:38.627512    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"1487daaa-7c0a-4f25-84c8-3e409d8d04b7","resourceVersion":"2100","creationTimestamp":"2024-03-18T13:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T13_13_23_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T13:13:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4067 chars]
	I0318 13:13:39.112977    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400-m02
	I0318 13:13:39.113027    2404 round_trippers.go:469] Request Headers:
	I0318 13:13:39.113027    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:39.113061    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:13:39.117941    2404 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:13:39.117996    2404 round_trippers.go:577] Response Headers:
	I0318 13:13:39.118075    2404 round_trippers.go:580]     Audit-Id: b903b81c-0ae3-425a-a3a8-f1d6925a4168
	I0318 13:13:39.118075    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:13:39.118075    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:13:39.118075    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:13:39.118075    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:13:39.118075    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:13:39 GMT
	I0318 13:13:39.118245    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"1487daaa-7c0a-4f25-84c8-3e409d8d04b7","resourceVersion":"2100","creationTimestamp":"2024-03-18T13:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T13_13_23_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T13:13:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4067 chars]
	I0318 13:13:39.621617    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400-m02
	I0318 13:13:39.621617    2404 round_trippers.go:469] Request Headers:
	I0318 13:13:39.621693    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:39.621693    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:13:39.625394    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:13:39.625394    2404 round_trippers.go:577] Response Headers:
	I0318 13:13:39.625517    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:13:39.625517    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:13:39.625517    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:13:39.625517    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:13:39.625517    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:13:39 GMT
	I0318 13:13:39.625517    2404 round_trippers.go:580]     Audit-Id: 3acd5a4c-9d63-450b-9518-436d8e7d1d56
	I0318 13:13:39.625631    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"1487daaa-7c0a-4f25-84c8-3e409d8d04b7","resourceVersion":"2100","creationTimestamp":"2024-03-18T13:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T13_13_23_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T13:13:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4067 chars]
	I0318 13:13:39.625631    2404 node_ready.go:53] node "multinode-894400-m02" has status "Ready":"False"
	I0318 13:13:40.123388    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400-m02
	I0318 13:13:40.123570    2404 round_trippers.go:469] Request Headers:
	I0318 13:13:40.123570    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:40.123570    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:13:40.126929    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:13:40.127361    2404 round_trippers.go:577] Response Headers:
	I0318 13:13:40.127361    2404 round_trippers.go:580]     Audit-Id: 7a8be5f9-3424-448b-80b5-314ba785c370
	I0318 13:13:40.127361    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:13:40.127361    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:13:40.127361    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:13:40.127361    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:13:40.127361    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:13:40 GMT
	I0318 13:13:40.127506    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"1487daaa-7c0a-4f25-84c8-3e409d8d04b7","resourceVersion":"2100","creationTimestamp":"2024-03-18T13:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T13_13_23_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T13:13:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4067 chars]
	I0318 13:13:40.610897    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400-m02
	I0318 13:13:40.610897    2404 round_trippers.go:469] Request Headers:
	I0318 13:13:40.610897    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:40.610897    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:13:40.614728    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:13:40.614728    2404 round_trippers.go:577] Response Headers:
	I0318 13:13:40.614728    2404 round_trippers.go:580]     Audit-Id: e3b35c00-4b24-409c-a3c3-892a92806813
	I0318 13:13:40.614728    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:13:40.614728    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:13:40.614728    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:13:40.614728    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:13:40.614728    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:13:40 GMT
	I0318 13:13:40.615719    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"1487daaa-7c0a-4f25-84c8-3e409d8d04b7","resourceVersion":"2100","creationTimestamp":"2024-03-18T13:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T13_13_23_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T13:13:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4067 chars]
	I0318 13:13:41.110727    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400-m02
	I0318 13:13:41.110801    2404 round_trippers.go:469] Request Headers:
	I0318 13:13:41.110801    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:41.110801    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:13:41.116452    2404 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 13:13:41.117323    2404 round_trippers.go:577] Response Headers:
	I0318 13:13:41.117323    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:13:41.117323    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:13:41.117517    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:13:41.117538    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:13:41 GMT
	I0318 13:13:41.117538    2404 round_trippers.go:580]     Audit-Id: e018457c-3925-4e36-bdbd-f7443bc127c1
	I0318 13:13:41.117538    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:13:41.117718    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"1487daaa-7c0a-4f25-84c8-3e409d8d04b7","resourceVersion":"2100","creationTimestamp":"2024-03-18T13:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T13_13_23_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T13:13:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4067 chars]
	I0318 13:13:41.611456    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400-m02
	I0318 13:13:41.611553    2404 round_trippers.go:469] Request Headers:
	I0318 13:13:41.611553    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:41.611553    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:13:41.613902    2404 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 13:13:41.613902    2404 round_trippers.go:577] Response Headers:
	I0318 13:13:41.613902    2404 round_trippers.go:580]     Audit-Id: eb9c7f10-7928-4ab8-a087-bfa9017c40e2
	I0318 13:13:41.613902    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:13:41.613902    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:13:41.613902    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:13:41.613902    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:13:41.613902    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:13:41 GMT
	I0318 13:13:41.614689    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"1487daaa-7c0a-4f25-84c8-3e409d8d04b7","resourceVersion":"2100","creationTimestamp":"2024-03-18T13:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T13_13_23_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T13:13:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4067 chars]
	I0318 13:13:42.114274    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400-m02
	I0318 13:13:42.114659    2404 round_trippers.go:469] Request Headers:
	I0318 13:13:42.114659    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:42.114659    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:13:42.119245    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:13:42.119245    2404 round_trippers.go:577] Response Headers:
	I0318 13:13:42.119245    2404 round_trippers.go:580]     Audit-Id: 42ad8670-62e4-4707-ae8c-1541b2be3895
	I0318 13:13:42.119245    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:13:42.119245    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:13:42.119245    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:13:42.119245    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:13:42.119245    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:13:42 GMT
	I0318 13:13:42.119245    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"1487daaa-7c0a-4f25-84c8-3e409d8d04b7","resourceVersion":"2100","creationTimestamp":"2024-03-18T13:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T13_13_23_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T13:13:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4067 chars]
	I0318 13:13:42.120087    2404 node_ready.go:53] node "multinode-894400-m02" has status "Ready":"False"
	I0318 13:13:42.616670    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400-m02
	I0318 13:13:42.616670    2404 round_trippers.go:469] Request Headers:
	I0318 13:13:42.616670    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:42.616670    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:13:42.622040    2404 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 13:13:42.622652    2404 round_trippers.go:577] Response Headers:
	I0318 13:13:42.622652    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:13:42.622652    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:13:42.622652    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:13:42 GMT
	I0318 13:13:42.622652    2404 round_trippers.go:580]     Audit-Id: 2e99fa7d-5c6b-4c0e-b0d6-11340a5119b6
	I0318 13:13:42.622652    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:13:42.622652    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:13:42.622845    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"1487daaa-7c0a-4f25-84c8-3e409d8d04b7","resourceVersion":"2100","creationTimestamp":"2024-03-18T13:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T13_13_23_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T13:13:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4067 chars]
	I0318 13:13:43.116411    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400-m02
	I0318 13:13:43.116411    2404 round_trippers.go:469] Request Headers:
	I0318 13:13:43.116499    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:43.116499    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:13:43.118993    2404 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 13:13:43.118993    2404 round_trippers.go:577] Response Headers:
	I0318 13:13:43.118993    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:13:43.118993    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:13:43 GMT
	I0318 13:13:43.118993    2404 round_trippers.go:580]     Audit-Id: 80640aba-3ce5-406e-b596-70c04dd55bed
	I0318 13:13:43.119462    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:13:43.119462    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:13:43.119462    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:13:43.119636    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"1487daaa-7c0a-4f25-84c8-3e409d8d04b7","resourceVersion":"2100","creationTimestamp":"2024-03-18T13:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T13_13_23_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T13:13:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4067 chars]
	I0318 13:13:43.617543    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400-m02
	I0318 13:13:43.617543    2404 round_trippers.go:469] Request Headers:
	I0318 13:13:43.617543    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:43.617543    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:13:43.621804    2404 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:13:43.621804    2404 round_trippers.go:577] Response Headers:
	I0318 13:13:43.621804    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:13:43.622825    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:13:43.622856    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:13:43.622856    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:13:43 GMT
	I0318 13:13:43.622856    2404 round_trippers.go:580]     Audit-Id: f752aa2f-ef65-437c-947a-fcaad58e9b0e
	I0318 13:13:43.622856    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:13:43.623131    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"1487daaa-7c0a-4f25-84c8-3e409d8d04b7","resourceVersion":"2100","creationTimestamp":"2024-03-18T13:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T13_13_23_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T13:13:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4067 chars]
	I0318 13:13:44.119455    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400-m02
	I0318 13:13:44.119693    2404 round_trippers.go:469] Request Headers:
	I0318 13:13:44.119693    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:44.119693    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:13:44.129736    2404 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0318 13:13:44.129736    2404 round_trippers.go:577] Response Headers:
	I0318 13:13:44.129736    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:13:44.129736    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:13:44 GMT
	I0318 13:13:44.129736    2404 round_trippers.go:580]     Audit-Id: 377b99f8-c303-4aa0-ac63-b6da62fd5282
	I0318 13:13:44.129736    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:13:44.129736    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:13:44.129736    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:13:44.129946    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"1487daaa-7c0a-4f25-84c8-3e409d8d04b7","resourceVersion":"2100","creationTimestamp":"2024-03-18T13:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T13_13_23_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T13:13:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4067 chars]
	I0318 13:13:44.130399    2404 node_ready.go:53] node "multinode-894400-m02" has status "Ready":"False"
	I0318 13:13:44.621784    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400-m02
	I0318 13:13:44.621784    2404 round_trippers.go:469] Request Headers:
	I0318 13:13:44.621881    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:44.621881    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:13:44.625236    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:13:44.626129    2404 round_trippers.go:577] Response Headers:
	I0318 13:13:44.626129    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:13:44.626129    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:13:44 GMT
	I0318 13:13:44.626129    2404 round_trippers.go:580]     Audit-Id: bd41e20b-240a-4849-b7bc-6b40689ab12d
	I0318 13:13:44.626129    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:13:44.626129    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:13:44.626129    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:13:44.626282    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"1487daaa-7c0a-4f25-84c8-3e409d8d04b7","resourceVersion":"2100","creationTimestamp":"2024-03-18T13:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T13_13_23_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T13:13:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4067 chars]
	I0318 13:13:45.121055    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400-m02
	I0318 13:13:45.121055    2404 round_trippers.go:469] Request Headers:
	I0318 13:13:45.121055    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:45.121055    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:13:45.125681    2404 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:13:45.125802    2404 round_trippers.go:577] Response Headers:
	I0318 13:13:45.125802    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:13:45.125802    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:13:45.125802    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:13:45.125802    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:13:45 GMT
	I0318 13:13:45.125802    2404 round_trippers.go:580]     Audit-Id: 2ef43652-1c83-4fd5-a143-9c63154c5196
	I0318 13:13:45.125802    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:13:45.126007    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"1487daaa-7c0a-4f25-84c8-3e409d8d04b7","resourceVersion":"2100","creationTimestamp":"2024-03-18T13:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T13_13_23_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T13:13:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4067 chars]
	I0318 13:13:45.609779    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400-m02
	I0318 13:13:45.609779    2404 round_trippers.go:469] Request Headers:
	I0318 13:13:45.609779    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:45.609779    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:13:45.614176    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:13:45.614176    2404 round_trippers.go:577] Response Headers:
	I0318 13:13:45.614176    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:13:45.614269    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:13:45.614269    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:13:45.614269    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:13:45.614269    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:13:45 GMT
	I0318 13:13:45.614269    2404 round_trippers.go:580]     Audit-Id: 765ad066-a458-49dd-a27b-e2b70a05a1d5
	I0318 13:13:45.614456    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"1487daaa-7c0a-4f25-84c8-3e409d8d04b7","resourceVersion":"2100","creationTimestamp":"2024-03-18T13:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T13_13_23_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T13:13:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4067 chars]
	I0318 13:13:46.110426    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400-m02
	I0318 13:13:46.110641    2404 round_trippers.go:469] Request Headers:
	I0318 13:13:46.110641    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:46.110641    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:13:46.113899    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:13:46.113966    2404 round_trippers.go:577] Response Headers:
	I0318 13:13:46.113966    2404 round_trippers.go:580]     Audit-Id: 346a2bea-3219-425d-ae99-2474463fd0c5
	I0318 13:13:46.113966    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:13:46.113966    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:13:46.113966    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:13:46.113966    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:13:46.113966    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:13:46 GMT
	I0318 13:13:46.114205    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"1487daaa-7c0a-4f25-84c8-3e409d8d04b7","resourceVersion":"2100","creationTimestamp":"2024-03-18T13:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T13_13_23_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T13:13:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4067 chars]
	I0318 13:13:46.611621    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400-m02
	I0318 13:13:46.611621    2404 round_trippers.go:469] Request Headers:
	I0318 13:13:46.611621    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:46.611621    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:13:46.616497    2404 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:13:46.616497    2404 round_trippers.go:577] Response Headers:
	I0318 13:13:46.616497    2404 round_trippers.go:580]     Audit-Id: fd791b80-b134-49f6-acee-d89eba8b8226
	I0318 13:13:46.616497    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:13:46.616497    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:13:46.616497    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:13:46.616497    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:13:46.616497    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:13:46 GMT
	I0318 13:13:46.616497    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"1487daaa-7c0a-4f25-84c8-3e409d8d04b7","resourceVersion":"2100","creationTimestamp":"2024-03-18T13:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T13_13_23_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T13:13:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4067 chars]
	I0318 13:13:46.617240    2404 node_ready.go:53] node "multinode-894400-m02" has status "Ready":"False"
	I0318 13:13:47.111795    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400-m02
	I0318 13:13:47.111795    2404 round_trippers.go:469] Request Headers:
	I0318 13:13:47.111795    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:47.111795    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:13:47.116165    2404 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:13:47.116412    2404 round_trippers.go:577] Response Headers:
	I0318 13:13:47.116412    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:13:47.116412    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:13:47.116412    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:13:47 GMT
	I0318 13:13:47.116412    2404 round_trippers.go:580]     Audit-Id: 59ff85b6-3ee9-4e56-8eb0-070acfcfa8e7
	I0318 13:13:47.116412    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:13:47.116412    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:13:47.117032    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"1487daaa-7c0a-4f25-84c8-3e409d8d04b7","resourceVersion":"2100","creationTimestamp":"2024-03-18T13:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T13_13_23_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T13:13:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4067 chars]
	I0318 13:13:47.610124    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400-m02
	I0318 13:13:47.610124    2404 round_trippers.go:469] Request Headers:
	I0318 13:13:47.610124    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:47.610124    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:13:47.614778    2404 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:13:47.614778    2404 round_trippers.go:577] Response Headers:
	I0318 13:13:47.615187    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:13:47.615187    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:13:47 GMT
	I0318 13:13:47.615187    2404 round_trippers.go:580]     Audit-Id: 90c1b5b1-8c25-4603-9b44-50f506cf6924
	I0318 13:13:47.615240    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:13:47.615240    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:13:47.615265    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:13:47.615371    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"1487daaa-7c0a-4f25-84c8-3e409d8d04b7","resourceVersion":"2100","creationTimestamp":"2024-03-18T13:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T13_13_23_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T13:13:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4067 chars]
	I0318 13:13:48.110830    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400-m02
	I0318 13:13:48.110914    2404 round_trippers.go:469] Request Headers:
	I0318 13:13:48.110914    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:48.110914    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:13:48.114250    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:13:48.114880    2404 round_trippers.go:577] Response Headers:
	I0318 13:13:48.114880    2404 round_trippers.go:580]     Audit-Id: eefce027-4359-4ef1-b1ec-93aed862d1fe
	I0318 13:13:48.114880    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:13:48.114880    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:13:48.114880    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:13:48.114880    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:13:48.114956    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:13:48 GMT
	I0318 13:13:48.115113    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"1487daaa-7c0a-4f25-84c8-3e409d8d04b7","resourceVersion":"2100","creationTimestamp":"2024-03-18T13:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T13_13_23_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T13:13:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4067 chars]
	I0318 13:13:48.622801    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400-m02
	I0318 13:13:48.622801    2404 round_trippers.go:469] Request Headers:
	I0318 13:13:48.622895    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:48.622895    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:13:48.626732    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:13:48.626785    2404 round_trippers.go:577] Response Headers:
	I0318 13:13:48.626785    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:13:48.626785    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:13:48.626785    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:13:48 GMT
	I0318 13:13:48.626785    2404 round_trippers.go:580]     Audit-Id: 4f6c8c30-e5e6-40f0-8645-c84006e0e2f4
	I0318 13:13:48.626785    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:13:48.626785    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:13:48.626785    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"1487daaa-7c0a-4f25-84c8-3e409d8d04b7","resourceVersion":"2100","creationTimestamp":"2024-03-18T13:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T13_13_23_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T13:13:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4067 chars]
	I0318 13:13:48.627339    2404 node_ready.go:53] node "multinode-894400-m02" has status "Ready":"False"
	I0318 13:13:49.124182    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400-m02
	I0318 13:13:49.124182    2404 round_trippers.go:469] Request Headers:
	I0318 13:13:49.124182    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:49.124182    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:13:49.128154    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:13:49.128154    2404 round_trippers.go:577] Response Headers:
	I0318 13:13:49.128154    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:13:49.128154    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:13:49.128154    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:13:49 GMT
	I0318 13:13:49.128154    2404 round_trippers.go:580]     Audit-Id: 22b867bb-d0a8-43f6-8647-8d840af97e1d
	I0318 13:13:49.128154    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:13:49.128154    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:13:49.128441    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"1487daaa-7c0a-4f25-84c8-3e409d8d04b7","resourceVersion":"2100","creationTimestamp":"2024-03-18T13:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T13_13_23_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T13:13:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4067 chars]
	I0318 13:13:49.613709    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400-m02
	I0318 13:13:49.613709    2404 round_trippers.go:469] Request Headers:
	I0318 13:13:49.614058    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:49.614058    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:13:49.617951    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:13:49.618469    2404 round_trippers.go:577] Response Headers:
	I0318 13:13:49.618524    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:13:49.618524    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:13:49.618524    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:13:49.618524    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:13:49 GMT
	I0318 13:13:49.618524    2404 round_trippers.go:580]     Audit-Id: c88c6068-14ff-49e1-b4a4-94fd45316b56
	I0318 13:13:49.618524    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:13:49.618524    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"1487daaa-7c0a-4f25-84c8-3e409d8d04b7","resourceVersion":"2100","creationTimestamp":"2024-03-18T13:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T13_13_23_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T13:13:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4067 chars]
	I0318 13:13:50.116316    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400-m02
	I0318 13:13:50.116316    2404 round_trippers.go:469] Request Headers:
	I0318 13:13:50.116316    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:13:50.116316    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:50.119625    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:13:50.119625    2404 round_trippers.go:577] Response Headers:
	I0318 13:13:50.119625    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:13:50.120177    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:13:50 GMT
	I0318 13:13:50.120177    2404 round_trippers.go:580]     Audit-Id: 2084ebd7-e011-42fc-a6b7-262e1e4b3e1b
	I0318 13:13:50.120177    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:13:50.120177    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:13:50.120177    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:13:50.120333    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"1487daaa-7c0a-4f25-84c8-3e409d8d04b7","resourceVersion":"2100","creationTimestamp":"2024-03-18T13:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T13_13_23_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T13:13:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4067 chars]
	I0318 13:13:50.615197    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400-m02
	I0318 13:13:50.615197    2404 round_trippers.go:469] Request Headers:
	I0318 13:13:50.615197    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:50.615197    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:13:50.619177    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:13:50.619177    2404 round_trippers.go:577] Response Headers:
	I0318 13:13:50.619848    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:13:50.619848    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:13:50.619848    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:13:50.619848    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:13:50 GMT
	I0318 13:13:50.619848    2404 round_trippers.go:580]     Audit-Id: 32d4b324-484d-412f-a158-44ee00ca25a0
	I0318 13:13:50.619848    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:13:50.620051    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"1487daaa-7c0a-4f25-84c8-3e409d8d04b7","resourceVersion":"2100","creationTimestamp":"2024-03-18T13:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T13_13_23_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T13:13:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4067 chars]
	I0318 13:13:51.117535    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400-m02
	I0318 13:13:51.117535    2404 round_trippers.go:469] Request Headers:
	I0318 13:13:51.117535    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:51.117535    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:13:51.122115    2404 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:13:51.122568    2404 round_trippers.go:577] Response Headers:
	I0318 13:13:51.122568    2404 round_trippers.go:580]     Audit-Id: 427e7bd5-152f-4a4c-bc38-86b68645ee37
	I0318 13:13:51.122568    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:13:51.122568    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:13:51.122568    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:13:51.122568    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:13:51.122568    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:13:51 GMT
	I0318 13:13:51.122568    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"1487daaa-7c0a-4f25-84c8-3e409d8d04b7","resourceVersion":"2100","creationTimestamp":"2024-03-18T13:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T13_13_23_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T13:13:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4067 chars]
	I0318 13:13:51.123170    2404 node_ready.go:53] node "multinode-894400-m02" has status "Ready":"False"
	I0318 13:13:51.620065    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400-m02
	I0318 13:13:51.620065    2404 round_trippers.go:469] Request Headers:
	I0318 13:13:51.620065    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:51.620065    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:13:51.623346    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:13:51.624306    2404 round_trippers.go:577] Response Headers:
	I0318 13:13:51.624306    2404 round_trippers.go:580]     Audit-Id: 26e99b0d-3296-4036-b383-014fab93b189
	I0318 13:13:51.624306    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:13:51.624351    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:13:51.624351    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:13:51.624351    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:13:51.624351    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:13:51 GMT
	I0318 13:13:51.624568    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"1487daaa-7c0a-4f25-84c8-3e409d8d04b7","resourceVersion":"2100","creationTimestamp":"2024-03-18T13:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T13_13_23_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T13:13:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4067 chars]
	I0318 13:13:52.120817    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400-m02
	I0318 13:13:52.121037    2404 round_trippers.go:469] Request Headers:
	I0318 13:13:52.121037    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:52.121037    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:13:52.132503    2404 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0318 13:13:52.132503    2404 round_trippers.go:577] Response Headers:
	I0318 13:13:52.132503    2404 round_trippers.go:580]     Audit-Id: 550f38e1-f87d-429d-8fca-c1b656ee0400
	I0318 13:13:52.132503    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:13:52.133340    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:13:52.133340    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:13:52.133340    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:13:52.133391    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:13:52 GMT
	I0318 13:13:52.133499    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"1487daaa-7c0a-4f25-84c8-3e409d8d04b7","resourceVersion":"2100","creationTimestamp":"2024-03-18T13:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T13_13_23_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T13:13:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4067 chars]
	I0318 13:13:52.619055    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400-m02
	I0318 13:13:52.619055    2404 round_trippers.go:469] Request Headers:
	I0318 13:13:52.619055    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:52.619325    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:13:52.627408    2404 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0318 13:13:52.627408    2404 round_trippers.go:577] Response Headers:
	I0318 13:13:52.627408    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:13:52 GMT
	I0318 13:13:52.627408    2404 round_trippers.go:580]     Audit-Id: c39adb31-86ea-450d-839e-ce30faba7eec
	I0318 13:13:52.627408    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:13:52.627408    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:13:52.627408    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:13:52.627408    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:13:52.627408    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"1487daaa-7c0a-4f25-84c8-3e409d8d04b7","resourceVersion":"2100","creationTimestamp":"2024-03-18T13:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T13_13_23_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T13:13:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4067 chars]
	I0318 13:13:53.121080    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400-m02
	I0318 13:13:53.121147    2404 round_trippers.go:469] Request Headers:
	I0318 13:13:53.121147    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:53.121147    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:13:53.125116    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:13:53.125185    2404 round_trippers.go:577] Response Headers:
	I0318 13:13:53.125185    2404 round_trippers.go:580]     Audit-Id: 7cf63368-2e54-4321-b7ce-ae7a20bfa85a
	I0318 13:13:53.125185    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:13:53.125185    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:13:53.125185    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:13:53.125298    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:13:53.125298    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:13:53 GMT
	I0318 13:13:53.125298    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"1487daaa-7c0a-4f25-84c8-3e409d8d04b7","resourceVersion":"2100","creationTimestamp":"2024-03-18T13:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T13_13_23_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T13:13:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4067 chars]
	I0318 13:13:53.125298    2404 node_ready.go:53] node "multinode-894400-m02" has status "Ready":"False"
	I0318 13:13:53.609845    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400-m02
	I0318 13:13:53.609845    2404 round_trippers.go:469] Request Headers:
	I0318 13:13:53.609845    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:53.609845    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:13:53.614067    2404 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:13:53.614904    2404 round_trippers.go:577] Response Headers:
	I0318 13:13:53.614904    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:13:53.614904    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:13:53.614904    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:13:53.614904    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:13:53 GMT
	I0318 13:13:53.614904    2404 round_trippers.go:580]     Audit-Id: 0adf0c4f-4527-4543-8b20-a0fa432180c7
	I0318 13:13:53.614904    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:13:53.615119    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"1487daaa-7c0a-4f25-84c8-3e409d8d04b7","resourceVersion":"2100","creationTimestamp":"2024-03-18T13:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T13_13_23_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T13:13:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4067 chars]
	I0318 13:13:54.111943    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400-m02
	I0318 13:13:54.112001    2404 round_trippers.go:469] Request Headers:
	I0318 13:13:54.112057    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:54.112057    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:13:54.116318    2404 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:13:54.117434    2404 round_trippers.go:577] Response Headers:
	I0318 13:13:54.117434    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:13:54 GMT
	I0318 13:13:54.117434    2404 round_trippers.go:580]     Audit-Id: 1347467c-ea74-4ad9-8016-1992b2634c17
	I0318 13:13:54.117531    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:13:54.117531    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:13:54.117531    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:13:54.117531    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:13:54.117677    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"1487daaa-7c0a-4f25-84c8-3e409d8d04b7","resourceVersion":"2100","creationTimestamp":"2024-03-18T13:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T13_13_23_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T13:13:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4067 chars]
	I0318 13:13:54.613133    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400-m02
	I0318 13:13:54.613133    2404 round_trippers.go:469] Request Headers:
	I0318 13:13:54.613133    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:13:54.613133    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:54.617474    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:13:54.617550    2404 round_trippers.go:577] Response Headers:
	I0318 13:13:54.617550    2404 round_trippers.go:580]     Audit-Id: 6aa492a0-7e18-4370-ba1e-6c3fef3f434b
	I0318 13:13:54.617550    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:13:54.617550    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:13:54.617550    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:13:54.617550    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:13:54.617550    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:13:54 GMT
	I0318 13:13:54.617550    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"1487daaa-7c0a-4f25-84c8-3e409d8d04b7","resourceVersion":"2100","creationTimestamp":"2024-03-18T13:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T13_13_23_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T13:13:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4067 chars]
	I0318 13:13:55.115841    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400-m02
	I0318 13:13:55.116001    2404 round_trippers.go:469] Request Headers:
	I0318 13:13:55.116001    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:13:55.116001    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:55.119906    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:13:55.119906    2404 round_trippers.go:577] Response Headers:
	I0318 13:13:55.119906    2404 round_trippers.go:580]     Audit-Id: b71b23f0-bf3d-468c-9ae0-3c18a03427d6
	I0318 13:13:55.119906    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:13:55.119906    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:13:55.119906    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:13:55.119906    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:13:55.119906    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:13:55 GMT
	I0318 13:13:55.119906    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"1487daaa-7c0a-4f25-84c8-3e409d8d04b7","resourceVersion":"2100","creationTimestamp":"2024-03-18T13:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T13_13_23_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T13:13:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4067 chars]
	I0318 13:13:55.620230    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400-m02
	I0318 13:13:55.620230    2404 round_trippers.go:469] Request Headers:
	I0318 13:13:55.620230    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:55.620230    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:13:55.623994    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:13:55.623994    2404 round_trippers.go:577] Response Headers:
	I0318 13:13:55.623994    2404 round_trippers.go:580]     Audit-Id: db34228e-c138-47dd-b9a8-47ff512a0b1b
	I0318 13:13:55.624759    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:13:55.624759    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:13:55.624759    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:13:55.624759    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:13:55.624759    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:13:55 GMT
	I0318 13:13:55.624838    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"1487daaa-7c0a-4f25-84c8-3e409d8d04b7","resourceVersion":"2100","creationTimestamp":"2024-03-18T13:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T13_13_23_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T13:13:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4067 chars]
	I0318 13:13:55.624838    2404 node_ready.go:53] node "multinode-894400-m02" has status "Ready":"False"
	I0318 13:13:56.109514    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400-m02
	I0318 13:13:56.109514    2404 round_trippers.go:469] Request Headers:
	I0318 13:13:56.109514    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:56.109514    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:13:56.113010    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:13:56.113824    2404 round_trippers.go:577] Response Headers:
	I0318 13:13:56.113824    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:13:56.113824    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:13:56.113824    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:13:56 GMT
	I0318 13:13:56.113824    2404 round_trippers.go:580]     Audit-Id: 62833683-edc2-45dd-9202-f51eb2d73301
	I0318 13:13:56.113824    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:13:56.113824    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:13:56.114068    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"1487daaa-7c0a-4f25-84c8-3e409d8d04b7","resourceVersion":"2100","creationTimestamp":"2024-03-18T13:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T13_13_23_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T13:13:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4067 chars]
	I0318 13:13:56.612778    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400-m02
	I0318 13:13:56.612778    2404 round_trippers.go:469] Request Headers:
	I0318 13:13:56.612778    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:56.612778    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:13:56.616560    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:13:56.616560    2404 round_trippers.go:577] Response Headers:
	I0318 13:13:56.616560    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:13:56 GMT
	I0318 13:13:56.616560    2404 round_trippers.go:580]     Audit-Id: d2b28e6d-52e6-40f3-b593-1b02e368e9de
	I0318 13:13:56.617580    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:13:56.617580    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:13:56.617613    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:13:56.617613    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:13:56.617654    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"1487daaa-7c0a-4f25-84c8-3e409d8d04b7","resourceVersion":"2100","creationTimestamp":"2024-03-18T13:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T13_13_23_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T13:13:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4067 chars]
	I0318 13:13:57.113194    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400-m02
	I0318 13:13:57.113194    2404 round_trippers.go:469] Request Headers:
	I0318 13:13:57.113194    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:57.113194    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:13:57.117950    2404 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:13:57.117950    2404 round_trippers.go:577] Response Headers:
	I0318 13:13:57.117950    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:13:57.118105    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:13:57.118105    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:13:57.118105    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:13:57.118105    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:13:57 GMT
	I0318 13:13:57.118105    2404 round_trippers.go:580]     Audit-Id: 091f22d8-7e61-4a65-adc7-92eaf2423d29
	I0318 13:13:57.118292    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"1487daaa-7c0a-4f25-84c8-3e409d8d04b7","resourceVersion":"2100","creationTimestamp":"2024-03-18T13:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T13_13_23_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T13:13:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4067 chars]
	I0318 13:13:57.614194    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400-m02
	I0318 13:13:57.614432    2404 round_trippers.go:469] Request Headers:
	I0318 13:13:57.614432    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:57.614432    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:13:57.619955    2404 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 13:13:57.620034    2404 round_trippers.go:577] Response Headers:
	I0318 13:13:57.620034    2404 round_trippers.go:580]     Audit-Id: cd702ef5-5512-468b-b705-3d4bfdefe55f
	I0318 13:13:57.620034    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:13:57.620034    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:13:57.620034    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:13:57.620034    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:13:57.620034    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:13:57 GMT
	I0318 13:13:57.620204    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"1487daaa-7c0a-4f25-84c8-3e409d8d04b7","resourceVersion":"2100","creationTimestamp":"2024-03-18T13:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T13_13_23_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T13:13:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4067 chars]
	I0318 13:13:58.116792    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400-m02
	I0318 13:13:58.116792    2404 round_trippers.go:469] Request Headers:
	I0318 13:13:58.116792    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:58.116792    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:13:58.121622    2404 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:13:58.121622    2404 round_trippers.go:577] Response Headers:
	I0318 13:13:58.121622    2404 round_trippers.go:580]     Audit-Id: 9fb0d5e9-b8b0-4b22-9135-a563be7c693f
	I0318 13:13:58.121622    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:13:58.121622    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:13:58.121622    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:13:58.121622    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:13:58.121622    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:13:58 GMT
	I0318 13:13:58.121622    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"1487daaa-7c0a-4f25-84c8-3e409d8d04b7","resourceVersion":"2100","creationTimestamp":"2024-03-18T13:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T13_13_23_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T13:13:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4067 chars]
	I0318 13:13:58.122202    2404 node_ready.go:53] node "multinode-894400-m02" has status "Ready":"False"
	I0318 13:13:58.618796    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400-m02
	I0318 13:13:58.618796    2404 round_trippers.go:469] Request Headers:
	I0318 13:13:58.618796    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:58.618796    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:13:58.622128    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:13:58.622531    2404 round_trippers.go:577] Response Headers:
	I0318 13:13:58.622531    2404 round_trippers.go:580]     Audit-Id: 6e564437-be1a-4906-9e05-652aec7345ba
	I0318 13:13:58.622531    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:13:58.622531    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:13:58.622531    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:13:58.622531    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:13:58.622531    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:13:58 GMT
	I0318 13:13:58.622800    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"1487daaa-7c0a-4f25-84c8-3e409d8d04b7","resourceVersion":"2100","creationTimestamp":"2024-03-18T13:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T13_13_23_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T13:13:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4067 chars]
	I0318 13:13:59.120475    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400-m02
	I0318 13:13:59.120475    2404 round_trippers.go:469] Request Headers:
	I0318 13:13:59.120475    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:59.120475    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:13:59.124980    2404 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:13:59.125413    2404 round_trippers.go:577] Response Headers:
	I0318 13:13:59.125413    2404 round_trippers.go:580]     Audit-Id: 8a28825c-93d1-4df0-8ae4-da9c560a6d38
	I0318 13:13:59.125413    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:13:59.125413    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:13:59.125413    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:13:59.125413    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:13:59.125413    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:13:59 GMT
	I0318 13:13:59.125567    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"1487daaa-7c0a-4f25-84c8-3e409d8d04b7","resourceVersion":"2100","creationTimestamp":"2024-03-18T13:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T13_13_23_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T13:13:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4067 chars]
	I0318 13:13:59.623129    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400-m02
	I0318 13:13:59.623208    2404 round_trippers.go:469] Request Headers:
	I0318 13:13:59.623208    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:59.623208    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:13:59.627076    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:13:59.627276    2404 round_trippers.go:577] Response Headers:
	I0318 13:13:59.627276    2404 round_trippers.go:580]     Audit-Id: 56c89a6c-fd62-41db-ab83-a3e63440160a
	I0318 13:13:59.627276    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:13:59.627276    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:13:59.627276    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:13:59.627276    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:13:59.627276    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:13:59 GMT
	I0318 13:13:59.627453    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"1487daaa-7c0a-4f25-84c8-3e409d8d04b7","resourceVersion":"2100","creationTimestamp":"2024-03-18T13:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T13_13_23_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T13:13:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4067 chars]
	I0318 13:14:00.111367    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400-m02
	I0318 13:14:00.111591    2404 round_trippers.go:469] Request Headers:
	I0318 13:14:00.111591    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:14:00.111591    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:14:00.115522    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:14:00.115522    2404 round_trippers.go:577] Response Headers:
	I0318 13:14:00.115522    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:14:00 GMT
	I0318 13:14:00.115522    2404 round_trippers.go:580]     Audit-Id: 0a8cf84e-b415-40ca-be28-3c1b5ba4aae0
	I0318 13:14:00.115522    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:14:00.115522    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:14:00.115522    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:14:00.115522    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:14:00.116564    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"1487daaa-7c0a-4f25-84c8-3e409d8d04b7","resourceVersion":"2100","creationTimestamp":"2024-03-18T13:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T13_13_23_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T13:13:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4067 chars]
	I0318 13:14:00.609417    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400-m02
	I0318 13:14:00.609417    2404 round_trippers.go:469] Request Headers:
	I0318 13:14:00.609417    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:14:00.609417    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:14:00.612164    2404 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 13:14:00.613016    2404 round_trippers.go:577] Response Headers:
	I0318 13:14:00.613104    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:14:00.613104    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:14:00.613104    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:14:00 GMT
	I0318 13:14:00.613104    2404 round_trippers.go:580]     Audit-Id: f1898048-cb1a-4bd4-ad0a-5d636f58520f
	I0318 13:14:00.613250    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:14:00.613250    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:14:00.613250    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"1487daaa-7c0a-4f25-84c8-3e409d8d04b7","resourceVersion":"2100","creationTimestamp":"2024-03-18T13:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T13_13_23_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T13:13:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4067 chars]
	I0318 13:14:00.614297    2404 node_ready.go:53] node "multinode-894400-m02" has status "Ready":"False"
	I0318 13:14:01.108790    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400-m02
	I0318 13:14:01.108919    2404 round_trippers.go:469] Request Headers:
	I0318 13:14:01.108919    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:14:01.108919    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:14:01.112270    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:14:01.112435    2404 round_trippers.go:577] Response Headers:
	I0318 13:14:01.112435    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:14:01.112435    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:14:01.112435    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:14:01 GMT
	I0318 13:14:01.112435    2404 round_trippers.go:580]     Audit-Id: 35b4b404-477c-4a28-a1a3-ceb889a01d98
	I0318 13:14:01.112435    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:14:01.112435    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:14:01.112705    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"1487daaa-7c0a-4f25-84c8-3e409d8d04b7","resourceVersion":"2100","creationTimestamp":"2024-03-18T13:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T13_13_23_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T13:13:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4067 chars]
	I0318 13:14:01.623018    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400-m02
	I0318 13:14:01.623290    2404 round_trippers.go:469] Request Headers:
	I0318 13:14:01.623290    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:14:01.623290    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:14:01.627142    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:14:01.627142    2404 round_trippers.go:577] Response Headers:
	I0318 13:14:01.627142    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:14:01.627142    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:14:01.627142    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:14:01.627142    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:14:01.627142    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:14:01 GMT
	I0318 13:14:01.627142    2404 round_trippers.go:580]     Audit-Id: 5d716382-6cd3-46b9-83b6-2a62c163bc19
	I0318 13:14:01.627142    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"1487daaa-7c0a-4f25-84c8-3e409d8d04b7","resourceVersion":"2100","creationTimestamp":"2024-03-18T13:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T13_13_23_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T13:13:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4067 chars]
	I0318 13:14:02.109390    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400-m02
	I0318 13:14:02.109541    2404 round_trippers.go:469] Request Headers:
	I0318 13:14:02.109541    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:14:02.109541    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:14:02.115144    2404 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 13:14:02.115144    2404 round_trippers.go:577] Response Headers:
	I0318 13:14:02.115144    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:14:02.115144    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:14:02.115144    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:14:02.115144    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:14:02.115144    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:14:02 GMT
	I0318 13:14:02.115144    2404 round_trippers.go:580]     Audit-Id: 1eeae02d-dfcb-4091-af4d-03374edb0640
	I0318 13:14:02.115732    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"1487daaa-7c0a-4f25-84c8-3e409d8d04b7","resourceVersion":"2100","creationTimestamp":"2024-03-18T13:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T13_13_23_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T13:13:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4067 chars]
	I0318 13:14:02.610229    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400-m02
	I0318 13:14:02.610229    2404 round_trippers.go:469] Request Headers:
	I0318 13:14:02.610229    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:14:02.610229    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:14:02.613948    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:14:02.613948    2404 round_trippers.go:577] Response Headers:
	I0318 13:14:02.613948    2404 round_trippers.go:580]     Audit-Id: 2c2dfe54-4308-4dbc-88b6-e9345e8b6ebf
	I0318 13:14:02.613948    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:14:02.613948    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:14:02.614466    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:14:02.614466    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:14:02.614466    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:14:02 GMT
	I0318 13:14:02.614734    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"1487daaa-7c0a-4f25-84c8-3e409d8d04b7","resourceVersion":"2100","creationTimestamp":"2024-03-18T13:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T13_13_23_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T13:13:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4067 chars]
	I0318 13:14:02.614734    2404 node_ready.go:53] node "multinode-894400-m02" has status "Ready":"False"
	I0318 13:14:03.108846    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400-m02
	I0318 13:14:03.108846    2404 round_trippers.go:469] Request Headers:
	I0318 13:14:03.108846    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:14:03.108846    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:14:03.113783    2404 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:14:03.113783    2404 round_trippers.go:577] Response Headers:
	I0318 13:14:03.113783    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:14:03.113783    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:14:03.113783    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:14:03.113783    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:14:03 GMT
	I0318 13:14:03.113783    2404 round_trippers.go:580]     Audit-Id: 0055b9d6-514a-437a-a048-a14638a6fdb4
	I0318 13:14:03.113783    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:14:03.114771    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"1487daaa-7c0a-4f25-84c8-3e409d8d04b7","resourceVersion":"2100","creationTimestamp":"2024-03-18T13:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T13_13_23_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T13:13:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4067 chars]
	I0318 13:14:03.622716    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400-m02
	I0318 13:14:03.622716    2404 round_trippers.go:469] Request Headers:
	I0318 13:14:03.622716    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:14:03.622716    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:14:03.627535    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:14:03.627535    2404 round_trippers.go:577] Response Headers:
	I0318 13:14:03.627535    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:14:03.627626    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:14:03.627626    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:14:03.627626    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:14:03.627626    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:14:03 GMT
	I0318 13:14:03.627626    2404 round_trippers.go:580]     Audit-Id: 33342e5b-8ec6-4adf-ac6e-d2a0d9209059
	I0318 13:14:03.627727    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"1487daaa-7c0a-4f25-84c8-3e409d8d04b7","resourceVersion":"2100","creationTimestamp":"2024-03-18T13:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T13_13_23_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T13:13:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4067 chars]
	I0318 13:14:04.112978    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400-m02
	I0318 13:14:04.112978    2404 round_trippers.go:469] Request Headers:
	I0318 13:14:04.112978    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:14:04.112978    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:14:04.116371    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:14:04.116371    2404 round_trippers.go:577] Response Headers:
	I0318 13:14:04.117184    2404 round_trippers.go:580]     Audit-Id: ccbb5b25-f798-4f56-bcfc-bd9e9202ca01
	I0318 13:14:04.117184    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:14:04.117184    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:14:04.117184    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:14:04.117184    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:14:04.117184    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:14:04 GMT
	I0318 13:14:04.117257    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"1487daaa-7c0a-4f25-84c8-3e409d8d04b7","resourceVersion":"2100","creationTimestamp":"2024-03-18T13:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T13_13_23_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T13:13:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4067 chars]
	I0318 13:14:04.611132    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400-m02
	I0318 13:14:04.611257    2404 round_trippers.go:469] Request Headers:
	I0318 13:14:04.611257    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:14:04.611257    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:14:04.615492    2404 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:14:04.615492    2404 round_trippers.go:577] Response Headers:
	I0318 13:14:04.615492    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:14:04.615492    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:14:04.615492    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:14:04.615492    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:14:04 GMT
	I0318 13:14:04.615492    2404 round_trippers.go:580]     Audit-Id: ac261459-9275-4046-9599-0b14817ccf1b
	I0318 13:14:04.615492    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:14:04.615492    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"1487daaa-7c0a-4f25-84c8-3e409d8d04b7","resourceVersion":"2100","creationTimestamp":"2024-03-18T13:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T13_13_23_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T13:13:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4067 chars]
	I0318 13:14:04.615492    2404 node_ready.go:53] node "multinode-894400-m02" has status "Ready":"False"
	I0318 13:14:05.111537    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400-m02
	I0318 13:14:05.111537    2404 round_trippers.go:469] Request Headers:
	I0318 13:14:05.111671    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:14:05.111671    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:14:05.116346    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:14:05.116346    2404 round_trippers.go:577] Response Headers:
	I0318 13:14:05.116414    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:14:05.116414    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:14:05.116414    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:14:05 GMT
	I0318 13:14:05.116414    2404 round_trippers.go:580]     Audit-Id: 7365457d-4e0f-4b04-af0c-06789fd168df
	I0318 13:14:05.116414    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:14:05.116414    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:14:05.116577    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"1487daaa-7c0a-4f25-84c8-3e409d8d04b7","resourceVersion":"2100","creationTimestamp":"2024-03-18T13:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T13_13_23_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T13:13:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4067 chars]
	I0318 13:14:05.609299    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400-m02
	I0318 13:14:05.609299    2404 round_trippers.go:469] Request Headers:
	I0318 13:14:05.609299    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:14:05.609299    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:14:05.613082    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:14:05.613602    2404 round_trippers.go:577] Response Headers:
	I0318 13:14:05.613602    2404 round_trippers.go:580]     Audit-Id: 10ffed27-6b85-4f94-8c2c-936af2228599
	I0318 13:14:05.613602    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:14:05.613602    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:14:05.613602    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:14:05.613602    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:14:05.613602    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:14:05 GMT
	I0318 13:14:05.613884    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"1487daaa-7c0a-4f25-84c8-3e409d8d04b7","resourceVersion":"2100","creationTimestamp":"2024-03-18T13:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T13_13_23_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T13:13:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4067 chars]
	I0318 13:14:06.112506    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400-m02
	I0318 13:14:06.112599    2404 round_trippers.go:469] Request Headers:
	I0318 13:14:06.112599    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:14:06.112599    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:14:06.117098    2404 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:14:06.117815    2404 round_trippers.go:577] Response Headers:
	I0318 13:14:06.117815    2404 round_trippers.go:580]     Audit-Id: d32e0315-2298-4106-8b32-6af0adbdf275
	I0318 13:14:06.117815    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:14:06.117815    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:14:06.117815    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:14:06.117815    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:14:06.117815    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:14:06 GMT
	I0318 13:14:06.117976    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"1487daaa-7c0a-4f25-84c8-3e409d8d04b7","resourceVersion":"2100","creationTimestamp":"2024-03-18T13:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T13_13_23_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T13:13:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4067 chars]
	I0318 13:14:06.614978    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400-m02
	I0318 13:14:06.615043    2404 round_trippers.go:469] Request Headers:
	I0318 13:14:06.615102    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:14:06.615102    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:14:06.619215    2404 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:14:06.619215    2404 round_trippers.go:577] Response Headers:
	I0318 13:14:06.619215    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:14:06.619215    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:14:06 GMT
	I0318 13:14:06.619215    2404 round_trippers.go:580]     Audit-Id: acefe811-9ba9-465c-b424-9f589c7cdf27
	I0318 13:14:06.619215    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:14:06.619215    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:14:06.619215    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:14:06.619215    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"1487daaa-7c0a-4f25-84c8-3e409d8d04b7","resourceVersion":"2100","creationTimestamp":"2024-03-18T13:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T13_13_23_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T13:13:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4067 chars]
	I0318 13:14:06.619974    2404 node_ready.go:53] node "multinode-894400-m02" has status "Ready":"False"
	I0318 13:14:07.116365    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400-m02
	I0318 13:14:07.116365    2404 round_trippers.go:469] Request Headers:
	I0318 13:14:07.116365    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:14:07.116365    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:14:07.121778    2404 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 13:14:07.122509    2404 round_trippers.go:577] Response Headers:
	I0318 13:14:07.122592    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:14:07.122592    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:14:07.122592    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:14:07.122592    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:14:07.122592    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:14:07 GMT
	I0318 13:14:07.122592    2404 round_trippers.go:580]     Audit-Id: 5aa56302-b13e-438a-9f4f-f0d3ad1f44a4
	I0318 13:14:07.122699    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"1487daaa-7c0a-4f25-84c8-3e409d8d04b7","resourceVersion":"2100","creationTimestamp":"2024-03-18T13:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T13_13_23_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T13:13:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4067 chars]
	I0318 13:14:07.616754    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400-m02
	I0318 13:14:07.616754    2404 round_trippers.go:469] Request Headers:
	I0318 13:14:07.616857    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:14:07.616857    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:14:07.621134    2404 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:14:07.621134    2404 round_trippers.go:577] Response Headers:
	I0318 13:14:07.621134    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:14:07.621134    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:14:07.621134    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:14:07 GMT
	I0318 13:14:07.621134    2404 round_trippers.go:580]     Audit-Id: 2423a904-d47b-4c09-9a80-378d610a77d0
	I0318 13:14:07.621134    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:14:07.621134    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:14:07.621134    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"1487daaa-7c0a-4f25-84c8-3e409d8d04b7","resourceVersion":"2100","creationTimestamp":"2024-03-18T13:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T13_13_23_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T13:13:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4067 chars]
	I0318 13:14:08.115356    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400-m02
	I0318 13:14:08.115356    2404 round_trippers.go:469] Request Headers:
	I0318 13:14:08.115356    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:14:08.115356    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:14:08.119951    2404 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:14:08.119951    2404 round_trippers.go:577] Response Headers:
	I0318 13:14:08.120034    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:14:08.120034    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:14:08.120034    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:14:08.120034    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:14:08 GMT
	I0318 13:14:08.120034    2404 round_trippers.go:580]     Audit-Id: cff39d6e-951f-4292-ad92-60bda0598910
	I0318 13:14:08.120034    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:14:08.120034    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"1487daaa-7c0a-4f25-84c8-3e409d8d04b7","resourceVersion":"2100","creationTimestamp":"2024-03-18T13:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T13_13_23_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T13:13:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4067 chars]
	I0318 13:14:08.616002    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400-m02
	I0318 13:14:08.616002    2404 round_trippers.go:469] Request Headers:
	I0318 13:14:08.616105    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:14:08.616105    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:14:08.619814    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:14:08.619814    2404 round_trippers.go:577] Response Headers:
	I0318 13:14:08.619814    2404 round_trippers.go:580]     Audit-Id: fe5967e2-babd-4bfa-a9e2-6830630ad927
	I0318 13:14:08.619814    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:14:08.619814    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:14:08.619814    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:14:08.619814    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:14:08.619814    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:14:08 GMT
	I0318 13:14:08.620638    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"1487daaa-7c0a-4f25-84c8-3e409d8d04b7","resourceVersion":"2100","creationTimestamp":"2024-03-18T13:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T13_13_23_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T13:13:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4067 chars]
	I0318 13:14:08.620953    2404 node_ready.go:53] node "multinode-894400-m02" has status "Ready":"False"
	I0318 13:14:09.118296    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400-m02
	I0318 13:14:09.118296    2404 round_trippers.go:469] Request Headers:
	I0318 13:14:09.118296    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:14:09.118432    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:14:09.123888    2404 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 13:14:09.123888    2404 round_trippers.go:577] Response Headers:
	I0318 13:14:09.123888    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:14:09.123888    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:14:09 GMT
	I0318 13:14:09.123888    2404 round_trippers.go:580]     Audit-Id: 89aecdff-983f-459e-9d25-7681ef78506e
	I0318 13:14:09.123888    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:14:09.123888    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:14:09.123888    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:14:09.124864    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"1487daaa-7c0a-4f25-84c8-3e409d8d04b7","resourceVersion":"2100","creationTimestamp":"2024-03-18T13:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T13_13_23_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T13:13:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4067 chars]
	I0318 13:14:09.617498    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400-m02
	I0318 13:14:09.617578    2404 round_trippers.go:469] Request Headers:
	I0318 13:14:09.617578    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:14:09.617578    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:14:09.621060    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:14:09.621528    2404 round_trippers.go:577] Response Headers:
	I0318 13:14:09.621590    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:14:09.621590    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:14:09 GMT
	I0318 13:14:09.621590    2404 round_trippers.go:580]     Audit-Id: 85ce4e06-af1e-47d0-b724-5145eb22490b
	I0318 13:14:09.621590    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:14:09.621590    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:14:09.621590    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:14:09.621829    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"1487daaa-7c0a-4f25-84c8-3e409d8d04b7","resourceVersion":"2100","creationTimestamp":"2024-03-18T13:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T13_13_23_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T13:13:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4067 chars]
	I0318 13:14:10.118140    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400-m02
	I0318 13:14:10.118213    2404 round_trippers.go:469] Request Headers:
	I0318 13:14:10.118213    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:14:10.118213    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:14:10.121065    2404 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 13:14:10.122149    2404 round_trippers.go:577] Response Headers:
	I0318 13:14:10.122149    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:14:10 GMT
	I0318 13:14:10.122149    2404 round_trippers.go:580]     Audit-Id: a63c5583-c706-4a4f-8c15-b7d39e591b41
	I0318 13:14:10.122149    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:14:10.122149    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:14:10.122149    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:14:10.122149    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:14:10.122149    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"1487daaa-7c0a-4f25-84c8-3e409d8d04b7","resourceVersion":"2100","creationTimestamp":"2024-03-18T13:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T13_13_23_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T13:13:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4067 chars]
	I0318 13:14:10.615703    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400-m02
	I0318 13:14:10.615703    2404 round_trippers.go:469] Request Headers:
	I0318 13:14:10.615703    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:14:10.615703    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:14:10.620884    2404 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 13:14:10.621215    2404 round_trippers.go:577] Response Headers:
	I0318 13:14:10.621215    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:14:10.621215    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:14:10.621215    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:14:10.621215    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:14:10.621215    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:14:10 GMT
	I0318 13:14:10.621215    2404 round_trippers.go:580]     Audit-Id: 660719e6-9825-42ec-9a95-ac1372bcdbe3
	I0318 13:14:10.622041    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"1487daaa-7c0a-4f25-84c8-3e409d8d04b7","resourceVersion":"2100","creationTimestamp":"2024-03-18T13:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T13_13_23_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T13:13:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4067 chars]
	I0318 13:14:10.622041    2404 node_ready.go:53] node "multinode-894400-m02" has status "Ready":"False"
	I0318 13:14:11.115049    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400-m02
	I0318 13:14:11.115049    2404 round_trippers.go:469] Request Headers:
	I0318 13:14:11.115049    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:14:11.115049    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:14:11.118694    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:14:11.119683    2404 round_trippers.go:577] Response Headers:
	I0318 13:14:11.119703    2404 round_trippers.go:580]     Audit-Id: 5721efc7-a2b1-47ab-818d-532c411fe139
	I0318 13:14:11.119703    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:14:11.119703    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:14:11.119703    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:14:11.119703    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:14:11.119703    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:14:11 GMT
	I0318 13:14:11.119865    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"1487daaa-7c0a-4f25-84c8-3e409d8d04b7","resourceVersion":"2100","creationTimestamp":"2024-03-18T13:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T13_13_23_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T13:13:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4067 chars]
	I0318 13:14:11.616455    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400-m02
	I0318 13:14:11.616455    2404 round_trippers.go:469] Request Headers:
	I0318 13:14:11.616455    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:14:11.616455    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:14:11.625972    2404 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0318 13:14:11.625972    2404 round_trippers.go:577] Response Headers:
	I0318 13:14:11.625972    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:14:11.625972    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:14:11.625972    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:14:11.625972    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:14:11 GMT
	I0318 13:14:11.625972    2404 round_trippers.go:580]     Audit-Id: e8be0c01-5a88-4482-900f-b5ddcb063c2c
	I0318 13:14:11.625972    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:14:11.626948    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"1487daaa-7c0a-4f25-84c8-3e409d8d04b7","resourceVersion":"2152","creationTimestamp":"2024-03-18T13:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T13_13_23_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T13:13:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 3933 chars]
	I0318 13:14:11.626948    2404 node_ready.go:49] node "multinode-894400-m02" has status "Ready":"True"
	I0318 13:14:11.626948    2404 node_ready.go:38] duration metric: took 47.5187391s for node "multinode-894400-m02" to be "Ready" ...
	I0318 13:14:11.626948    2404 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 13:14:11.626948    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/namespaces/kube-system/pods
	I0318 13:14:11.626948    2404 round_trippers.go:469] Request Headers:
	I0318 13:14:11.626948    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:14:11.626948    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:14:11.637960    2404 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0318 13:14:11.637960    2404 round_trippers.go:577] Response Headers:
	I0318 13:14:11.637960    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:14:11.637960    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:14:11.637960    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:14:11.637960    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:14:11 GMT
	I0318 13:14:11.637960    2404 round_trippers.go:580]     Audit-Id: 40322691-ad10-4ef3-8af2-011491952fd4
	I0318 13:14:11.637960    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:14:11.640888    2404 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"2152"},"items":[{"metadata":{"name":"coredns-5dd5756b68-456tm","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"1a018c55-846b-4dc2-992c-dc8fd82a6c67","resourceVersion":"1918","creationTimestamp":"2024-03-18T12:47:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"24e9a67e-3fda-47e7-8dcb-794b1920d0f1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:47:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"24e9a67e-3fda-47e7-8dcb-794b1920d0f1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 82613 chars]
	I0318 13:14:11.644284    2404 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-456tm" in "kube-system" namespace to be "Ready" ...
	I0318 13:14:11.644284    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-456tm
	I0318 13:14:11.644284    2404 round_trippers.go:469] Request Headers:
	I0318 13:14:11.644284    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:14:11.644284    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:14:11.648934    2404 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:14:11.648934    2404 round_trippers.go:577] Response Headers:
	I0318 13:14:11.649784    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:14:11.649784    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:14:11 GMT
	I0318 13:14:11.649784    2404 round_trippers.go:580]     Audit-Id: 93b5f16f-e9dd-4dee-9595-8a4101fcf39b
	I0318 13:14:11.649784    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:14:11.649784    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:14:11.649784    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:14:11.650145    2404 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-456tm","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"1a018c55-846b-4dc2-992c-dc8fd82a6c67","resourceVersion":"1918","creationTimestamp":"2024-03-18T12:47:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"24e9a67e-3fda-47e7-8dcb-794b1920d0f1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:47:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"24e9a67e-3fda-47e7-8dcb-794b1920d0f1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6494 chars]
	I0318 13:14:11.650702    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:14:11.650751    2404 round_trippers.go:469] Request Headers:
	I0318 13:14:11.650751    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:14:11.650823    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:14:11.652975    2404 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 13:14:11.652975    2404 round_trippers.go:577] Response Headers:
	I0318 13:14:11.652975    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:14:11.652975    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:14:11.652975    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:14:11.652975    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:14:11 GMT
	I0318 13:14:11.652975    2404 round_trippers.go:580]     Audit-Id: f4a52d1f-24ad-42cd-b7da-81450a9b7b10
	I0318 13:14:11.652975    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:14:11.654024    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1877","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 13:14:11.654024    2404 pod_ready.go:92] pod "coredns-5dd5756b68-456tm" in "kube-system" namespace has status "Ready":"True"
	I0318 13:14:11.654024    2404 pod_ready.go:81] duration metric: took 9.7401ms for pod "coredns-5dd5756b68-456tm" in "kube-system" namespace to be "Ready" ...
	I0318 13:14:11.654024    2404 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-894400" in "kube-system" namespace to be "Ready" ...
	I0318 13:14:11.654024    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-894400
	I0318 13:14:11.654024    2404 round_trippers.go:469] Request Headers:
	I0318 13:14:11.654024    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:14:11.654024    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:14:11.658984    2404 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:14:11.658984    2404 round_trippers.go:577] Response Headers:
	I0318 13:14:11.659679    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:14:11.659704    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:14:11.659704    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:14:11.659704    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:14:11.659704    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:14:11 GMT
	I0318 13:14:11.659704    2404 round_trippers.go:580]     Audit-Id: d7f08ab6-973b-40d6-b2bd-258c37ded939
	I0318 13:14:11.659920    2404 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-894400","namespace":"kube-system","uid":"d4c040b9-a604-4a0d-80ee-7436541af60c","resourceVersion":"1841","creationTimestamp":"2024-03-18T13:09:49Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.30.130.156:2379","kubernetes.io/config.hash":"743a549b698f93b8586a236f83c90556","kubernetes.io/config.mirror":"743a549b698f93b8586a236f83c90556","kubernetes.io/config.seen":"2024-03-18T13:09:42.924670260Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T13:09:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise
-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 5873 chars]
	I0318 13:14:11.660524    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:14:11.660587    2404 round_trippers.go:469] Request Headers:
	I0318 13:14:11.660587    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:14:11.660587    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:14:11.665205    2404 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:14:11.665263    2404 round_trippers.go:577] Response Headers:
	I0318 13:14:11.665326    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:14:11.665326    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:14:11.665326    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:14:11.665326    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:14:11.665326    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:14:11 GMT
	I0318 13:14:11.665378    2404 round_trippers.go:580]     Audit-Id: 7ffe996e-b31e-438f-8816-42078c3ee8d3
	I0318 13:14:11.665888    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1877","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 13:14:11.666287    2404 pod_ready.go:92] pod "etcd-multinode-894400" in "kube-system" namespace has status "Ready":"True"
	I0318 13:14:11.666375    2404 pod_ready.go:81] duration metric: took 12.3506ms for pod "etcd-multinode-894400" in "kube-system" namespace to be "Ready" ...
	I0318 13:14:11.666375    2404 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-894400" in "kube-system" namespace to be "Ready" ...
	I0318 13:14:11.666494    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-894400
	I0318 13:14:11.666494    2404 round_trippers.go:469] Request Headers:
	I0318 13:14:11.666494    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:14:11.666494    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:14:11.674116    2404 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0318 13:14:11.674958    2404 round_trippers.go:577] Response Headers:
	I0318 13:14:11.674958    2404 round_trippers.go:580]     Audit-Id: 0a850913-bf78-407b-af9a-fa7f430ec082
	I0318 13:14:11.674958    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:14:11.674958    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:14:11.674958    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:14:11.674958    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:14:11.674958    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:14:11 GMT
	I0318 13:14:11.675219    2404 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-894400","namespace":"kube-system","uid":"46152b8e-0bda-427e-a1ad-c79506b56763","resourceVersion":"1812","creationTimestamp":"2024-03-18T13:09:49Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.30.130.156:8443","kubernetes.io/config.hash":"6096c2227c4230453f65f86ebdcd0d95","kubernetes.io/config.mirror":"6096c2227c4230453f65f86ebdcd0d95","kubernetes.io/config.seen":"2024-03-18T13:09:42.869643374Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T13:09:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.k
ubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernete [truncated 7409 chars]
	I0318 13:14:11.675219    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:14:11.675219    2404 round_trippers.go:469] Request Headers:
	I0318 13:14:11.675219    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:14:11.675219    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:14:11.678244    2404 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 13:14:11.678244    2404 round_trippers.go:577] Response Headers:
	I0318 13:14:11.678244    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:14:11.678244    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:14:11.678244    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:14:11.678321    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:14:11.678321    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:14:11 GMT
	I0318 13:14:11.678321    2404 round_trippers.go:580]     Audit-Id: afabb18c-3ffb-485a-984d-847c1ede82da
	I0318 13:14:11.678426    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1877","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 13:14:11.678822    2404 pod_ready.go:92] pod "kube-apiserver-multinode-894400" in "kube-system" namespace has status "Ready":"True"
	I0318 13:14:11.678893    2404 pod_ready.go:81] duration metric: took 12.5177ms for pod "kube-apiserver-multinode-894400" in "kube-system" namespace to be "Ready" ...
	I0318 13:14:11.678893    2404 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-894400" in "kube-system" namespace to be "Ready" ...
	I0318 13:14:11.678964    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-894400
	I0318 13:14:11.679029    2404 round_trippers.go:469] Request Headers:
	I0318 13:14:11.679029    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:14:11.679029    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:14:11.682114    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:14:11.682114    2404 round_trippers.go:577] Response Headers:
	I0318 13:14:11.682114    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:14:11.682114    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:14:11.682114    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:14:11.682114    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:14:11 GMT
	I0318 13:14:11.682114    2404 round_trippers.go:580]     Audit-Id: 29717c2b-8183-46a4-aa4b-ad5a916adccd
	I0318 13:14:11.682114    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:14:11.683105    2404 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-894400","namespace":"kube-system","uid":"4ad5fc15-53ba-4ebb-9a63-b8572cd9c834","resourceVersion":"1813","creationTimestamp":"2024-03-18T12:47:26Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"d340aced56ba169ecac1e3ac58ad57fe","kubernetes.io/config.mirror":"d340aced56ba169ecac1e3ac58ad57fe","kubernetes.io/config.seen":"2024-03-18T12:47:20.228444892Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:47:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7179 chars]
	I0318 13:14:11.683105    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:14:11.683105    2404 round_trippers.go:469] Request Headers:
	I0318 13:14:11.683105    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:14:11.683105    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:14:11.687124    2404 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:14:11.687124    2404 round_trippers.go:577] Response Headers:
	I0318 13:14:11.687124    2404 round_trippers.go:580]     Audit-Id: 2cb449ad-f368-4ed0-b70a-3b6aeda7e800
	I0318 13:14:11.687124    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:14:11.687124    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:14:11.687124    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:14:11.687124    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:14:11.687124    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:14:11 GMT
	I0318 13:14:11.687124    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1877","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 13:14:11.687124    2404 pod_ready.go:92] pod "kube-controller-manager-multinode-894400" in "kube-system" namespace has status "Ready":"True"
	I0318 13:14:11.687124    2404 pod_ready.go:81] duration metric: took 8.2309ms for pod "kube-controller-manager-multinode-894400" in "kube-system" namespace to be "Ready" ...
	I0318 13:14:11.688127    2404 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-745w9" in "kube-system" namespace to be "Ready" ...
	I0318 13:14:11.819536    2404 request.go:629] Waited for 131.4083ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.130.156:8443/api/v1/namespaces/kube-system/pods/kube-proxy-745w9
	I0318 13:14:11.820005    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/namespaces/kube-system/pods/kube-proxy-745w9
	I0318 13:14:11.820079    2404 round_trippers.go:469] Request Headers:
	I0318 13:14:11.820079    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:14:11.820079    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:14:11.822351    2404 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 13:14:11.823314    2404 round_trippers.go:577] Response Headers:
	I0318 13:14:11.823314    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:14:11.823314    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:14:11.823314    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:14:11.823314    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:14:11 GMT
	I0318 13:14:11.823314    2404 round_trippers.go:580]     Audit-Id: d079bbef-e86c-48a2-b5f8-75a6bc5e0870
	I0318 13:14:11.823314    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:14:11.823615    2404 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-745w9","generateName":"kube-proxy-","namespace":"kube-system","uid":"d385fe06-f516-440d-b9ed-37c2d4a81050","resourceVersion":"1698","creationTimestamp":"2024-03-18T12:55:05Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"da714af0-88aa-4fc4-b73d-4de837f06114","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:55:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"da714af0-88aa-4fc4-b73d-4de837f06114\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5771 chars]
	I0318 13:14:12.024607    2404 request.go:629] Waited for 200.3709ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.130.156:8443/api/v1/nodes/multinode-894400-m03
	I0318 13:14:12.024607    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400-m03
	I0318 13:14:12.024866    2404 round_trippers.go:469] Request Headers:
	I0318 13:14:12.024866    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:14:12.024866    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:14:12.027062    2404 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 13:14:12.028143    2404 round_trippers.go:577] Response Headers:
	I0318 13:14:12.028143    2404 round_trippers.go:580]     Audit-Id: f9ed5858-2d70-497f-afff-f805ba926149
	I0318 13:14:12.028143    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:14:12.028143    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:14:12.028143    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:14:12.028143    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:14:12.028143    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:14:12 GMT
	I0318 13:14:12.028143    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m03","uid":"1f8e594e-d4cc-4247-8064-01ac67ea2b15","resourceVersion":"1855","creationTimestamp":"2024-03-18T13:05:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T13_05_26_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T13:05:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4400 chars]
	I0318 13:14:12.028821    2404 pod_ready.go:97] node "multinode-894400-m03" hosting pod "kube-proxy-745w9" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-894400-m03" has status "Ready":"Unknown"
	I0318 13:14:12.028947    2404 pod_ready.go:81] duration metric: took 340.8181ms for pod "kube-proxy-745w9" in "kube-system" namespace to be "Ready" ...
	E0318 13:14:12.028947    2404 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-894400-m03" hosting pod "kube-proxy-745w9" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-894400-m03" has status "Ready":"Unknown"
	I0318 13:14:12.029001    2404 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-8bdmn" in "kube-system" namespace to be "Ready" ...
	I0318 13:14:12.228270    2404 request.go:629] Waited for 199.2674ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.130.156:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8bdmn
	I0318 13:14:12.228477    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8bdmn
	I0318 13:14:12.228477    2404 round_trippers.go:469] Request Headers:
	I0318 13:14:12.228477    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:14:12.228477    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:14:12.232048    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:14:12.232292    2404 round_trippers.go:577] Response Headers:
	I0318 13:14:12.232292    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:14:12.232292    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:14:12.232292    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:14:12.232292    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:14:12 GMT
	I0318 13:14:12.232292    2404 round_trippers.go:580]     Audit-Id: 20faeed7-e8f9-4e56-8283-f9b496110406
	I0318 13:14:12.232292    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:14:12.232896    2404 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-8bdmn","generateName":"kube-proxy-","namespace":"kube-system","uid":"5c266b8a-9665-4365-93c6-2b5f1699d3ef","resourceVersion":"2116","creationTimestamp":"2024-03-18T12:50:34Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"da714af0-88aa-4fc4-b73d-4de837f06114","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:50:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"da714af0-88aa-4fc4-b73d-4de837f06114\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5546 chars]
	I0318 13:14:12.431342    2404 request.go:629] Waited for 197.7133ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.130.156:8443/api/v1/nodes/multinode-894400-m02
	I0318 13:14:12.431577    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400-m02
	I0318 13:14:12.431577    2404 round_trippers.go:469] Request Headers:
	I0318 13:14:12.431577    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:14:12.431577    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:14:12.435315    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:14:12.435496    2404 round_trippers.go:577] Response Headers:
	I0318 13:14:12.435496    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:14:12.435496    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:14:12 GMT
	I0318 13:14:12.435496    2404 round_trippers.go:580]     Audit-Id: eb7e4353-0433-4403-9feb-ed2fbf34f6ab
	I0318 13:14:12.435496    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:14:12.435496    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:14:12.435496    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:14:12.435787    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"1487daaa-7c0a-4f25-84c8-3e409d8d04b7","resourceVersion":"2155","creationTimestamp":"2024-03-18T13:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T13_13_23_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T13:13:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 3813 chars]
	I0318 13:14:12.435862    2404 pod_ready.go:92] pod "kube-proxy-8bdmn" in "kube-system" namespace has status "Ready":"True"
	I0318 13:14:12.435862    2404 pod_ready.go:81] duration metric: took 406.8577ms for pod "kube-proxy-8bdmn" in "kube-system" namespace to be "Ready" ...
	I0318 13:14:12.435862    2404 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-mc5tv" in "kube-system" namespace to be "Ready" ...
	I0318 13:14:12.619046    2404 request.go:629] Waited for 182.9274ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.130.156:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mc5tv
	I0318 13:14:12.619248    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mc5tv
	I0318 13:14:12.619285    2404 round_trippers.go:469] Request Headers:
	I0318 13:14:12.619308    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:14:12.619308    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:14:12.622929    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:14:12.623808    2404 round_trippers.go:577] Response Headers:
	I0318 13:14:12.623808    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:14:12.623808    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:14:12.623808    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:14:12 GMT
	I0318 13:14:12.623808    2404 round_trippers.go:580]     Audit-Id: fa7c5125-bd6b-4d47-9c1f-80fd49c7b7cc
	I0318 13:14:12.623808    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:14:12.623808    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:14:12.623808    2404 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-mc5tv","generateName":"kube-proxy-","namespace":"kube-system","uid":"0afe25f8-cbd6-412b-8698-7b547d1d49ca","resourceVersion":"1799","creationTimestamp":"2024-03-18T12:47:41Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"da714af0-88aa-4fc4-b73d-4de837f06114","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:47:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"da714af0-88aa-4fc4-b73d-4de837f06114\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5743 chars]
	I0318 13:14:12.822087    2404 request.go:629] Waited for 197.0373ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:14:12.822354    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:14:12.822354    2404 round_trippers.go:469] Request Headers:
	I0318 13:14:12.822354    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:14:12.822354    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:14:12.826458    2404 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:14:12.826458    2404 round_trippers.go:577] Response Headers:
	I0318 13:14:12.826458    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:14:12.827068    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:14:12.827068    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:14:12.827068    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:14:12 GMT
	I0318 13:14:12.827068    2404 round_trippers.go:580]     Audit-Id: f443ba05-744f-46c0-a46e-1fe1733e628a
	I0318 13:14:12.827068    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:14:12.827546    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1877","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 13:14:12.827965    2404 pod_ready.go:92] pod "kube-proxy-mc5tv" in "kube-system" namespace has status "Ready":"True"
	I0318 13:14:12.827965    2404 pod_ready.go:81] duration metric: took 392.1005ms for pod "kube-proxy-mc5tv" in "kube-system" namespace to be "Ready" ...
	I0318 13:14:12.828115    2404 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-894400" in "kube-system" namespace to be "Ready" ...
	I0318 13:14:13.025603    2404 request.go:629] Waited for 197.3055ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.130.156:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-894400
	I0318 13:14:13.025788    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-894400
	I0318 13:14:13.025788    2404 round_trippers.go:469] Request Headers:
	I0318 13:14:13.025788    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:14:13.025788    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:14:13.029574    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:14:13.030436    2404 round_trippers.go:577] Response Headers:
	I0318 13:14:13.030436    2404 round_trippers.go:580]     Audit-Id: 08083a80-00eb-4108-858c-03576a1f71b8
	I0318 13:14:13.030436    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:14:13.030436    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:14:13.030436    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:14:13.030436    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:14:13.030436    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:14:13 GMT
	I0318 13:14:13.030436    2404 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-894400","namespace":"kube-system","uid":"f47703ce-5a82-466e-ac8e-ef6b8cc07e6c","resourceVersion":"1822","creationTimestamp":"2024-03-18T12:47:28Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"1c745e9b917877b1ff3c90ed02e9a79a","kubernetes.io/config.mirror":"1c745e9b917877b1ff3c90ed02e9a79a","kubernetes.io/config.seen":"2024-03-18T12:47:28.428225123Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:47:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 4909 chars]
	I0318 13:14:13.228623    2404 request.go:629] Waited for 197.2127ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:14:13.228840    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:14:13.228840    2404 round_trippers.go:469] Request Headers:
	I0318 13:14:13.228930    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:14:13.228930    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:14:13.235271    2404 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0318 13:14:13.235271    2404 round_trippers.go:577] Response Headers:
	I0318 13:14:13.235271    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:14:13.235271    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:14:13.235271    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:14:13 GMT
	I0318 13:14:13.235271    2404 round_trippers.go:580]     Audit-Id: 5a4436b1-d920-490f-b509-649039179d70
	I0318 13:14:13.235271    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:14:13.235271    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:14:13.235271    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1877","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 13:14:13.236151    2404 pod_ready.go:92] pod "kube-scheduler-multinode-894400" in "kube-system" namespace has status "Ready":"True"
	I0318 13:14:13.236151    2404 pod_ready.go:81] duration metric: took 408.0332ms for pod "kube-scheduler-multinode-894400" in "kube-system" namespace to be "Ready" ...
	I0318 13:14:13.236268    2404 pod_ready.go:38] duration metric: took 1.6093081s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 13:14:13.236268    2404 system_svc.go:44] waiting for kubelet service to be running ....
	I0318 13:14:13.248610    2404 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 13:14:13.275039    2404 system_svc.go:56] duration metric: took 38.7704ms WaitForService to wait for kubelet
	I0318 13:14:13.275039    2404 kubeadm.go:576] duration metric: took 49.4839183s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 13:14:13.275039    2404 node_conditions.go:102] verifying NodePressure condition ...
	I0318 13:14:13.429371    2404 request.go:629] Waited for 154.1021ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.130.156:8443/api/v1/nodes
	I0318 13:14:13.429537    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes
	I0318 13:14:13.429537    2404 round_trippers.go:469] Request Headers:
	I0318 13:14:13.429537    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:14:13.429537    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:14:13.433336    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:14:13.433336    2404 round_trippers.go:577] Response Headers:
	I0318 13:14:13.433813    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:14:13 GMT
	I0318 13:14:13.433813    2404 round_trippers.go:580]     Audit-Id: b2ae4003-8149-4408-8821-50283f2c82f2
	I0318 13:14:13.433813    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:14:13.433813    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:14:13.433813    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:14:13.433813    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:14:13.434364    2404 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"2158"},"items":[{"metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1877","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFi
elds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 15489 chars]
	I0318 13:14:13.435243    2404 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0318 13:14:13.435320    2404 node_conditions.go:123] node cpu capacity is 2
	I0318 13:14:13.435320    2404 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0318 13:14:13.435320    2404 node_conditions.go:123] node cpu capacity is 2
	I0318 13:14:13.435320    2404 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0318 13:14:13.435320    2404 node_conditions.go:123] node cpu capacity is 2
	I0318 13:14:13.435320    2404 node_conditions.go:105] duration metric: took 160.28ms to run NodePressure ...
	I0318 13:14:13.435320    2404 start.go:240] waiting for startup goroutines ...
	I0318 13:14:13.435426    2404 start.go:254] writing updated cluster config ...
	I0318 13:14:13.439615    2404 out.go:177] 
	I0318 13:14:13.442619    2404 config.go:182] Loaded profile config "ha-747000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 13:14:13.453409    2404 config.go:182] Loaded profile config "multinode-894400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 13:14:13.453409    2404 profile.go:142] Saving config to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-894400\config.json ...
	I0318 13:14:13.458870    2404 out.go:177] * Starting "multinode-894400-m03" worker node in "multinode-894400" cluster
	I0318 13:14:13.462497    2404 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0318 13:14:13.462497    2404 cache.go:56] Caching tarball of preloaded images
	I0318 13:14:13.462497    2404 preload.go:173] Found C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0318 13:14:13.462497    2404 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0318 13:14:13.463889    2404 profile.go:142] Saving config to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-894400\config.json ...
	I0318 13:14:13.469212    2404 start.go:360] acquireMachinesLock for multinode-894400-m03: {Name:mk88ace50ad3bf72786f3a589a5328076247f3a1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 13:14:13.469466    2404 start.go:364] duration metric: took 253.5µs to acquireMachinesLock for "multinode-894400-m03"
	I0318 13:14:13.469629    2404 start.go:96] Skipping create...Using existing machine configuration
	I0318 13:14:13.469629    2404 fix.go:54] fixHost starting: m03
	I0318 13:14:13.469629    2404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-894400-m03 ).state
	I0318 13:14:15.495083    2404 main.go:141] libmachine: [stdout =====>] : Off
	
	I0318 13:14:15.495083    2404 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:14:15.495866    2404 fix.go:112] recreateIfNeeded on multinode-894400-m03: state=Stopped err=<nil>
	W0318 13:14:15.495866    2404 fix.go:138] unexpected machine state, will restart: <nil>
	I0318 13:14:15.499589    2404 out.go:177] * Restarting existing hyperv VM for "multinode-894400-m03" ...
	I0318 13:14:15.501877    2404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-894400-m03
	I0318 13:14:18.449755    2404 main.go:141] libmachine: [stdout =====>] : 
	I0318 13:14:18.449755    2404 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:14:18.449755    2404 main.go:141] libmachine: Waiting for host to start...
	I0318 13:14:18.450089    2404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-894400-m03 ).state
	I0318 13:14:20.655376    2404 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 13:14:20.655376    2404 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:14:20.655813    2404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-894400-m03 ).networkadapters[0]).ipaddresses[0]
	I0318 13:14:23.166805    2404 main.go:141] libmachine: [stdout =====>] : 
	I0318 13:14:23.167442    2404 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:14:24.181679    2404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-894400-m03 ).state
	I0318 13:14:26.301489    2404 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 13:14:26.302430    2404 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:14:26.302430    2404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-894400-m03 ).networkadapters[0]).ipaddresses[0]
	I0318 13:14:28.744343    2404 main.go:141] libmachine: [stdout =====>] : 
	I0318 13:14:28.744521    2404 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:14:29.751246    2404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-894400-m03 ).state

** /stderr **
multinode_test.go:328: failed to run minikube start. args "out/minikube-windows-amd64.exe node list -p multinode-894400" : exit status 1
multinode_test.go:331: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-894400
multinode_test.go:331: (dbg) Non-zero exit: out/minikube-windows-amd64.exe node list -p multinode-894400: context deadline exceeded (0s)
multinode_test.go:333: failed to run node list. args "out/minikube-windows-amd64.exe node list -p multinode-894400" : context deadline exceeded
multinode_test.go:338: reported node list is not the same after restart. Before restart: multinode-894400	172.30.129.141
multinode-894400-m02	172.30.140.66
multinode-894400-m03	172.30.137.140

After restart: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-894400 -n multinode-894400
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-894400 -n multinode-894400: (11.7162027s)
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-894400 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-894400 logs -n 25: (11.1485857s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------------------------------------------------------------------------|------------------|-------------------|---------|---------------------|---------------------|
	| Command |                                                           Args                                                           |     Profile      |       User        | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------------------------------------------------------|------------------|-------------------|---------|---------------------|---------------------|
	| cp      | multinode-894400 cp testdata\cp-test.txt                                                                                 | multinode-894400 | minikube3\jenkins | v1.32.0 | 18 Mar 24 12:58 UTC | 18 Mar 24 12:58 UTC |
	|         | multinode-894400-m02:/home/docker/cp-test.txt                                                                            |                  |                   |         |                     |                     |
	| ssh     | multinode-894400 ssh -n                                                                                                  | multinode-894400 | minikube3\jenkins | v1.32.0 | 18 Mar 24 12:58 UTC | 18 Mar 24 12:58 UTC |
	|         | multinode-894400-m02 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| cp      | multinode-894400 cp multinode-894400-m02:/home/docker/cp-test.txt                                                        | multinode-894400 | minikube3\jenkins | v1.32.0 | 18 Mar 24 12:58 UTC | 18 Mar 24 12:58 UTC |
	|         | C:\Users\jenkins.minikube3\AppData\Local\Temp\TestMultiNodeserialCopyFile2306069937\001\cp-test_multinode-894400-m02.txt |                  |                   |         |                     |                     |
	| ssh     | multinode-894400 ssh -n                                                                                                  | multinode-894400 | minikube3\jenkins | v1.32.0 | 18 Mar 24 12:58 UTC | 18 Mar 24 12:59 UTC |
	|         | multinode-894400-m02 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| cp      | multinode-894400 cp multinode-894400-m02:/home/docker/cp-test.txt                                                        | multinode-894400 | minikube3\jenkins | v1.32.0 | 18 Mar 24 12:59 UTC | 18 Mar 24 12:59 UTC |
	|         | multinode-894400:/home/docker/cp-test_multinode-894400-m02_multinode-894400.txt                                          |                  |                   |         |                     |                     |
	| ssh     | multinode-894400 ssh -n                                                                                                  | multinode-894400 | minikube3\jenkins | v1.32.0 | 18 Mar 24 12:59 UTC | 18 Mar 24 12:59 UTC |
	|         | multinode-894400-m02 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| ssh     | multinode-894400 ssh -n multinode-894400 sudo cat                                                                        | multinode-894400 | minikube3\jenkins | v1.32.0 | 18 Mar 24 12:59 UTC | 18 Mar 24 12:59 UTC |
	|         | /home/docker/cp-test_multinode-894400-m02_multinode-894400.txt                                                           |                  |                   |         |                     |                     |
	| cp      | multinode-894400 cp multinode-894400-m02:/home/docker/cp-test.txt                                                        | multinode-894400 | minikube3\jenkins | v1.32.0 | 18 Mar 24 12:59 UTC | 18 Mar 24 12:59 UTC |
	|         | multinode-894400-m03:/home/docker/cp-test_multinode-894400-m02_multinode-894400-m03.txt                                  |                  |                   |         |                     |                     |
	| ssh     | multinode-894400 ssh -n                                                                                                  | multinode-894400 | minikube3\jenkins | v1.32.0 | 18 Mar 24 12:59 UTC | 18 Mar 24 13:00 UTC |
	|         | multinode-894400-m02 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| ssh     | multinode-894400 ssh -n multinode-894400-m03 sudo cat                                                                    | multinode-894400 | minikube3\jenkins | v1.32.0 | 18 Mar 24 13:00 UTC | 18 Mar 24 13:00 UTC |
	|         | /home/docker/cp-test_multinode-894400-m02_multinode-894400-m03.txt                                                       |                  |                   |         |                     |                     |
	| cp      | multinode-894400 cp testdata\cp-test.txt                                                                                 | multinode-894400 | minikube3\jenkins | v1.32.0 | 18 Mar 24 13:00 UTC | 18 Mar 24 13:00 UTC |
	|         | multinode-894400-m03:/home/docker/cp-test.txt                                                                            |                  |                   |         |                     |                     |
	| ssh     | multinode-894400 ssh -n                                                                                                  | multinode-894400 | minikube3\jenkins | v1.32.0 | 18 Mar 24 13:00 UTC | 18 Mar 24 13:00 UTC |
	|         | multinode-894400-m03 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| cp      | multinode-894400 cp multinode-894400-m03:/home/docker/cp-test.txt                                                        | multinode-894400 | minikube3\jenkins | v1.32.0 | 18 Mar 24 13:00 UTC | 18 Mar 24 13:00 UTC |
	|         | C:\Users\jenkins.minikube3\AppData\Local\Temp\TestMultiNodeserialCopyFile2306069937\001\cp-test_multinode-894400-m03.txt |                  |                   |         |                     |                     |
	| ssh     | multinode-894400 ssh -n                                                                                                  | multinode-894400 | minikube3\jenkins | v1.32.0 | 18 Mar 24 13:00 UTC | 18 Mar 24 13:00 UTC |
	|         | multinode-894400-m03 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| cp      | multinode-894400 cp multinode-894400-m03:/home/docker/cp-test.txt                                                        | multinode-894400 | minikube3\jenkins | v1.32.0 | 18 Mar 24 13:00 UTC | 18 Mar 24 13:01 UTC |
	|         | multinode-894400:/home/docker/cp-test_multinode-894400-m03_multinode-894400.txt                                          |                  |                   |         |                     |                     |
	| ssh     | multinode-894400 ssh -n                                                                                                  | multinode-894400 | minikube3\jenkins | v1.32.0 | 18 Mar 24 13:01 UTC | 18 Mar 24 13:01 UTC |
	|         | multinode-894400-m03 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| ssh     | multinode-894400 ssh -n multinode-894400 sudo cat                                                                        | multinode-894400 | minikube3\jenkins | v1.32.0 | 18 Mar 24 13:01 UTC | 18 Mar 24 13:01 UTC |
	|         | /home/docker/cp-test_multinode-894400-m03_multinode-894400.txt                                                           |                  |                   |         |                     |                     |
	| cp      | multinode-894400 cp multinode-894400-m03:/home/docker/cp-test.txt                                                        | multinode-894400 | minikube3\jenkins | v1.32.0 | 18 Mar 24 13:01 UTC | 18 Mar 24 13:01 UTC |
	|         | multinode-894400-m02:/home/docker/cp-test_multinode-894400-m03_multinode-894400-m02.txt                                  |                  |                   |         |                     |                     |
	| ssh     | multinode-894400 ssh -n                                                                                                  | multinode-894400 | minikube3\jenkins | v1.32.0 | 18 Mar 24 13:01 UTC | 18 Mar 24 13:01 UTC |
	|         | multinode-894400-m03 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| ssh     | multinode-894400 ssh -n multinode-894400-m02 sudo cat                                                                    | multinode-894400 | minikube3\jenkins | v1.32.0 | 18 Mar 24 13:01 UTC | 18 Mar 24 13:01 UTC |
	|         | /home/docker/cp-test_multinode-894400-m03_multinode-894400-m02.txt                                                       |                  |                   |         |                     |                     |
	| node    | multinode-894400 node stop m03                                                                                           | multinode-894400 | minikube3\jenkins | v1.32.0 | 18 Mar 24 13:01 UTC | 18 Mar 24 13:02 UTC |
	| node    | multinode-894400 node start                                                                                              | multinode-894400 | minikube3\jenkins | v1.32.0 | 18 Mar 24 13:03 UTC | 18 Mar 24 13:05 UTC |
	|         | m03 -v=7 --alsologtostderr                                                                                               |                  |                   |         |                     |                     |
	| node    | list -p multinode-894400                                                                                                 | multinode-894400 | minikube3\jenkins | v1.32.0 | 18 Mar 24 13:06 UTC |                     |
	| stop    | -p multinode-894400                                                                                                      | multinode-894400 | minikube3\jenkins | v1.32.0 | 18 Mar 24 13:06 UTC | 18 Mar 24 13:07 UTC |
	| start   | -p multinode-894400                                                                                                      | multinode-894400 | minikube3\jenkins | v1.32.0 | 18 Mar 24 13:07 UTC |                     |
	|         | --wait=true -v=8                                                                                                         |                  |                   |         |                     |                     |
	|         | --alsologtostderr                                                                                                        |                  |                   |         |                     |                     |
	|---------|--------------------------------------------------------------------------------------------------------------------------|------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/18 13:07:45
	Running on machine: minikube3
	Binary: Built with gc go1.22.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0318 13:07:45.061560    2404 out.go:291] Setting OutFile to fd 884 ...
	I0318 13:07:45.062552    2404 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:07:45.062552    2404 out.go:304] Setting ErrFile to fd 980...
	I0318 13:07:45.062552    2404 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:07:45.086104    2404 out.go:298] Setting JSON to false
	I0318 13:07:45.089099    2404 start.go:129] hostinfo: {"hostname":"minikube3","uptime":315842,"bootTime":1710451423,"procs":194,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4046 Build 19045.4046","kernelVersion":"10.0.19045.4046 Build 19045.4046","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"a0f355d5-8b6e-4346-9071-73232725d096"}
	W0318 13:07:45.090082    2404 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0318 13:07:45.155572    2404 out.go:177] * [multinode-894400] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4046 Build 19045.4046
	I0318 13:07:45.358080    2404 notify.go:220] Checking for updates...
	I0318 13:07:45.406333    2404 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	I0318 13:07:45.602067    2404 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 13:07:45.751151    2404 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube3\minikube-integration\.minikube
	I0318 13:07:45.795260    2404 out.go:177]   - MINIKUBE_LOCATION=18429
	I0318 13:07:45.966387    2404 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 13:07:45.995191    2404 config.go:182] Loaded profile config "multinode-894400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 13:07:45.995464    2404 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 13:07:51.334489    2404 out.go:177] * Using the hyperv driver based on existing profile
	I0318 13:07:51.354391    2404 start.go:297] selected driver: hyperv
	I0318 13:07:51.354391    2404 start.go:901] validating driver "hyperv" against &{Name:multinode-894400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Ku
bernetesVersion:v1.28.4 ClusterName:multinode-894400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.30.129.141 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.30.140.66 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.30.137.140 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:
false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 13:07:51.355451    2404 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 13:07:51.407703    2404 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 13:07:51.407864    2404 cni.go:84] Creating CNI manager for ""
	I0318 13:07:51.408004    2404 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0318 13:07:51.408046    2404 start.go:340] cluster config:
	{Name:multinode-894400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-894400 Namespace:default APIServer
HAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.30.129.141 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.30.140.66 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.30.137.140 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provision
er:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuF
irmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 13:07:51.408046    2404 iso.go:125] acquiring lock: {Name:mkefa7a887b9de67c22e0418da52288a5b488924 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 13:07:51.564477    2404 out.go:177] * Starting "multinode-894400" primary control-plane node in "multinode-894400" cluster
	I0318 13:07:51.693483    2404 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0318 13:07:51.694577    2404 preload.go:147] Found local preload: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I0318 13:07:51.694727    2404 cache.go:56] Caching tarball of preloaded images
	I0318 13:07:51.695169    2404 preload.go:173] Found C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0318 13:07:51.695416    2404 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0318 13:07:51.695736    2404 profile.go:142] Saving config to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-894400\config.json ...
	I0318 13:07:51.698526    2404 start.go:360] acquireMachinesLock for multinode-894400: {Name:mk88ace50ad3bf72786f3a589a5328076247f3a1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 13:07:51.698526    2404 start.go:364] duration metric: took 0s to acquireMachinesLock for "multinode-894400"
	I0318 13:07:51.699058    2404 start.go:96] Skipping create...Using existing machine configuration
	I0318 13:07:51.699374    2404 fix.go:54] fixHost starting: 
	I0318 13:07:51.699539    2404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-894400 ).state
	I0318 13:07:54.199225    2404 main.go:141] libmachine: [stdout =====>] : Off
	
	I0318 13:07:54.200091    2404 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:07:54.200153    2404 fix.go:112] recreateIfNeeded on multinode-894400: state=Stopped err=<nil>
	W0318 13:07:54.200153    2404 fix.go:138] unexpected machine state, will restart: <nil>
	I0318 13:07:54.538991    2404 out.go:177] * Restarting existing hyperv VM for "multinode-894400" ...
	I0318 13:07:54.545864    2404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-894400
	I0318 13:07:57.546532    2404 main.go:141] libmachine: [stdout =====>] : 
	I0318 13:07:57.546957    2404 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:07:57.546957    2404 main.go:141] libmachine: Waiting for host to start...
	I0318 13:07:57.547040    2404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-894400 ).state
	I0318 13:07:59.610987    2404 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 13:07:59.611165    2404 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:07:59.611165    2404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-894400 ).networkadapters[0]).ipaddresses[0]
	I0318 13:08:01.954297    2404 main.go:141] libmachine: [stdout =====>] : 
	I0318 13:08:01.954781    2404 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:08:02.968268    2404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-894400 ).state
	I0318 13:08:05.123037    2404 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 13:08:05.123684    2404 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:08:05.123751    2404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-894400 ).networkadapters[0]).ipaddresses[0]
	I0318 13:08:07.471155    2404 main.go:141] libmachine: [stdout =====>] : 
	I0318 13:08:07.471340    2404 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:08:08.486928    2404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-894400 ).state
	I0318 13:08:10.578478    2404 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 13:08:10.578755    2404 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:08:10.578755    2404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-894400 ).networkadapters[0]).ipaddresses[0]
	I0318 13:08:12.960616    2404 main.go:141] libmachine: [stdout =====>] : 
	I0318 13:08:12.961816    2404 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:08:13.964258    2404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-894400 ).state
	I0318 13:08:16.051492    2404 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 13:08:16.051492    2404 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:08:16.051703    2404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-894400 ).networkadapters[0]).ipaddresses[0]
	I0318 13:08:18.403955    2404 main.go:141] libmachine: [stdout =====>] : 
	I0318 13:08:18.403955    2404 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:08:19.418394    2404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-894400 ).state
	I0318 13:08:21.591796    2404 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 13:08:21.591796    2404 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:08:21.591796    2404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-894400 ).networkadapters[0]).ipaddresses[0]
	I0318 13:08:23.944375    2404 main.go:141] libmachine: [stdout =====>] : 172.30.130.156
	
	I0318 13:08:23.945033    2404 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:08:23.947675    2404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-894400 ).state
	I0318 13:08:25.950616    2404 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 13:08:25.950616    2404 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:08:25.950616    2404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-894400 ).networkadapters[0]).ipaddresses[0]
	I0318 13:08:28.288833    2404 main.go:141] libmachine: [stdout =====>] : 172.30.130.156
	
	I0318 13:08:28.289348    2404 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:08:28.289642    2404 profile.go:142] Saving config to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-894400\config.json ...
	I0318 13:08:28.292179    2404 machine.go:94] provisionDockerMachine start ...
	I0318 13:08:28.292317    2404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-894400 ).state
	I0318 13:08:30.253975    2404 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 13:08:30.253975    2404 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:08:30.253975    2404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-894400 ).networkadapters[0]).ipaddresses[0]
	I0318 13:08:32.591711    2404 main.go:141] libmachine: [stdout =====>] : 172.30.130.156
	
	I0318 13:08:32.592750    2404 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:08:32.597818    2404 main.go:141] libmachine: Using SSH client type: native
	I0318 13:08:32.598554    2404 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa89f80] 0xa8cb60 <nil>  [] 0s} 172.30.130.156 22 <nil> <nil>}
	I0318 13:08:32.598554    2404 main.go:141] libmachine: About to run SSH command:
	hostname
	I0318 13:08:32.728683    2404 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0318 13:08:32.728683    2404 buildroot.go:166] provisioning hostname "multinode-894400"
	I0318 13:08:32.728683    2404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-894400 ).state
	I0318 13:08:34.671039    2404 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 13:08:34.671039    2404 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:08:34.671039    2404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-894400 ).networkadapters[0]).ipaddresses[0]
	I0318 13:08:36.995853    2404 main.go:141] libmachine: [stdout =====>] : 172.30.130.156
	
	I0318 13:08:36.995936    2404 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:08:37.000875    2404 main.go:141] libmachine: Using SSH client type: native
	I0318 13:08:37.001637    2404 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa89f80] 0xa8cb60 <nil>  [] 0s} 172.30.130.156 22 <nil> <nil>}
	I0318 13:08:37.001637    2404 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-894400 && echo "multinode-894400" | sudo tee /etc/hostname
	I0318 13:08:37.163030    2404 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-894400
	
	I0318 13:08:37.163121    2404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-894400 ).state
	I0318 13:08:39.133266    2404 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 13:08:39.133266    2404 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:08:39.133266    2404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-894400 ).networkadapters[0]).ipaddresses[0]
	I0318 13:08:41.468784    2404 main.go:141] libmachine: [stdout =====>] : 172.30.130.156
	
	I0318 13:08:41.468784    2404 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:08:41.473865    2404 main.go:141] libmachine: Using SSH client type: native
	I0318 13:08:41.473990    2404 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa89f80] 0xa8cb60 <nil>  [] 0s} 172.30.130.156 22 <nil> <nil>}
	I0318 13:08:41.473990    2404 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-894400' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-894400/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-894400' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0318 13:08:41.622362    2404 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0318 13:08:41.622412    2404 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube3\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube3\minikube-integration\.minikube}
	I0318 13:08:41.622412    2404 buildroot.go:174] setting up certificates
	I0318 13:08:41.622494    2404 provision.go:84] configureAuth start
	I0318 13:08:41.622549    2404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-894400 ).state
	I0318 13:08:43.566483    2404 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 13:08:43.566483    2404 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:08:43.567401    2404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-894400 ).networkadapters[0]).ipaddresses[0]
	I0318 13:08:45.896227    2404 main.go:141] libmachine: [stdout =====>] : 172.30.130.156
	
	I0318 13:08:45.896227    2404 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:08:45.896425    2404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-894400 ).state
	I0318 13:08:47.849564    2404 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 13:08:47.849564    2404 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:08:47.850470    2404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-894400 ).networkadapters[0]).ipaddresses[0]
	I0318 13:08:50.150060    2404 main.go:141] libmachine: [stdout =====>] : 172.30.130.156
	
	I0318 13:08:50.150060    2404 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:08:50.150182    2404 provision.go:143] copyHostCerts
	I0318 13:08:50.150363    2404 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube3\minikube-integration\.minikube/ca.pem
	I0318 13:08:50.150718    2404 exec_runner.go:144] found C:\Users\jenkins.minikube3\minikube-integration\.minikube/ca.pem, removing ...
	I0318 13:08:50.150859    2404 exec_runner.go:203] rm: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.pem
	I0318 13:08:50.151181    2404 exec_runner.go:151] cp: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube3\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0318 13:08:50.152316    2404 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube3\minikube-integration\.minikube/cert.pem
	I0318 13:08:50.152592    2404 exec_runner.go:144] found C:\Users\jenkins.minikube3\minikube-integration\.minikube/cert.pem, removing ...
	I0318 13:08:50.152680    2404 exec_runner.go:203] rm: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cert.pem
	I0318 13:08:50.153033    2404 exec_runner.go:151] cp: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube3\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0318 13:08:50.154063    2404 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube3\minikube-integration\.minikube/key.pem
	I0318 13:08:50.154568    2404 exec_runner.go:144] found C:\Users\jenkins.minikube3\minikube-integration\.minikube/key.pem, removing ...
	I0318 13:08:50.154658    2404 exec_runner.go:203] rm: C:\Users\jenkins.minikube3\minikube-integration\.minikube\key.pem
	I0318 13:08:50.155120    2404 exec_runner.go:151] cp: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube3\minikube-integration\.minikube/key.pem (1679 bytes)
	I0318 13:08:50.156099    2404 provision.go:117] generating server cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-894400 san=[127.0.0.1 172.30.130.156 localhost minikube multinode-894400]
	I0318 13:08:50.556381    2404 provision.go:177] copyRemoteCerts
	I0318 13:08:50.568350    2404 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0318 13:08:50.568448    2404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-894400 ).state
	I0318 13:08:52.521135    2404 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 13:08:52.521994    2404 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:08:52.521994    2404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-894400 ).networkadapters[0]).ipaddresses[0]
	I0318 13:08:54.835487    2404 main.go:141] libmachine: [stdout =====>] : 172.30.130.156
	
	I0318 13:08:54.835487    2404 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:08:54.836091    2404 sshutil.go:53] new ssh client: &{IP:172.30.130.156 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\multinode-894400\id_rsa Username:docker}
	I0318 13:08:54.947574    2404 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.3790934s)
	I0318 13:08:54.947574    2404 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0318 13:08:54.947574    2404 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0318 13:08:54.992137    2404 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0318 13:08:54.992137    2404 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1216 bytes)
	I0318 13:08:55.034093    2404 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0318 13:08:55.034588    2404 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0318 13:08:55.080638    2404 provision.go:87] duration metric: took 13.4580448s to configureAuth
	I0318 13:08:55.080638    2404 buildroot.go:189] setting minikube options for container-runtime
	I0318 13:08:55.081315    2404 config.go:182] Loaded profile config "multinode-894400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 13:08:55.081315    2404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-894400 ).state
	I0318 13:08:57.052754    2404 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 13:08:57.052754    2404 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:08:57.053568    2404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-894400 ).networkadapters[0]).ipaddresses[0]
	I0318 13:08:59.418518    2404 main.go:141] libmachine: [stdout =====>] : 172.30.130.156
	
	I0318 13:08:59.418518    2404 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:08:59.425154    2404 main.go:141] libmachine: Using SSH client type: native
	I0318 13:08:59.425728    2404 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa89f80] 0xa8cb60 <nil>  [] 0s} 172.30.130.156 22 <nil> <nil>}
	I0318 13:08:59.425728    2404 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0318 13:08:59.564178    2404 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0318 13:08:59.564270    2404 buildroot.go:70] root file system type: tmpfs
	I0318 13:08:59.564583    2404 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0318 13:08:59.564677    2404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-894400 ).state
	I0318 13:09:01.515886    2404 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 13:09:01.516747    2404 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:09:01.516747    2404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-894400 ).networkadapters[0]).ipaddresses[0]
	I0318 13:09:03.892924    2404 main.go:141] libmachine: [stdout =====>] : 172.30.130.156
	
	I0318 13:09:03.892924    2404 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:09:03.899565    2404 main.go:141] libmachine: Using SSH client type: native
	I0318 13:09:03.899785    2404 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa89f80] 0xa8cb60 <nil>  [] 0s} 172.30.130.156 22 <nil> <nil>}
	I0318 13:09:03.899785    2404 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0318 13:09:04.044182    2404 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0318 13:09:04.044281    2404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-894400 ).state
	I0318 13:09:06.009362    2404 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 13:09:06.009598    2404 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:09:06.009598    2404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-894400 ).networkadapters[0]).ipaddresses[0]
	I0318 13:09:08.373329    2404 main.go:141] libmachine: [stdout =====>] : 172.30.130.156
	
	I0318 13:09:08.373329    2404 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:09:08.380939    2404 main.go:141] libmachine: Using SSH client type: native
	I0318 13:09:08.380939    2404 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa89f80] 0xa8cb60 <nil>  [] 0s} 172.30.130.156 22 <nil> <nil>}
	I0318 13:09:08.380939    2404 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0318 13:09:10.702931    2404 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0318 13:09:10.703031    2404 machine.go:97] duration metric: took 42.4103689s to provisionDockerMachine
	I0318 13:09:10.703031    2404 start.go:293] postStartSetup for "multinode-894400" (driver="hyperv")
	I0318 13:09:10.703031    2404 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0318 13:09:10.714806    2404 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0318 13:09:10.714806    2404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-894400 ).state
	I0318 13:09:12.690757    2404 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 13:09:12.690757    2404 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:09:12.690757    2404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-894400 ).networkadapters[0]).ipaddresses[0]
	I0318 13:09:15.002562    2404 main.go:141] libmachine: [stdout =====>] : 172.30.130.156
	
	I0318 13:09:15.002562    2404 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:09:15.003535    2404 sshutil.go:53] new ssh client: &{IP:172.30.130.156 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\multinode-894400\id_rsa Username:docker}
	I0318 13:09:15.104738    2404 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.3898466s)
	I0318 13:09:15.116973    2404 ssh_runner.go:195] Run: cat /etc/os-release
	I0318 13:09:15.123850    2404 command_runner.go:130] > NAME=Buildroot
	I0318 13:09:15.123850    2404 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0318 13:09:15.123850    2404 command_runner.go:130] > ID=buildroot
	I0318 13:09:15.123850    2404 command_runner.go:130] > VERSION_ID=2023.02.9
	I0318 13:09:15.123850    2404 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0318 13:09:15.124023    2404 info.go:137] Remote host: Buildroot 2023.02.9
	I0318 13:09:15.124023    2404 filesync.go:126] Scanning C:\Users\jenkins.minikube3\minikube-integration\.minikube\addons for local assets ...
	I0318 13:09:15.124157    2404 filesync.go:126] Scanning C:\Users\jenkins.minikube3\minikube-integration\.minikube\files for local assets ...
	I0318 13:09:15.125445    2404 filesync.go:149] local asset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\134242.pem -> 134242.pem in /etc/ssl/certs
	I0318 13:09:15.125557    2404 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\134242.pem -> /etc/ssl/certs/134242.pem
	I0318 13:09:15.136870    2404 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0318 13:09:15.156887    2404 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\134242.pem --> /etc/ssl/certs/134242.pem (1708 bytes)
	I0318 13:09:15.198196    2404 start.go:296] duration metric: took 4.4951325s for postStartSetup
	I0318 13:09:15.198319    2404 fix.go:56] duration metric: took 1m23.4985048s for fixHost
	I0318 13:09:15.198425    2404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-894400 ).state
	I0318 13:09:17.145092    2404 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 13:09:17.145950    2404 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:09:17.146060    2404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-894400 ).networkadapters[0]).ipaddresses[0]
	I0318 13:09:19.522583    2404 main.go:141] libmachine: [stdout =====>] : 172.30.130.156
	
	I0318 13:09:19.522637    2404 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:09:19.527330    2404 main.go:141] libmachine: Using SSH client type: native
	I0318 13:09:19.527723    2404 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa89f80] 0xa8cb60 <nil>  [] 0s} 172.30.130.156 22 <nil> <nil>}
	I0318 13:09:19.527723    2404 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0318 13:09:19.660726    2404 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710767359.663042224
	
	I0318 13:09:19.660726    2404 fix.go:216] guest clock: 1710767359.663042224
	I0318 13:09:19.660726    2404 fix.go:229] Guest: 2024-03-18 13:09:19.663042224 +0000 UTC Remote: 2024-03-18 13:09:15.1983195 +0000 UTC m=+90.304486701 (delta=4.464722724s)
	I0318 13:09:19.661279    2404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-894400 ).state
	I0318 13:09:21.723749    2404 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 13:09:21.723749    2404 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:09:21.723823    2404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-894400 ).networkadapters[0]).ipaddresses[0]
	I0318 13:09:24.103065    2404 main.go:141] libmachine: [stdout =====>] : 172.30.130.156
	
	I0318 13:09:24.103815    2404 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:09:24.109309    2404 main.go:141] libmachine: Using SSH client type: native
	I0318 13:09:24.110078    2404 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa89f80] 0xa8cb60 <nil>  [] 0s} 172.30.130.156 22 <nil> <nil>}
	I0318 13:09:24.110078    2404 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1710767359
	I0318 13:09:24.254946    2404 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Mar 18 13:09:19 UTC 2024
	
	I0318 13:09:24.255017    2404 fix.go:236] clock set: Mon Mar 18 13:09:19 UTC 2024
	 (err=<nil>)
	I0318 13:09:24.255017    2404 start.go:83] releasing machines lock for "multinode-894400", held for 1m32.5558023s
	I0318 13:09:24.255302    2404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-894400 ).state
	I0318 13:09:26.237027    2404 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 13:09:26.237027    2404 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:09:26.237572    2404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-894400 ).networkadapters[0]).ipaddresses[0]
	I0318 13:09:28.679757    2404 main.go:141] libmachine: [stdout =====>] : 172.30.130.156
	
	I0318 13:09:28.680429    2404 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:09:28.684468    2404 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0318 13:09:28.684546    2404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-894400 ).state
	I0318 13:09:28.694706    2404 ssh_runner.go:195] Run: cat /version.json
	I0318 13:09:28.694706    2404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-894400 ).state
	I0318 13:09:30.747990    2404 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 13:09:30.748389    2404 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:09:30.748511    2404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-894400 ).networkadapters[0]).ipaddresses[0]
	I0318 13:09:30.768951    2404 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 13:09:30.768951    2404 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:09:30.769952    2404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-894400 ).networkadapters[0]).ipaddresses[0]
	I0318 13:09:33.296966    2404 main.go:141] libmachine: [stdout =====>] : 172.30.130.156
	
	I0318 13:09:33.297555    2404 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:09:33.297555    2404 sshutil.go:53] new ssh client: &{IP:172.30.130.156 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\multinode-894400\id_rsa Username:docker}
	I0318 13:09:33.319431    2404 main.go:141] libmachine: [stdout =====>] : 172.30.130.156
	
	I0318 13:09:33.319431    2404 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:09:33.319431    2404 sshutil.go:53] new ssh client: &{IP:172.30.130.156 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\multinode-894400\id_rsa Username:docker}
	I0318 13:09:33.485899    2404 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0318 13:09:33.486822    2404 command_runner.go:130] > {"iso_version": "v1.32.1-1710520390-17991", "kicbase_version": "v0.0.42-1710284843-18375", "minikube_version": "v1.32.0", "commit": "3dd306d082737a9ddf335108b42c9fcb2ad84298"}
	I0318 13:09:33.486822    2404 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.8023182s)
	I0318 13:09:33.486822    2404 ssh_runner.go:235] Completed: cat /version.json: (4.79208s)
	I0318 13:09:33.499053    2404 ssh_runner.go:195] Run: systemctl --version
	I0318 13:09:33.508368    2404 command_runner.go:130] > systemd 252 (252)
	I0318 13:09:33.508368    2404 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0318 13:09:33.521461    2404 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0318 13:09:33.529752    2404 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0318 13:09:33.530515    2404 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0318 13:09:33.541707    2404 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0318 13:09:33.569039    2404 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0318 13:09:33.569193    2404 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0318 13:09:33.569193    2404 start.go:494] detecting cgroup driver to use...
	I0318 13:09:33.569294    2404 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0318 13:09:33.600253    2404 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0318 13:09:33.612155    2404 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0318 13:09:33.644847    2404 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0318 13:09:33.664004    2404 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0318 13:09:33.675047    2404 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0318 13:09:33.704911    2404 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0318 13:09:33.735406    2404 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0318 13:09:33.765953    2404 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0318 13:09:33.797558    2404 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0318 13:09:33.831920    2404 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0318 13:09:33.862692    2404 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0318 13:09:33.878864    2404 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0318 13:09:33.890856    2404 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0318 13:09:33.918379    2404 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 13:09:34.088408    2404 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0318 13:09:34.118765    2404 start.go:494] detecting cgroup driver to use...
	I0318 13:09:34.129249    2404 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0318 13:09:34.150660    2404 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0318 13:09:34.150660    2404 command_runner.go:130] > [Unit]
	I0318 13:09:34.150748    2404 command_runner.go:130] > Description=Docker Application Container Engine
	I0318 13:09:34.150748    2404 command_runner.go:130] > Documentation=https://docs.docker.com
	I0318 13:09:34.150748    2404 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0318 13:09:34.150748    2404 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0318 13:09:34.150843    2404 command_runner.go:130] > StartLimitBurst=3
	I0318 13:09:34.150861    2404 command_runner.go:130] > StartLimitIntervalSec=60
	I0318 13:09:34.150861    2404 command_runner.go:130] > [Service]
	I0318 13:09:34.150861    2404 command_runner.go:130] > Type=notify
	I0318 13:09:34.150861    2404 command_runner.go:130] > Restart=on-failure
	I0318 13:09:34.150861    2404 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0318 13:09:34.150861    2404 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0318 13:09:34.150949    2404 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0318 13:09:34.150949    2404 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0318 13:09:34.151064    2404 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0318 13:09:34.151078    2404 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0318 13:09:34.151078    2404 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0318 13:09:34.151078    2404 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0318 13:09:34.151078    2404 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0318 13:09:34.151078    2404 command_runner.go:130] > ExecStart=
	I0318 13:09:34.151078    2404 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0318 13:09:34.151078    2404 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0318 13:09:34.151078    2404 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0318 13:09:34.151078    2404 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0318 13:09:34.151078    2404 command_runner.go:130] > LimitNOFILE=infinity
	I0318 13:09:34.151078    2404 command_runner.go:130] > LimitNPROC=infinity
	I0318 13:09:34.151078    2404 command_runner.go:130] > LimitCORE=infinity
	I0318 13:09:34.151078    2404 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0318 13:09:34.151078    2404 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0318 13:09:34.151078    2404 command_runner.go:130] > TasksMax=infinity
	I0318 13:09:34.151078    2404 command_runner.go:130] > TimeoutStartSec=0
	I0318 13:09:34.151078    2404 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0318 13:09:34.151078    2404 command_runner.go:130] > Delegate=yes
	I0318 13:09:34.151078    2404 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0318 13:09:34.151078    2404 command_runner.go:130] > KillMode=process
	I0318 13:09:34.151078    2404 command_runner.go:130] > [Install]
	I0318 13:09:34.151078    2404 command_runner.go:130] > WantedBy=multi-user.target
	I0318 13:09:34.163404    2404 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0318 13:09:34.192329    2404 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0318 13:09:34.225544    2404 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0318 13:09:34.257628    2404 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0318 13:09:34.292336    2404 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0318 13:09:34.351535    2404 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0318 13:09:34.373313    2404 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0318 13:09:34.402869    2404 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0318 13:09:34.415121    2404 ssh_runner.go:195] Run: which cri-dockerd
	I0318 13:09:34.420935    2404 command_runner.go:130] > /usr/bin/cri-dockerd
	I0318 13:09:34.434222    2404 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0318 13:09:34.450633    2404 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0318 13:09:34.492842    2404 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0318 13:09:34.680182    2404 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0318 13:09:34.853219    2404 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0318 13:09:34.853219    2404 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0318 13:09:34.899827    2404 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 13:09:35.095362    2404 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0318 13:09:37.686146    2404 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5907642s)
	I0318 13:09:37.698930    2404 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0318 13:09:37.731408    2404 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0318 13:09:37.766642    2404 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0318 13:09:37.952394    2404 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0318 13:09:38.130282    2404 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 13:09:38.317159    2404 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0318 13:09:38.357940    2404 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0318 13:09:38.390672    2404 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 13:09:38.584237    2404 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0318 13:09:38.680542    2404 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0318 13:09:38.693517    2404 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0318 13:09:38.705824    2404 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0318 13:09:38.705824    2404 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0318 13:09:38.705824    2404 command_runner.go:130] > Device: 0,22	Inode: 859         Links: 1
	I0318 13:09:38.705824    2404 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0318 13:09:38.706363    2404 command_runner.go:130] > Access: 2024-03-18 13:09:38.608116193 +0000
	I0318 13:09:38.706363    2404 command_runner.go:130] > Modify: 2024-03-18 13:09:38.608116193 +0000
	I0318 13:09:38.706363    2404 command_runner.go:130] > Change: 2024-03-18 13:09:38.610116200 +0000
	I0318 13:09:38.706427    2404 command_runner.go:130] >  Birth: -
	I0318 13:09:38.706427    2404 start.go:562] Will wait 60s for crictl version
	I0318 13:09:38.719304    2404 ssh_runner.go:195] Run: which crictl
	I0318 13:09:38.724279    2404 command_runner.go:130] > /usr/bin/crictl
	I0318 13:09:38.736042    2404 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0318 13:09:38.796088    2404 command_runner.go:130] > Version:  0.1.0
	I0318 13:09:38.796088    2404 command_runner.go:130] > RuntimeName:  docker
	I0318 13:09:38.796088    2404 command_runner.go:130] > RuntimeVersion:  25.0.4
	I0318 13:09:38.796088    2404 command_runner.go:130] > RuntimeApiVersion:  v1
	I0318 13:09:38.798457    2404 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  25.0.4
	RuntimeApiVersion:  v1
	I0318 13:09:38.807618    2404 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0318 13:09:38.837315    2404 command_runner.go:130] > 25.0.4
	I0318 13:09:38.846750    2404 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0318 13:09:38.876873    2404 command_runner.go:130] > 25.0.4
	I0318 13:09:38.881279    2404 out.go:204] * Preparing Kubernetes v1.28.4 on Docker 25.0.4 ...
	I0318 13:09:38.881468    2404 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0318 13:09:38.886270    2404 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0318 13:09:38.886270    2404 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0318 13:09:38.886270    2404 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0318 13:09:38.886270    2404 ip.go:207] Found interface: {Index:9 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:93:df:b5 Flags:up|broadcast|multicast|running}
	I0318 13:09:38.889885    2404 ip.go:210] interface addr: fe80::572b:6b1d:9130:e88b/64
	I0318 13:09:38.889885    2404 ip.go:210] interface addr: 172.30.128.1/20
	I0318 13:09:38.901339    2404 ssh_runner.go:195] Run: grep 172.30.128.1	host.minikube.internal$ /etc/hosts
	I0318 13:09:38.907809    2404 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.30.128.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 13:09:38.928952    2404 kubeadm.go:877] updating cluster {Name:multinode-894400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.28.4 ClusterName:multinode-894400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.30.130.156 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.30.140.66 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.30.137.140 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingre
ss-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirr
or: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0318 13:09:38.929175    2404 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0318 13:09:38.937996    2404 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0318 13:09:38.962008    2404 command_runner.go:130] > kindest/kindnetd:v20240202-8f1494ea
	I0318 13:09:38.962492    2404 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.28.4
	I0318 13:09:38.962492    2404 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.28.4
	I0318 13:09:38.962492    2404 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.28.4
	I0318 13:09:38.962584    2404 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.28.4
	I0318 13:09:38.962584    2404 command_runner.go:130] > registry.k8s.io/etcd:3.5.9-0
	I0318 13:09:38.962614    2404 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.10.1
	I0318 13:09:38.962641    2404 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0318 13:09:38.962641    2404 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 13:09:38.962641    2404 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0318 13:09:38.962641    2404 docker.go:685] Got preloaded images: -- stdout --
	kindest/kindnetd:v20240202-8f1494ea
	registry.k8s.io/kube-apiserver:v1.28.4
	registry.k8s.io/kube-scheduler:v1.28.4
	registry.k8s.io/kube-controller-manager:v1.28.4
	registry.k8s.io/kube-proxy:v1.28.4
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0318 13:09:38.962641    2404 docker.go:615] Images already preloaded, skipping extraction
	I0318 13:09:38.972565    2404 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0318 13:09:38.995417    2404 command_runner.go:130] > kindest/kindnetd:v20240202-8f1494ea
	I0318 13:09:38.996332    2404 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.28.4
	I0318 13:09:38.996332    2404 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.28.4
	I0318 13:09:38.996332    2404 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.28.4
	I0318 13:09:38.996332    2404 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.28.4
	I0318 13:09:38.996332    2404 command_runner.go:130] > registry.k8s.io/etcd:3.5.9-0
	I0318 13:09:38.996332    2404 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.10.1
	I0318 13:09:38.996446    2404 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0318 13:09:38.996446    2404 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 13:09:38.996479    2404 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0318 13:09:38.996506    2404 docker.go:685] Got preloaded images: -- stdout --
	kindest/kindnetd:v20240202-8f1494ea
	registry.k8s.io/kube-apiserver:v1.28.4
	registry.k8s.io/kube-scheduler:v1.28.4
	registry.k8s.io/kube-proxy:v1.28.4
	registry.k8s.io/kube-controller-manager:v1.28.4
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0318 13:09:38.996506    2404 cache_images.go:84] Images are preloaded, skipping loading
	I0318 13:09:38.996506    2404 kubeadm.go:928] updating node { 172.30.130.156 8443 v1.28.4 docker true true} ...
	I0318 13:09:38.996506    2404 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-894400 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.30.130.156
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-894400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0318 13:09:39.006126    2404 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0318 13:09:39.036444    2404 command_runner.go:130] > cgroupfs
	I0318 13:09:39.036444    2404 cni.go:84] Creating CNI manager for ""
	I0318 13:09:39.036444    2404 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0318 13:09:39.036444    2404 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0318 13:09:39.037881    2404 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.30.130.156 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-894400 NodeName:multinode-894400 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.30.130.156"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.30.130.156 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPat
h:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0318 13:09:39.038146    2404 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.30.130.156
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-894400"
	  kubeletExtraArgs:
	    node-ip: 172.30.130.156
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.30.130.156"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0318 13:09:39.049182    2404 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0318 13:09:39.066831    2404 command_runner.go:130] > kubeadm
	I0318 13:09:39.066831    2404 command_runner.go:130] > kubectl
	I0318 13:09:39.066831    2404 command_runner.go:130] > kubelet
	I0318 13:09:39.066831    2404 binaries.go:44] Found k8s binaries, skipping transfer
	I0318 13:09:39.081918    2404 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0318 13:09:39.097912    2404 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0318 13:09:39.129972    2404 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0318 13:09:39.156394    2404 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2164 bytes)
	I0318 13:09:39.194539    2404 ssh_runner.go:195] Run: grep 172.30.130.156	control-plane.minikube.internal$ /etc/hosts
	I0318 13:09:39.205168    2404 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.30.130.156	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 13:09:39.234862    2404 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 13:09:39.407827    2404 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 13:09:39.434793    2404 certs.go:68] Setting up C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-894400 for IP: 172.30.130.156
	I0318 13:09:39.434793    2404 certs.go:194] generating shared ca certs ...
	I0318 13:09:39.434793    2404 certs.go:226] acquiring lock for ca certs: {Name:mk09ff4ada22228900e1815c250154c7d8d76854 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 13:09:39.435718    2404 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.key
	I0318 13:09:39.435718    2404 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.key
	I0318 13:09:39.436544    2404 certs.go:256] generating profile certs ...
	I0318 13:09:39.437155    2404 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-894400\client.key
	I0318 13:09:39.437437    2404 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-894400\apiserver.key.baaba412
	I0318 13:09:39.437598    2404 crypto.go:68] Generating cert C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-894400\apiserver.crt.baaba412 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.30.130.156]
	I0318 13:09:39.712914    2404 crypto.go:156] Writing cert to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-894400\apiserver.crt.baaba412 ...
	I0318 13:09:39.712914    2404 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-894400\apiserver.crt.baaba412: {Name:mk86007a66db8875a8e76aadb0d07e30bab7a6f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 13:09:39.715897    2404 crypto.go:164] Writing key to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-894400\apiserver.key.baaba412 ...
	I0318 13:09:39.715897    2404 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-894400\apiserver.key.baaba412: {Name:mkc3cdba84d6ccf012b0c63dc9d3bfe98ff83392 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 13:09:39.716272    2404 certs.go:381] copying C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-894400\apiserver.crt.baaba412 -> C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-894400\apiserver.crt
	I0318 13:09:39.729467    2404 certs.go:385] copying C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-894400\apiserver.key.baaba412 -> C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-894400\apiserver.key
	I0318 13:09:39.730466    2404 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-894400\proxy-client.key
	I0318 13:09:39.730466    2404 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0318 13:09:39.731241    2404 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0318 13:09:39.731241    2404 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0318 13:09:39.731241    2404 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0318 13:09:39.731241    2404 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-894400\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0318 13:09:39.731883    2404 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-894400\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0318 13:09:39.732155    2404 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-894400\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0318 13:09:39.732315    2404 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-894400\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0318 13:09:39.733110    2404 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\13424.pem (1338 bytes)
	W0318 13:09:39.733482    2404 certs.go:480] ignoring C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\13424_empty.pem, impossibly tiny 0 bytes
	I0318 13:09:39.733555    2404 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0318 13:09:39.733775    2404 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0318 13:09:39.734124    2404 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0318 13:09:39.734449    2404 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0318 13:09:39.735016    2404 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\134242.pem (1708 bytes)
	I0318 13:09:39.735016    2404 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\13424.pem -> /usr/share/ca-certificates/13424.pem
	I0318 13:09:39.735658    2404 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\134242.pem -> /usr/share/ca-certificates/134242.pem
	I0318 13:09:39.735658    2404 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0318 13:09:39.737329    2404 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0318 13:09:39.785980    2404 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0318 13:09:39.826494    2404 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0318 13:09:39.867125    2404 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0318 13:09:39.914213    2404 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-894400\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0318 13:09:39.956243    2404 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-894400\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0318 13:09:39.998092    2404 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-894400\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0318 13:09:40.038883    2404 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-894400\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0318 13:09:40.089954    2404 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\13424.pem --> /usr/share/ca-certificates/13424.pem (1338 bytes)
	I0318 13:09:40.132642    2404 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\134242.pem --> /usr/share/ca-certificates/134242.pem (1708 bytes)
	I0318 13:09:40.174629    2404 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0318 13:09:40.215023    2404 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0318 13:09:40.254395    2404 ssh_runner.go:195] Run: openssl version
	I0318 13:09:40.262410    2404 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0318 13:09:40.273120    2404 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13424.pem && ln -fs /usr/share/ca-certificates/13424.pem /etc/ssl/certs/13424.pem"
	I0318 13:09:40.304194    2404 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13424.pem
	I0318 13:09:40.311030    2404 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Mar 18 11:25 /usr/share/ca-certificates/13424.pem
	I0318 13:09:40.311030    2404 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 18 11:25 /usr/share/ca-certificates/13424.pem
	I0318 13:09:40.323636    2404 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13424.pem
	I0318 13:09:40.331568    2404 command_runner.go:130] > 51391683
	I0318 13:09:40.342995    2404 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13424.pem /etc/ssl/certs/51391683.0"
	I0318 13:09:40.371658    2404 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/134242.pem && ln -fs /usr/share/ca-certificates/134242.pem /etc/ssl/certs/134242.pem"
	I0318 13:09:40.401160    2404 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/134242.pem
	I0318 13:09:40.406834    2404 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Mar 18 11:25 /usr/share/ca-certificates/134242.pem
	I0318 13:09:40.407481    2404 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 18 11:25 /usr/share/ca-certificates/134242.pem
	I0318 13:09:40.418096    2404 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/134242.pem
	I0318 13:09:40.425794    2404 command_runner.go:130] > 3ec20f2e
	I0318 13:09:40.437248    2404 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/134242.pem /etc/ssl/certs/3ec20f2e.0"
	I0318 13:09:40.466974    2404 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0318 13:09:40.500473    2404 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0318 13:09:40.507176    2404 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Mar 18 11:07 /usr/share/ca-certificates/minikubeCA.pem
	I0318 13:09:40.507176    2404 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 18 11:07 /usr/share/ca-certificates/minikubeCA.pem
	I0318 13:09:40.517840    2404 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0318 13:09:40.526072    2404 command_runner.go:130] > b5213941
	I0318 13:09:40.537758    2404 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0318 13:09:40.568154    2404 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0318 13:09:40.575081    2404 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0318 13:09:40.575173    2404 command_runner.go:130] >   Size: 1164      	Blocks: 8          IO Block: 4096   regular file
	I0318 13:09:40.575173    2404 command_runner.go:130] > Device: 8,1	Inode: 6289189     Links: 1
	I0318 13:09:40.575173    2404 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0318 13:09:40.575173    2404 command_runner.go:130] > Access: 2024-03-18 12:47:17.373043899 +0000
	I0318 13:09:40.575173    2404 command_runner.go:130] > Modify: 2024-03-18 12:47:17.373043899 +0000
	I0318 13:09:40.575173    2404 command_runner.go:130] > Change: 2024-03-18 12:47:17.373043899 +0000
	I0318 13:09:40.575294    2404 command_runner.go:130] >  Birth: 2024-03-18 12:47:17.373043899 +0000
	I0318 13:09:40.589504    2404 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0318 13:09:40.597947    2404 command_runner.go:130] > Certificate will not expire
	I0318 13:09:40.608725    2404 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0318 13:09:40.616313    2404 command_runner.go:130] > Certificate will not expire
	I0318 13:09:40.627502    2404 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0318 13:09:40.638166    2404 command_runner.go:130] > Certificate will not expire
	I0318 13:09:40.649969    2404 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0318 13:09:40.658562    2404 command_runner.go:130] > Certificate will not expire
	I0318 13:09:40.668689    2404 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0318 13:09:40.676142    2404 command_runner.go:130] > Certificate will not expire
	I0318 13:09:40.686565    2404 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0318 13:09:40.694068    2404 command_runner.go:130] > Certificate will not expire
	I0318 13:09:40.694457    2404 kubeadm.go:391] StartCluster: {Name:multinode-894400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.2
8.4 ClusterName:multinode-894400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.30.130.156 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.30.140.66 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.30.137.140 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-
dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror:
DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 13:09:40.702669    2404 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0318 13:09:40.736605    2404 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0318 13:09:40.753290    2404 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I0318 13:09:40.753391    2404 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I0318 13:09:40.753391    2404 command_runner.go:130] > /var/lib/minikube/etcd:
	I0318 13:09:40.753391    2404 command_runner.go:130] > member
	W0318 13:09:40.753391    2404 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0318 13:09:40.753391    2404 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0318 13:09:40.753391    2404 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0318 13:09:40.765410    2404 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0318 13:09:40.781543    2404 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0318 13:09:40.782812    2404 kubeconfig.go:47] verify endpoint returned: get endpoint: "multinode-894400" does not appear in C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	I0318 13:09:40.783580    2404 kubeconfig.go:62] C:\Users\jenkins.minikube3\minikube-integration\kubeconfig needs updating (will repair): [kubeconfig missing "multinode-894400" cluster setting kubeconfig missing "multinode-894400" context setting]
	I0318 13:09:40.784322    2404 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\kubeconfig: {Name:mk966a7640504e03827322930a51a762b5508893 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 13:09:40.797036    2404 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	I0318 13:09:40.798071    2404 kapi.go:59] client config for multinode-894400: &rest.Config{Host:"https://172.30.130.156:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\multinode-894400/client.crt", KeyFile:"C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\multinode-894400/client.key", CAFile:"C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CADat
a:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1e8b2e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0318 13:09:40.799221    2404 cert_rotation.go:137] Starting client certificate rotation controller
	I0318 13:09:40.811770    2404 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0318 13:09:40.828684    2404 command_runner.go:130] > --- /var/tmp/minikube/kubeadm.yaml
	I0318 13:09:40.828863    2404 command_runner.go:130] > +++ /var/tmp/minikube/kubeadm.yaml.new
	I0318 13:09:40.828863    2404 command_runner.go:130] > @@ -1,7 +1,7 @@
	I0318 13:09:40.828863    2404 command_runner.go:130] >  apiVersion: kubeadm.k8s.io/v1beta3
	I0318 13:09:40.828935    2404 command_runner.go:130] >  kind: InitConfiguration
	I0318 13:09:40.828935    2404 command_runner.go:130] >  localAPIEndpoint:
	I0318 13:09:40.828935    2404 command_runner.go:130] > -  advertiseAddress: 172.30.129.141
	I0318 13:09:40.828935    2404 command_runner.go:130] > +  advertiseAddress: 172.30.130.156
	I0318 13:09:40.828935    2404 command_runner.go:130] >    bindPort: 8443
	I0318 13:09:40.828935    2404 command_runner.go:130] >  bootstrapTokens:
	I0318 13:09:40.828935    2404 command_runner.go:130] >    - groups:
	I0318 13:09:40.828935    2404 command_runner.go:130] > @@ -14,13 +14,13 @@
	I0318 13:09:40.829044    2404 command_runner.go:130] >    criSocket: unix:///var/run/cri-dockerd.sock
	I0318 13:09:40.829044    2404 command_runner.go:130] >    name: "multinode-894400"
	I0318 13:09:40.829044    2404 command_runner.go:130] >    kubeletExtraArgs:
	I0318 13:09:40.829101    2404 command_runner.go:130] > -    node-ip: 172.30.129.141
	I0318 13:09:40.829101    2404 command_runner.go:130] > +    node-ip: 172.30.130.156
	I0318 13:09:40.829101    2404 command_runner.go:130] >    taints: []
	I0318 13:09:40.829101    2404 command_runner.go:130] >  ---
	I0318 13:09:40.829151    2404 command_runner.go:130] >  apiVersion: kubeadm.k8s.io/v1beta3
	I0318 13:09:40.829151    2404 command_runner.go:130] >  kind: ClusterConfiguration
	I0318 13:09:40.829186    2404 command_runner.go:130] >  apiServer:
	I0318 13:09:40.829186    2404 command_runner.go:130] > -  certSANs: ["127.0.0.1", "localhost", "172.30.129.141"]
	I0318 13:09:40.829186    2404 command_runner.go:130] > +  certSANs: ["127.0.0.1", "localhost", "172.30.130.156"]
	I0318 13:09:40.829233    2404 command_runner.go:130] >    extraArgs:
	I0318 13:09:40.829267    2404 command_runner.go:130] >      enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	I0318 13:09:40.829314    2404 command_runner.go:130] >  controllerManager:
	I0318 13:09:40.829348    2404 kubeadm.go:634] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -1,7 +1,7 @@
	 apiVersion: kubeadm.k8s.io/v1beta3
	 kind: InitConfiguration
	 localAPIEndpoint:
	-  advertiseAddress: 172.30.129.141
	+  advertiseAddress: 172.30.130.156
	   bindPort: 8443
	 bootstrapTokens:
	   - groups:
	@@ -14,13 +14,13 @@
	   criSocket: unix:///var/run/cri-dockerd.sock
	   name: "multinode-894400"
	   kubeletExtraArgs:
	-    node-ip: 172.30.129.141
	+    node-ip: 172.30.130.156
	   taints: []
	 ---
	 apiVersion: kubeadm.k8s.io/v1beta3
	 kind: ClusterConfiguration
	 apiServer:
	-  certSANs: ["127.0.0.1", "localhost", "172.30.129.141"]
	+  certSANs: ["127.0.0.1", "localhost", "172.30.130.156"]
	   extraArgs:
	     enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	 controllerManager:
	
	-- /stdout --
	I0318 13:09:40.829379    2404 kubeadm.go:1154] stopping kube-system containers ...
	I0318 13:09:40.837879    2404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0318 13:09:40.865233    2404 command_runner.go:130] > 693a64f7472f
	I0318 13:09:40.865265    2404 command_runner.go:130] > a2c499223090
	I0318 13:09:40.865265    2404 command_runner.go:130] > 265b39e386cf
	I0318 13:09:40.865265    2404 command_runner.go:130] > d001e299e996
	I0318 13:09:40.865315    2404 command_runner.go:130] > c4d7018ad23a
	I0318 13:09:40.865315    2404 command_runner.go:130] > 9335855aab63
	I0318 13:09:40.865348    2404 command_runner.go:130] > a47b1fb60692
	I0318 13:09:40.865348    2404 command_runner.go:130] > 60e9cd749c8f
	I0318 13:09:40.865348    2404 command_runner.go:130] > e4d42739ce0e
	I0318 13:09:40.865348    2404 command_runner.go:130] > 7aa5cf4ec378
	I0318 13:09:40.865381    2404 command_runner.go:130] > c51f768a2f64
	I0318 13:09:40.865381    2404 command_runner.go:130] > 56d1819beb10
	I0318 13:09:40.865381    2404 command_runner.go:130] > acffce2e7384
	I0318 13:09:40.865433    2404 command_runner.go:130] > 220884cbf1f5
	I0318 13:09:40.865433    2404 command_runner.go:130] > 82710777e700
	I0318 13:09:40.865433    2404 command_runner.go:130] > 5485f509825d
	I0318 13:09:40.865489    2404 docker.go:483] Stopping containers: [693a64f7472f a2c499223090 265b39e386cf d001e299e996 c4d7018ad23a 9335855aab63 a47b1fb60692 60e9cd749c8f e4d42739ce0e 7aa5cf4ec378 c51f768a2f64 56d1819beb10 acffce2e7384 220884cbf1f5 82710777e700 5485f509825d]
	I0318 13:09:40.874715    2404 ssh_runner.go:195] Run: docker stop 693a64f7472f a2c499223090 265b39e386cf d001e299e996 c4d7018ad23a 9335855aab63 a47b1fb60692 60e9cd749c8f e4d42739ce0e 7aa5cf4ec378 c51f768a2f64 56d1819beb10 acffce2e7384 220884cbf1f5 82710777e700 5485f509825d
	I0318 13:09:40.906786    2404 command_runner.go:130] > 693a64f7472f
	I0318 13:09:40.906843    2404 command_runner.go:130] > a2c499223090
	I0318 13:09:40.906843    2404 command_runner.go:130] > 265b39e386cf
	I0318 13:09:40.906843    2404 command_runner.go:130] > d001e299e996
	I0318 13:09:40.906843    2404 command_runner.go:130] > c4d7018ad23a
	I0318 13:09:40.906843    2404 command_runner.go:130] > 9335855aab63
	I0318 13:09:40.906843    2404 command_runner.go:130] > a47b1fb60692
	I0318 13:09:40.906843    2404 command_runner.go:130] > 60e9cd749c8f
	I0318 13:09:40.906843    2404 command_runner.go:130] > e4d42739ce0e
	I0318 13:09:40.906843    2404 command_runner.go:130] > 7aa5cf4ec378
	I0318 13:09:40.906843    2404 command_runner.go:130] > c51f768a2f64
	I0318 13:09:40.906962    2404 command_runner.go:130] > 56d1819beb10
	I0318 13:09:40.906962    2404 command_runner.go:130] > acffce2e7384
	I0318 13:09:40.906962    2404 command_runner.go:130] > 220884cbf1f5
	I0318 13:09:40.906962    2404 command_runner.go:130] > 82710777e700
	I0318 13:09:40.906962    2404 command_runner.go:130] > 5485f509825d
	I0318 13:09:40.918470    2404 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0318 13:09:40.954761    2404 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 13:09:40.969765    2404 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0318 13:09:40.969765    2404 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0318 13:09:40.969765    2404 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0318 13:09:40.970064    2404 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 13:09:40.970358    2404 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 13:09:40.970387    2404 kubeadm.go:156] found existing configuration files:
	
	I0318 13:09:40.982037    2404 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0318 13:09:40.996143    2404 command_runner.go:130] ! grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 13:09:40.996674    2404 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 13:09:41.008077    2404 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 13:09:41.036830    2404 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0318 13:09:41.052186    2404 command_runner.go:130] ! grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 13:09:41.052329    2404 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 13:09:41.063200    2404 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 13:09:41.090707    2404 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0318 13:09:41.104901    2404 command_runner.go:130] ! grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 13:09:41.105096    2404 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 13:09:41.119051    2404 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 13:09:41.145428    2404 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0318 13:09:41.159784    2404 command_runner.go:130] ! grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 13:09:41.160275    2404 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 13:09:41.173163    2404 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0318 13:09:41.199314    2404 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0318 13:09:41.216043    2404 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 13:09:41.587579    2404 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0318 13:09:41.587579    2404 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0318 13:09:41.587697    2404 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0318 13:09:41.587697    2404 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0318 13:09:41.587697    2404 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I0318 13:09:41.587697    2404 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I0318 13:09:41.587697    2404 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I0318 13:09:41.587697    2404 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I0318 13:09:41.587769    2404 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I0318 13:09:41.587802    2404 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0318 13:09:41.587802    2404 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0318 13:09:41.587830    2404 command_runner.go:130] > [certs] Using the existing "sa" key
	I0318 13:09:41.587864    2404 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 13:09:42.399989    2404 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0318 13:09:42.400066    2404 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0318 13:09:42.400066    2404 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0318 13:09:42.400066    2404 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0318 13:09:42.400066    2404 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0318 13:09:42.400162    2404 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0318 13:09:42.491937    2404 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0318 13:09:42.495743    2404 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0318 13:09:42.495872    2404 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0318 13:09:42.691068    2404 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 13:09:42.776989    2404 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0318 13:09:42.776989    2404 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0318 13:09:42.776989    2404 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0318 13:09:42.776989    2404 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0318 13:09:42.778174    2404 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0318 13:09:42.894438    2404 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0318 13:09:42.894505    2404 api_server.go:52] waiting for apiserver process to appear ...
	I0318 13:09:42.907141    2404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:09:43.407151    2404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:09:43.919456    2404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:09:44.410020    2404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:09:44.917562    2404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:09:44.940970    2404 command_runner.go:130] > 1904
	I0318 13:09:44.940970    2404 api_server.go:72] duration metric: took 2.0464505s to wait for apiserver process to appear ...
	I0318 13:09:44.940970    2404 api_server.go:88] waiting for apiserver healthz status ...
	I0318 13:09:44.940970    2404 api_server.go:253] Checking apiserver healthz at https://172.30.130.156:8443/healthz ...
	I0318 13:09:48.315114    2404 api_server.go:279] https://172.30.130.156:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0318 13:09:48.316138    2404 api_server.go:103] status: https://172.30.130.156:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0318 13:09:48.316138    2404 api_server.go:253] Checking apiserver healthz at https://172.30.130.156:8443/healthz ...
	I0318 13:09:48.379129    2404 api_server.go:279] https://172.30.130.156:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0318 13:09:48.379129    2404 api_server.go:103] status: https://172.30.130.156:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0318 13:09:48.441960    2404 api_server.go:253] Checking apiserver healthz at https://172.30.130.156:8443/healthz ...
	I0318 13:09:48.453249    2404 api_server.go:279] https://172.30.130.156:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0318 13:09:48.453481    2404 api_server.go:103] status: https://172.30.130.156:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0318 13:09:48.946025    2404 api_server.go:253] Checking apiserver healthz at https://172.30.130.156:8443/healthz ...
	I0318 13:09:48.961136    2404 api_server.go:279] https://172.30.130.156:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0318 13:09:48.961270    2404 api_server.go:103] status: https://172.30.130.156:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0318 13:09:49.452368    2404 api_server.go:253] Checking apiserver healthz at https://172.30.130.156:8443/healthz ...
	I0318 13:09:49.468058    2404 api_server.go:279] https://172.30.130.156:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0318 13:09:49.468058    2404 api_server.go:103] status: https://172.30.130.156:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0318 13:09:49.948754    2404 api_server.go:253] Checking apiserver healthz at https://172.30.130.156:8443/healthz ...
	I0318 13:09:49.957404    2404 api_server.go:279] https://172.30.130.156:8443/healthz returned 200:
	ok
	I0318 13:09:49.958538    2404 round_trippers.go:463] GET https://172.30.130.156:8443/version
	I0318 13:09:49.958538    2404 round_trippers.go:469] Request Headers:
	I0318 13:09:49.958538    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:09:49.958538    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:09:49.971073    2404 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0318 13:09:49.971538    2404 round_trippers.go:577] Response Headers:
	I0318 13:09:49.971538    2404 round_trippers.go:580]     Audit-Id: 909294db-d475-46ea-ac0b-105fe01fe502
	I0318 13:09:49.971538    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:09:49.971538    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:09:49.971538    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:09:49.971538    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:09:49.971538    2404 round_trippers.go:580]     Content-Length: 264
	I0318 13:09:49.971637    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:09:49 GMT
	I0318 13:09:49.971637    2404 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.4",
	  "gitCommit": "bae2c62678db2b5053817bc97181fcc2e8388103",
	  "gitTreeState": "clean",
	  "buildDate": "2023-11-15T16:48:54Z",
	  "goVersion": "go1.20.11",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0318 13:09:49.971810    2404 api_server.go:141] control plane version: v1.28.4
	I0318 13:09:49.971889    2404 api_server.go:131] duration metric: took 5.0308812s to wait for apiserver health ...
	I0318 13:09:49.971889    2404 cni.go:84] Creating CNI manager for ""
	I0318 13:09:49.971889    2404 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0318 13:09:49.974549    2404 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0318 13:09:49.987939    2404 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0318 13:09:49.998819    2404 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0318 13:09:49.998819    2404 command_runner.go:130] >   Size: 2694104   	Blocks: 5264       IO Block: 4096   regular file
	I0318 13:09:49.998819    2404 command_runner.go:130] > Device: 0,17	Inode: 3497        Links: 1
	I0318 13:09:49.999102    2404 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0318 13:09:49.999102    2404 command_runner.go:130] > Access: 2024-03-18 13:08:20.432721600 +0000
	I0318 13:09:49.999102    2404 command_runner.go:130] > Modify: 2024-03-15 22:00:10.000000000 +0000
	I0318 13:09:49.999102    2404 command_runner.go:130] > Change: 2024-03-18 13:08:11.982000000 +0000
	I0318 13:09:49.999175    2404 command_runner.go:130] >  Birth: -
	I0318 13:09:49.999805    2404 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0318 13:09:49.999838    2404 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0318 13:09:50.075127    2404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0318 13:09:51.626302    2404 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0318 13:09:51.626557    2404 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0318 13:09:51.626557    2404 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0318 13:09:51.626557    2404 command_runner.go:130] > daemonset.apps/kindnet configured
	I0318 13:09:51.626557    2404 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.5514184s)
	I0318 13:09:51.626669    2404 system_pods.go:43] waiting for kube-system pods to appear ...
	I0318 13:09:51.626821    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/namespaces/kube-system/pods
	I0318 13:09:51.626821    2404 round_trippers.go:469] Request Headers:
	I0318 13:09:51.626945    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:09:51.626945    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:09:51.636114    2404 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0318 13:09:51.636114    2404 round_trippers.go:577] Response Headers:
	I0318 13:09:51.636114    2404 round_trippers.go:580]     Audit-Id: 14560811-bdec-495b-b52d-00404611f8d9
	I0318 13:09:51.636114    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:09:51.636114    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:09:51.636114    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:09:51.636114    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:09:51.636114    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:09:51 GMT
	I0318 13:09:51.638378    2404 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1801"},"items":[{"metadata":{"name":"coredns-5dd5756b68-456tm","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"1a018c55-846b-4dc2-992c-dc8fd82a6c67","resourceVersion":"1770","creationTimestamp":"2024-03-18T12:47:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"24e9a67e-3fda-47e7-8dcb-794b1920d0f1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:47:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"24e9a67e-3fda-47e7-8dcb-794b1920d0f1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 83629 chars]
	I0318 13:09:51.644560    2404 system_pods.go:59] 12 kube-system pods found
	I0318 13:09:51.644560    2404 system_pods.go:61] "coredns-5dd5756b68-456tm" [1a018c55-846b-4dc2-992c-dc8fd82a6c67] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0318 13:09:51.644560    2404 system_pods.go:61] "etcd-multinode-894400" [d4c040b9-a604-4a0d-80ee-7436541af60c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0318 13:09:51.644560    2404 system_pods.go:61] "kindnet-hhsxh" [0161d239-2d85-4246-b2fa-6c7374f2ecd6] Running
	I0318 13:09:51.644560    2404 system_pods.go:61] "kindnet-k5lpg" [c5e4099b-0611-4ebd-a7a5-ecdbeb168c5b] Running
	I0318 13:09:51.644560    2404 system_pods.go:61] "kindnet-zv9tv" [c4d70517-d7fb-4344-b2a4-20e40c13ab53] Running
	I0318 13:09:51.644560    2404 system_pods.go:61] "kube-apiserver-multinode-894400" [46152b8e-0bda-427e-a1ad-c79506b56763] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0318 13:09:51.644560    2404 system_pods.go:61] "kube-controller-manager-multinode-894400" [4ad5fc15-53ba-4ebb-9a63-b8572cd9c834] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0318 13:09:51.644560    2404 system_pods.go:61] "kube-proxy-745w9" [d385fe06-f516-440d-b9ed-37c2d4a81050] Running
	I0318 13:09:51.644560    2404 system_pods.go:61] "kube-proxy-8bdmn" [5c266b8a-9665-4365-93c6-2b5f1699d3ef] Running
	I0318 13:09:51.644560    2404 system_pods.go:61] "kube-proxy-mc5tv" [0afe25f8-cbd6-412b-8698-7b547d1d49ca] Running
	I0318 13:09:51.644560    2404 system_pods.go:61] "kube-scheduler-multinode-894400" [f47703ce-5a82-466e-ac8e-ef6b8cc07e6c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0318 13:09:51.644560    2404 system_pods.go:61] "storage-provisioner" [219bafbc-d807-44cf-9927-e4957f36ad70] Running
	I0318 13:09:51.645164    2404 system_pods.go:74] duration metric: took 18.4401ms to wait for pod list to return data ...
	I0318 13:09:51.645206    2404 node_conditions.go:102] verifying NodePressure condition ...
	I0318 13:09:51.645324    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes
	I0318 13:09:51.645324    2404 round_trippers.go:469] Request Headers:
	I0318 13:09:51.645324    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:09:51.645324    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:09:51.651156    2404 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 13:09:51.651156    2404 round_trippers.go:577] Response Headers:
	I0318 13:09:51.651156    2404 round_trippers.go:580]     Audit-Id: 3720ddc5-c5a7-4693-b0c4-4b7816c55ad5
	I0318 13:09:51.651156    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:09:51.651156    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:09:51.651156    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:09:51.651156    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:09:51.651156    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:09:51 GMT
	I0318 13:09:51.651886    2404 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1801"},"items":[{"metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1721","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFi
elds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 15629 chars]
	I0318 13:09:51.653418    2404 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0318 13:09:51.653418    2404 node_conditions.go:123] node cpu capacity is 2
	I0318 13:09:51.653418    2404 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0318 13:09:51.653418    2404 node_conditions.go:123] node cpu capacity is 2
	I0318 13:09:51.653418    2404 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0318 13:09:51.653418    2404 node_conditions.go:123] node cpu capacity is 2
	I0318 13:09:51.653418    2404 node_conditions.go:105] duration metric: took 8.2126ms to run NodePressure ...
	I0318 13:09:51.653418    2404 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 13:09:51.953031    2404 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0318 13:09:51.953084    2404 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0318 13:09:51.953084    2404 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0318 13:09:51.953351    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/namespaces/kube-system/pods?labelSelector=tier%3Dcontrol-plane
	I0318 13:09:51.953351    2404 round_trippers.go:469] Request Headers:
	I0318 13:09:51.953385    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:09:51.953385    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:09:51.962942    2404 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0318 13:09:51.962942    2404 round_trippers.go:577] Response Headers:
	I0318 13:09:51.963346    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:09:51.963346    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:09:51 GMT
	I0318 13:09:51.963346    2404 round_trippers.go:580]     Audit-Id: 10c97923-7e05-4a16-ad4d-a9fd9e82f478
	I0318 13:09:51.963346    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:09:51.963346    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:09:51.963346    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:09:51.964081    2404 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1803"},"items":[{"metadata":{"name":"etcd-multinode-894400","namespace":"kube-system","uid":"d4c040b9-a604-4a0d-80ee-7436541af60c","resourceVersion":"1778","creationTimestamp":"2024-03-18T13:09:49Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.30.130.156:2379","kubernetes.io/config.hash":"743a549b698f93b8586a236f83c90556","kubernetes.io/config.mirror":"743a549b698f93b8586a236f83c90556","kubernetes.io/config.seen":"2024-03-18T13:09:42.924670260Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T13:09:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotati
ons":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f [truncated 29377 chars]
	I0318 13:09:51.965066    2404 kubeadm.go:733] kubelet initialised
	I0318 13:09:51.965066    2404 kubeadm.go:734] duration metric: took 11.9821ms waiting for restarted kubelet to initialise ...
	I0318 13:09:51.965066    2404 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 13:09:51.965636    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/namespaces/kube-system/pods
	I0318 13:09:51.965636    2404 round_trippers.go:469] Request Headers:
	I0318 13:09:51.965636    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:09:51.965636    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:09:51.979595    2404 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0318 13:09:51.979595    2404 round_trippers.go:577] Response Headers:
	I0318 13:09:51.979595    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:09:51.980105    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:09:51.980105    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:09:51.980105    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:09:51 GMT
	I0318 13:09:51.980105    2404 round_trippers.go:580]     Audit-Id: b5b4f7cb-3aff-429d-938f-c784e7c38705
	I0318 13:09:51.980105    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:09:51.982766    2404 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1803"},"items":[{"metadata":{"name":"coredns-5dd5756b68-456tm","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"1a018c55-846b-4dc2-992c-dc8fd82a6c67","resourceVersion":"1770","creationTimestamp":"2024-03-18T12:47:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"24e9a67e-3fda-47e7-8dcb-794b1920d0f1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:47:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"24e9a67e-3fda-47e7-8dcb-794b1920d0f1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 83629 chars]
	I0318 13:09:51.986090    2404 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-456tm" in "kube-system" namespace to be "Ready" ...
	I0318 13:09:51.986261    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-456tm
	I0318 13:09:51.986357    2404 round_trippers.go:469] Request Headers:
	I0318 13:09:51.986357    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:09:51.986357    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:09:51.989285    2404 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 13:09:51.989285    2404 round_trippers.go:577] Response Headers:
	I0318 13:09:51.989285    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:09:51.989285    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:09:51 GMT
	I0318 13:09:51.989500    2404 round_trippers.go:580]     Audit-Id: 36e7a0ea-ab36-41bd-b5e4-40ebd0d17c9d
	I0318 13:09:51.989500    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:09:51.989500    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:09:51.989500    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:09:51.989569    2404 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-456tm","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"1a018c55-846b-4dc2-992c-dc8fd82a6c67","resourceVersion":"1770","creationTimestamp":"2024-03-18T12:47:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"24e9a67e-3fda-47e7-8dcb-794b1920d0f1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:47:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"24e9a67e-3fda-47e7-8dcb-794b1920d0f1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 13:09:51.990252    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:09:51.990294    2404 round_trippers.go:469] Request Headers:
	I0318 13:09:51.990294    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:09:51.990294    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:09:51.995108    2404 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:09:51.995108    2404 round_trippers.go:577] Response Headers:
	I0318 13:09:51.995108    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:09:51.995108    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:09:51.995108    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:09:51 GMT
	I0318 13:09:51.995108    2404 round_trippers.go:580]     Audit-Id: fedfcf81-2f68-44d4-a382-43a607a15370
	I0318 13:09:51.995108    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:09:51.995108    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:09:51.995108    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1721","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0318 13:09:51.995781    2404 pod_ready.go:97] node "multinode-894400" hosting pod "coredns-5dd5756b68-456tm" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-894400" has status "Ready":"False"
	I0318 13:09:51.995781    2404 pod_ready.go:81] duration metric: took 9.627ms for pod "coredns-5dd5756b68-456tm" in "kube-system" namespace to be "Ready" ...
	E0318 13:09:51.995781    2404 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-894400" hosting pod "coredns-5dd5756b68-456tm" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-894400" has status "Ready":"False"
	I0318 13:09:51.995781    2404 pod_ready.go:78] waiting up to 4m0s for pod "etcd-multinode-894400" in "kube-system" namespace to be "Ready" ...
	I0318 13:09:51.995781    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-894400
	I0318 13:09:51.995781    2404 round_trippers.go:469] Request Headers:
	I0318 13:09:51.995781    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:09:51.995781    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:09:51.999185    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:09:51.999185    2404 round_trippers.go:577] Response Headers:
	I0318 13:09:51.999185    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:09:51.999185    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:09:51.999185    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:09:52 GMT
	I0318 13:09:51.999185    2404 round_trippers.go:580]     Audit-Id: 60f45dc3-1276-4380-bfec-33039f4ee137
	I0318 13:09:51.999185    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:09:51.999185    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:09:51.999185    2404 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-894400","namespace":"kube-system","uid":"d4c040b9-a604-4a0d-80ee-7436541af60c","resourceVersion":"1778","creationTimestamp":"2024-03-18T13:09:49Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.30.130.156:2379","kubernetes.io/config.hash":"743a549b698f93b8586a236f83c90556","kubernetes.io/config.mirror":"743a549b698f93b8586a236f83c90556","kubernetes.io/config.seen":"2024-03-18T13:09:42.924670260Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T13:09:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise
-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 6097 chars]
	I0318 13:09:52.000209    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:09:52.000280    2404 round_trippers.go:469] Request Headers:
	I0318 13:09:52.000280    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:09:52.000280    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:09:52.002530    2404 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 13:09:52.002530    2404 round_trippers.go:577] Response Headers:
	I0318 13:09:52.002530    2404 round_trippers.go:580]     Audit-Id: 169a4677-aa49-446f-be23-83eb4a404cb3
	I0318 13:09:52.002530    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:09:52.002530    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:09:52.002530    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:09:52.002530    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:09:52.002530    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:09:52 GMT
	I0318 13:09:52.002530    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1721","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0318 13:09:52.003544    2404 pod_ready.go:97] node "multinode-894400" hosting pod "etcd-multinode-894400" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-894400" has status "Ready":"False"
	I0318 13:09:52.003579    2404 pod_ready.go:81] duration metric: took 7.798ms for pod "etcd-multinode-894400" in "kube-system" namespace to be "Ready" ...
	E0318 13:09:52.003579    2404 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-894400" hosting pod "etcd-multinode-894400" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-894400" has status "Ready":"False"
	I0318 13:09:52.003579    2404 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-multinode-894400" in "kube-system" namespace to be "Ready" ...
	I0318 13:09:52.003579    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-894400
	I0318 13:09:52.003579    2404 round_trippers.go:469] Request Headers:
	I0318 13:09:52.003579    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:09:52.003579    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:09:52.006211    2404 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 13:09:52.006211    2404 round_trippers.go:577] Response Headers:
	I0318 13:09:52.006211    2404 round_trippers.go:580]     Audit-Id: efc7c5ce-bcac-4590-a713-06f4c48aeb81
	I0318 13:09:52.006211    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:09:52.006211    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:09:52.006211    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:09:52.006533    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:09:52.006533    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:09:52 GMT
	I0318 13:09:52.006661    2404 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-894400","namespace":"kube-system","uid":"46152b8e-0bda-427e-a1ad-c79506b56763","resourceVersion":"1775","creationTimestamp":"2024-03-18T13:09:49Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.30.130.156:8443","kubernetes.io/config.hash":"6096c2227c4230453f65f86ebdcd0d95","kubernetes.io/config.mirror":"6096c2227c4230453f65f86ebdcd0d95","kubernetes.io/config.seen":"2024-03-18T13:09:42.869643374Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T13:09:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.k
ubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernete [truncated 7653 chars]
	I0318 13:09:52.007342    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:09:52.007342    2404 round_trippers.go:469] Request Headers:
	I0318 13:09:52.007342    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:09:52.007342    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:09:52.009955    2404 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 13:09:52.009955    2404 round_trippers.go:577] Response Headers:
	I0318 13:09:52.009955    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:09:52 GMT
	I0318 13:09:52.009955    2404 round_trippers.go:580]     Audit-Id: 3021a660-98d6-406f-8abd-d699fbe437e8
	I0318 13:09:52.009955    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:09:52.010287    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:09:52.010324    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:09:52.010324    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:09:52.010554    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1721","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0318 13:09:52.011401    2404 pod_ready.go:97] node "multinode-894400" hosting pod "kube-apiserver-multinode-894400" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-894400" has status "Ready":"False"
	I0318 13:09:52.011401    2404 pod_ready.go:81] duration metric: took 7.8216ms for pod "kube-apiserver-multinode-894400" in "kube-system" namespace to be "Ready" ...
	E0318 13:09:52.011463    2404 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-894400" hosting pod "kube-apiserver-multinode-894400" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-894400" has status "Ready":"False"
	I0318 13:09:52.011463    2404 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-multinode-894400" in "kube-system" namespace to be "Ready" ...
	I0318 13:09:52.011559    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-894400
	I0318 13:09:52.011626    2404 round_trippers.go:469] Request Headers:
	I0318 13:09:52.011626    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:09:52.011626    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:09:52.014886    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:09:52.014886    2404 round_trippers.go:577] Response Headers:
	I0318 13:09:52.015186    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:09:52 GMT
	I0318 13:09:52.015186    2404 round_trippers.go:580]     Audit-Id: b0323efd-79ee-41e6-93b8-0f6fd8d7e8ce
	I0318 13:09:52.015186    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:09:52.015186    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:09:52.015186    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:09:52.015186    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:09:52.015565    2404 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-894400","namespace":"kube-system","uid":"4ad5fc15-53ba-4ebb-9a63-b8572cd9c834","resourceVersion":"1772","creationTimestamp":"2024-03-18T12:47:26Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"d340aced56ba169ecac1e3ac58ad57fe","kubernetes.io/config.mirror":"d340aced56ba169ecac1e3ac58ad57fe","kubernetes.io/config.seen":"2024-03-18T12:47:20.228444892Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:47:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7441 chars]
	I0318 13:09:52.034079    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:09:52.034079    2404 round_trippers.go:469] Request Headers:
	I0318 13:09:52.034079    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:09:52.034079    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:09:52.036486    2404 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 13:09:52.036486    2404 round_trippers.go:577] Response Headers:
	I0318 13:09:52.036968    2404 round_trippers.go:580]     Audit-Id: 1439daca-15de-475c-8bd7-da33d464d264
	I0318 13:09:52.036968    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:09:52.036968    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:09:52.036968    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:09:52.036968    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:09:52.036968    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:09:52 GMT
	I0318 13:09:52.037264    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1721","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0318 13:09:52.037639    2404 pod_ready.go:97] node "multinode-894400" hosting pod "kube-controller-manager-multinode-894400" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-894400" has status "Ready":"False"
	I0318 13:09:52.037639    2404 pod_ready.go:81] duration metric: took 26.1472ms for pod "kube-controller-manager-multinode-894400" in "kube-system" namespace to be "Ready" ...
	E0318 13:09:52.037726    2404 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-894400" hosting pod "kube-controller-manager-multinode-894400" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-894400" has status "Ready":"False"
	I0318 13:09:52.037726    2404 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-745w9" in "kube-system" namespace to be "Ready" ...
	I0318 13:09:52.237501    2404 request.go:629] Waited for 199.3292ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.130.156:8443/api/v1/namespaces/kube-system/pods/kube-proxy-745w9
	I0318 13:09:52.237501    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/namespaces/kube-system/pods/kube-proxy-745w9
	I0318 13:09:52.237501    2404 round_trippers.go:469] Request Headers:
	I0318 13:09:52.237501    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:09:52.237501    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:09:52.243333    2404 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 13:09:52.244019    2404 round_trippers.go:577] Response Headers:
	I0318 13:09:52.244019    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:09:52.244019    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:09:52.244019    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:09:52 GMT
	I0318 13:09:52.244019    2404 round_trippers.go:580]     Audit-Id: d0c5eefd-e08b-4fc4-8350-84f6943c6e05
	I0318 13:09:52.244019    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:09:52.244019    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:09:52.244263    2404 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-745w9","generateName":"kube-proxy-","namespace":"kube-system","uid":"d385fe06-f516-440d-b9ed-37c2d4a81050","resourceVersion":"1698","creationTimestamp":"2024-03-18T12:55:05Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"da714af0-88aa-4fc4-b73d-4de837f06114","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:55:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"da714af0-88aa-4fc4-b73d-4de837f06114\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5771 chars]
	I0318 13:09:52.442059    2404 request.go:629] Waited for 196.9732ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.130.156:8443/api/v1/nodes/multinode-894400-m03
	I0318 13:09:52.442318    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400-m03
	I0318 13:09:52.442318    2404 round_trippers.go:469] Request Headers:
	I0318 13:09:52.442318    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:09:52.442318    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:09:52.446437    2404 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:09:52.446589    2404 round_trippers.go:577] Response Headers:
	I0318 13:09:52.446589    2404 round_trippers.go:580]     Audit-Id: 2f288a64-d22d-4034-990c-5ba48f96f3ff
	I0318 13:09:52.446589    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:09:52.446589    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:09:52.446589    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:09:52.446589    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:09:52.446589    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:09:52 GMT
	I0318 13:09:52.446589    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m03","uid":"1f8e594e-d4cc-4247-8064-01ac67ea2b15","resourceVersion":"1707","creationTimestamp":"2024-03-18T13:05:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T13_05_26_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T13:05:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4400 chars]
	I0318 13:09:52.447429    2404 pod_ready.go:97] node "multinode-894400-m03" hosting pod "kube-proxy-745w9" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-894400-m03" has status "Ready":"Unknown"
	I0318 13:09:52.447481    2404 pod_ready.go:81] duration metric: took 409.752ms for pod "kube-proxy-745w9" in "kube-system" namespace to be "Ready" ...
	E0318 13:09:52.447550    2404 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-894400-m03" hosting pod "kube-proxy-745w9" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-894400-m03" has status "Ready":"Unknown"
	I0318 13:09:52.447550    2404 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-8bdmn" in "kube-system" namespace to be "Ready" ...
	I0318 13:09:52.626849    2404 request.go:629] Waited for 179.2984ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.130.156:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8bdmn
	I0318 13:09:52.627152    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8bdmn
	I0318 13:09:52.627152    2404 round_trippers.go:469] Request Headers:
	I0318 13:09:52.627508    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:09:52.627508    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:09:52.630294    2404 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 13:09:52.631130    2404 round_trippers.go:577] Response Headers:
	I0318 13:09:52.631130    2404 round_trippers.go:580]     Audit-Id: 212cc27c-2521-4ddf-aeef-4b8764d88083
	I0318 13:09:52.631130    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:09:52.631130    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:09:52.631130    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:09:52.631130    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:09:52.631130    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:09:52 GMT
	I0318 13:09:52.631201    2404 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-8bdmn","generateName":"kube-proxy-","namespace":"kube-system","uid":"5c266b8a-9665-4365-93c6-2b5f1699d3ef","resourceVersion":"616","creationTimestamp":"2024-03-18T12:50:34Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"da714af0-88aa-4fc4-b73d-4de837f06114","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:50:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"da714af0-88aa-4fc4-b73d-4de837f06114\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5541 chars]
	I0318 13:09:52.831471    2404 request.go:629] Waited for 199.1963ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.130.156:8443/api/v1/nodes/multinode-894400-m02
	I0318 13:09:52.831678    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400-m02
	I0318 13:09:52.831874    2404 round_trippers.go:469] Request Headers:
	I0318 13:09:52.831933    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:09:52.832012    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:09:52.835679    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:09:52.835679    2404 round_trippers.go:577] Response Headers:
	I0318 13:09:52.835679    2404 round_trippers.go:580]     Audit-Id: 461cb120-a61d-4461-ac50-b06be9a427b6
	I0318 13:09:52.835679    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:09:52.835679    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:09:52.835679    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:09:52.835679    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:09:52.835679    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:09:52 GMT
	I0318 13:09:52.835679    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"714c6de4-85c9-4721-aea6-ad8f2be4e14c","resourceVersion":"1345","creationTimestamp":"2024-03-18T12:50:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T12_50_35_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:50:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3826 chars]
	I0318 13:09:52.835679    2404 pod_ready.go:92] pod "kube-proxy-8bdmn" in "kube-system" namespace has status "Ready":"True"
	I0318 13:09:52.835679    2404 pod_ready.go:81] duration metric: took 388.1265ms for pod "kube-proxy-8bdmn" in "kube-system" namespace to be "Ready" ...
	I0318 13:09:52.835679    2404 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-mc5tv" in "kube-system" namespace to be "Ready" ...
	I0318 13:09:53.035775    2404 request.go:629] Waited for 199.9712ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.130.156:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mc5tv
	I0318 13:09:53.035829    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mc5tv
	I0318 13:09:53.035829    2404 round_trippers.go:469] Request Headers:
	I0318 13:09:53.035829    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:09:53.035829    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:09:53.038462    2404 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 13:09:53.039188    2404 round_trippers.go:577] Response Headers:
	I0318 13:09:53.039188    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:09:53.039188    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:09:53.039188    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:09:53.039188    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:09:53.039188    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:09:53 GMT
	I0318 13:09:53.039188    2404 round_trippers.go:580]     Audit-Id: 3ca6caa1-0df5-409b-ad7c-0bd40aeac4c3
	I0318 13:09:53.039535    2404 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-mc5tv","generateName":"kube-proxy-","namespace":"kube-system","uid":"0afe25f8-cbd6-412b-8698-7b547d1d49ca","resourceVersion":"1799","creationTimestamp":"2024-03-18T12:47:41Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"da714af0-88aa-4fc4-b73d-4de837f06114","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:47:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"da714af0-88aa-4fc4-b73d-4de837f06114\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5743 chars]
	I0318 13:09:53.242411    2404 request.go:629] Waited for 202.0444ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:09:53.242523    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:09:53.242523    2404 round_trippers.go:469] Request Headers:
	I0318 13:09:53.242587    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:09:53.242587    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:09:53.246360    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:09:53.246360    2404 round_trippers.go:577] Response Headers:
	I0318 13:09:53.246360    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:09:53.246360    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:09:53.246360    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:09:53.246360    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:09:53 GMT
	I0318 13:09:53.246963    2404 round_trippers.go:580]     Audit-Id: 05b6cf01-5d1b-4731-89b2-4d5b223cc296
	I0318 13:09:53.246963    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:09:53.246963    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1721","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0318 13:09:53.247605    2404 pod_ready.go:97] node "multinode-894400" hosting pod "kube-proxy-mc5tv" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-894400" has status "Ready":"False"
	I0318 13:09:53.247702    2404 pod_ready.go:81] duration metric: took 412.0205ms for pod "kube-proxy-mc5tv" in "kube-system" namespace to be "Ready" ...
	E0318 13:09:53.247759    2404 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-894400" hosting pod "kube-proxy-mc5tv" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-894400" has status "Ready":"False"
	I0318 13:09:53.247759    2404 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-multinode-894400" in "kube-system" namespace to be "Ready" ...
	I0318 13:09:53.429789    2404 request.go:629] Waited for 181.7398ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.130.156:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-894400
	I0318 13:09:53.429968    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-894400
	I0318 13:09:53.429968    2404 round_trippers.go:469] Request Headers:
	I0318 13:09:53.429968    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:09:53.429968    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:09:53.433783    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:09:53.433783    2404 round_trippers.go:577] Response Headers:
	I0318 13:09:53.433783    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:09:53.434638    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:09:53 GMT
	I0318 13:09:53.434638    2404 round_trippers.go:580]     Audit-Id: 4a82783f-c202-4a24-ad98-cdf12d30e15d
	I0318 13:09:53.434638    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:09:53.434638    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:09:53.434638    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:09:53.434778    2404 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-894400","namespace":"kube-system","uid":"f47703ce-5a82-466e-ac8e-ef6b8cc07e6c","resourceVersion":"1762","creationTimestamp":"2024-03-18T12:47:28Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"1c745e9b917877b1ff3c90ed02e9a79a","kubernetes.io/config.mirror":"1c745e9b917877b1ff3c90ed02e9a79a","kubernetes.io/config.seen":"2024-03-18T12:47:28.428225123Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:47:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 5153 chars]
	I0318 13:09:53.632095    2404 request.go:629] Waited for 196.9063ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:09:53.632095    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:09:53.632095    2404 round_trippers.go:469] Request Headers:
	I0318 13:09:53.632095    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:09:53.632095    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:09:53.635683    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:09:53.635683    2404 round_trippers.go:577] Response Headers:
	I0318 13:09:53.636014    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:09:53.636014    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:09:53.636014    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:09:53.636014    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:09:53.636014    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:09:53 GMT
	I0318 13:09:53.636014    2404 round_trippers.go:580]     Audit-Id: 8c780d1b-63c9-4028-86ac-c68659dde07d
	I0318 13:09:53.636695    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1721","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0318 13:09:53.637093    2404 pod_ready.go:97] node "multinode-894400" hosting pod "kube-scheduler-multinode-894400" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-894400" has status "Ready":"False"
	I0318 13:09:53.637093    2404 pod_ready.go:81] duration metric: took 389.2836ms for pod "kube-scheduler-multinode-894400" in "kube-system" namespace to be "Ready" ...
	E0318 13:09:53.637093    2404 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-894400" hosting pod "kube-scheduler-multinode-894400" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-894400" has status "Ready":"False"
	I0318 13:09:53.637093    2404 pod_ready.go:38] duration metric: took 1.6720145s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 13:09:53.637093    2404 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0318 13:09:53.653850    2404 command_runner.go:130] > -16
	I0318 13:09:53.654305    2404 ops.go:34] apiserver oom_adj: -16
	I0318 13:09:53.654305    2404 kubeadm.go:591] duration metric: took 12.9008183s to restartPrimaryControlPlane
	I0318 13:09:53.654361    2404 kubeadm.go:393] duration metric: took 12.9598071s to StartCluster
	I0318 13:09:53.654361    2404 settings.go:142] acquiring lock: {Name:mke99fb8c09012609ce6804e7dfd4d68f5541df7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 13:09:53.654652    2404 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	I0318 13:09:53.656267    2404 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\kubeconfig: {Name:mk966a7640504e03827322930a51a762b5508893 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 13:09:53.658182    2404 start.go:234] Will wait 6m0s for node &{Name: IP:172.30.130.156 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 13:09:53.662337    2404 out.go:177] * Verifying Kubernetes components...
	I0318 13:09:53.658182    2404 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0318 13:09:53.658767    2404 config.go:182] Loaded profile config "multinode-894400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 13:09:53.668540    2404 out.go:177] * Enabled addons: 
	I0318 13:09:53.670987    2404 addons.go:505] duration metric: took 12.2709ms for enable addons: enabled=[]
	I0318 13:09:53.675869    2404 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 13:09:53.917393    2404 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 13:09:53.944942    2404 node_ready.go:35] waiting up to 6m0s for node "multinode-894400" to be "Ready" ...
	I0318 13:09:53.945183    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:09:53.945183    2404 round_trippers.go:469] Request Headers:
	I0318 13:09:53.945183    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:09:53.945183    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:09:53.948355    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:09:53.949263    2404 round_trippers.go:577] Response Headers:
	I0318 13:09:53.949263    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:09:53.949263    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:09:53.949263    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:09:53.949263    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:09:53.949263    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:09:53 GMT
	I0318 13:09:53.949263    2404 round_trippers.go:580]     Audit-Id: 730ea6ed-2dd9-4904-97c5-43257ad5bf32
	I0318 13:09:53.949576    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1721","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0318 13:09:54.459279    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:09:54.459279    2404 round_trippers.go:469] Request Headers:
	I0318 13:09:54.459279    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:09:54.459279    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:09:54.462970    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:09:54.462970    2404 round_trippers.go:577] Response Headers:
	I0318 13:09:54.462970    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:09:54.463433    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:09:54 GMT
	I0318 13:09:54.463433    2404 round_trippers.go:580]     Audit-Id: e4c5d98c-7113-44ae-9b32-b652fddb7cdd
	I0318 13:09:54.463433    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:09:54.463433    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:09:54.463433    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:09:54.463817    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1721","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0318 13:09:54.957589    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:09:54.957675    2404 round_trippers.go:469] Request Headers:
	I0318 13:09:54.957675    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:09:54.957675    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:09:54.961749    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:09:54.961840    2404 round_trippers.go:577] Response Headers:
	I0318 13:09:54.961840    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:09:54.961840    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:09:54 GMT
	I0318 13:09:54.961840    2404 round_trippers.go:580]     Audit-Id: dd96c3c8-8018-405c-9213-cd33d6cfa45f
	I0318 13:09:54.961840    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:09:54.961840    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:09:54.961840    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:09:54.962365    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1721","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0318 13:09:55.448533    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:09:55.448533    2404 round_trippers.go:469] Request Headers:
	I0318 13:09:55.448533    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:09:55.448533    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:09:55.452957    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:09:55.452957    2404 round_trippers.go:577] Response Headers:
	I0318 13:09:55.452957    2404 round_trippers.go:580]     Audit-Id: 41faac3d-1074-482f-a0aa-61c5518b32e2
	I0318 13:09:55.452957    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:09:55.453052    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:09:55.453052    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:09:55.453052    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:09:55.453052    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:09:55 GMT
	I0318 13:09:55.453363    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1721","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0318 13:09:55.948537    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:09:55.948537    2404 round_trippers.go:469] Request Headers:
	I0318 13:09:55.948537    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:09:55.948537    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:09:55.952848    2404 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:09:55.952848    2404 round_trippers.go:577] Response Headers:
	I0318 13:09:55.952848    2404 round_trippers.go:580]     Audit-Id: cfee6150-59e1-468e-8834-4cda3fecae10
	I0318 13:09:55.952848    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:09:55.952848    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:09:55.952848    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:09:55.952848    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:09:55.952848    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:09:55 GMT
	I0318 13:09:55.952848    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1721","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0318 13:09:55.953693    2404 node_ready.go:53] node "multinode-894400" has status "Ready":"False"
	I0318 13:09:56.449067    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:09:56.449067    2404 round_trippers.go:469] Request Headers:
	I0318 13:09:56.449067    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:09:56.449067    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:09:56.454121    2404 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 13:09:56.454369    2404 round_trippers.go:577] Response Headers:
	I0318 13:09:56.454369    2404 round_trippers.go:580]     Audit-Id: 36a881ef-1c40-4d41-a192-f78c8e7a744f
	I0318 13:09:56.454369    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:09:56.454369    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:09:56.454369    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:09:56.454369    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:09:56.454369    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:09:56 GMT
	I0318 13:09:56.454733    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1721","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0318 13:09:56.946172    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:09:56.946224    2404 round_trippers.go:469] Request Headers:
	I0318 13:09:56.946224    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:09:56.946224    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:09:56.949556    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:09:56.949556    2404 round_trippers.go:577] Response Headers:
	I0318 13:09:56.949556    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:09:56.949556    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:09:56.949556    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:09:56.949556    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:09:56 GMT
	I0318 13:09:56.949556    2404 round_trippers.go:580]     Audit-Id: 0f94ae2c-c9f6-4e3f-b64a-883588570d78
	I0318 13:09:56.949556    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:09:56.949556    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1721","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0318 13:09:57.445948    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:09:57.446019    2404 round_trippers.go:469] Request Headers:
	I0318 13:09:57.446019    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:09:57.446019    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:09:57.448877    2404 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 13:09:57.449760    2404 round_trippers.go:577] Response Headers:
	I0318 13:09:57.449760    2404 round_trippers.go:580]     Audit-Id: cf5d93c7-048b-4248-8629-0f2c0a325eb2
	I0318 13:09:57.449760    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:09:57.449760    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:09:57.449760    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:09:57.449760    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:09:57.449760    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:09:57 GMT
	I0318 13:09:57.450155    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1721","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0318 13:09:57.960312    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:09:57.960371    2404 round_trippers.go:469] Request Headers:
	I0318 13:09:57.960470    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:09:57.960470    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:09:57.965245    2404 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:09:57.965348    2404 round_trippers.go:577] Response Headers:
	I0318 13:09:57.965348    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:09:57 GMT
	I0318 13:09:57.965348    2404 round_trippers.go:580]     Audit-Id: e53e6ff0-47a4-479e-afe8-fcefb4b6f1bb
	I0318 13:09:57.965403    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:09:57.965426    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:09:57.965426    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:09:57.965426    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:09:57.965453    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1721","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0318 13:09:57.966166    2404 node_ready.go:53] node "multinode-894400" has status "Ready":"False"
	I0318 13:09:58.456057    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:09:58.456301    2404 round_trippers.go:469] Request Headers:
	I0318 13:09:58.456301    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:09:58.456301    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:09:58.459666    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:09:58.460698    2404 round_trippers.go:577] Response Headers:
	I0318 13:09:58.460698    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:09:58.460698    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:09:58.460698    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:09:58 GMT
	I0318 13:09:58.460698    2404 round_trippers.go:580]     Audit-Id: 9e05a9ca-c19f-4c98-84cb-22ed5d045845
	I0318 13:09:58.460698    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:09:58.460765    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:09:58.461005    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1721","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0318 13:09:58.956036    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:09:58.956036    2404 round_trippers.go:469] Request Headers:
	I0318 13:09:58.956135    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:09:58.956135    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:09:58.960461    2404 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:09:58.960461    2404 round_trippers.go:577] Response Headers:
	I0318 13:09:58.960461    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:09:58.960461    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:09:58.960461    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:09:58.960461    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:09:58 GMT
	I0318 13:09:58.960461    2404 round_trippers.go:580]     Audit-Id: 278d09c1-e53b-4c49-993b-4d51df97fe40
	I0318 13:09:58.961032    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:09:58.961111    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1721","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0318 13:09:59.457416    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:09:59.457487    2404 round_trippers.go:469] Request Headers:
	I0318 13:09:59.457555    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:09:59.457555    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:09:59.461185    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:09:59.461678    2404 round_trippers.go:577] Response Headers:
	I0318 13:09:59.461678    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:09:59.461678    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:09:59.461678    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:09:59.461678    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:09:59 GMT
	I0318 13:09:59.461678    2404 round_trippers.go:580]     Audit-Id: 6daa3fe1-eb47-45ae-acb9-10389e2a34b3
	I0318 13:09:59.461678    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:09:59.461844    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1721","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0318 13:09:59.956899    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:09:59.956899    2404 round_trippers.go:469] Request Headers:
	I0318 13:09:59.956899    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:09:59.956899    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:09:59.960722    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:09:59.960722    2404 round_trippers.go:577] Response Headers:
	I0318 13:09:59.960722    2404 round_trippers.go:580]     Audit-Id: 422e2362-f72c-4660-b021-2eaa3a47f678
	I0318 13:09:59.960722    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:09:59.960722    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:09:59.960722    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:09:59.960722    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:09:59.960722    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:09:59 GMT
	I0318 13:09:59.961013    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1721","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0318 13:10:00.459419    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:00.459419    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:00.459419    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:00.459419    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:00.466744    2404 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0318 13:10:00.466744    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:00.466951    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:00 GMT
	I0318 13:10:00.466951    2404 round_trippers.go:580]     Audit-Id: 625bb4a4-36de-4f6d-ba2c-bf51940a57ee
	I0318 13:10:00.466951    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:00.466951    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:00.466951    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:00.466951    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:00.467048    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1721","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0318 13:10:00.467931    2404 node_ready.go:53] node "multinode-894400" has status "Ready":"False"
	I0318 13:10:00.947111    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:00.947111    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:00.947111    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:00.947111    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:00.949726    2404 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 13:10:00.949726    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:00.949726    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:00.949726    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:00.949726    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:00.949726    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:00 GMT
	I0318 13:10:00.950735    2404 round_trippers.go:580]     Audit-Id: 5ae0bb24-dcf0-4349-833a-aee964afb79b
	I0318 13:10:00.950735    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:00.950879    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1721","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0318 13:10:01.449177    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:01.449177    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:01.449177    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:01.449177    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:01.461940    2404 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0318 13:10:01.461940    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:01.461940    2404 round_trippers.go:580]     Audit-Id: 8eec4909-b23a-45ed-b783-5a17189584c0
	I0318 13:10:01.461940    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:01.461940    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:01.461940    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:01.461940    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:01.461940    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:01 GMT
	I0318 13:10:01.465449    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1721","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0318 13:10:01.954769    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:01.954769    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:01.954769    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:01.954769    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:01.958728    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:10:01.958728    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:01.958728    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:01.958728    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:01.958728    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:01.958728    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:01 GMT
	I0318 13:10:01.959034    2404 round_trippers.go:580]     Audit-Id: fbd9a1c1-43ac-483a-86b7-88f59894dc81
	I0318 13:10:01.959235    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:01.959406    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1834","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0318 13:10:02.455228    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:02.455228    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:02.455228    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:02.455228    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:02.459004    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:10:02.459326    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:02.459326    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:02.459326    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:02.459326    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:02.459326    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:02 GMT
	I0318 13:10:02.459326    2404 round_trippers.go:580]     Audit-Id: ae3d41d2-4301-45b5-96d0-f209fe01566c
	I0318 13:10:02.459326    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:02.459692    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1834","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0318 13:10:02.954675    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:02.954675    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:02.954675    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:02.954675    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:02.958292    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:10:02.958292    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:02.958862    2404 round_trippers.go:580]     Audit-Id: 1f7769ac-4389-4f2d-a428-896ff93c16ba
	I0318 13:10:02.958862    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:02.958862    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:02.958862    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:02.958862    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:02.958862    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:02 GMT
	I0318 13:10:02.959027    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1834","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0318 13:10:02.959607    2404 node_ready.go:53] node "multinode-894400" has status "Ready":"False"
	I0318 13:10:03.459007    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:03.459007    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:03.459007    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:03.459007    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:03.462612    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:10:03.462931    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:03.462931    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:03.462931    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:03 GMT
	I0318 13:10:03.462931    2404 round_trippers.go:580]     Audit-Id: 6423ab1d-6abb-4838-b948-a7cab1155a51
	I0318 13:10:03.462931    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:03.462931    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:03.462931    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:03.463390    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1834","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0318 13:10:03.960075    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:03.960075    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:03.960075    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:03.960075    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:03.964274    2404 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:10:03.965290    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:03.965290    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:03 GMT
	I0318 13:10:03.965369    2404 round_trippers.go:580]     Audit-Id: 290ec6cf-bbe7-486f-8470-c2a3cf3fcfcb
	I0318 13:10:03.965369    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:03.965441    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:03.965441    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:03.965441    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:03.965726    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1834","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0318 13:10:04.446219    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:04.446219    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:04.446219    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:04.446219    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:04.451010    2404 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:10:04.451460    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:04.451460    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:04.451460    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:04.451460    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:04.451460    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:04 GMT
	I0318 13:10:04.451460    2404 round_trippers.go:580]     Audit-Id: 99b92c92-ea08-45f3-8925-36013b9cc552
	I0318 13:10:04.451460    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:04.451774    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1834","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0318 13:10:04.957914    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:04.957914    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:04.957914    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:04.957914    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:04.961502    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:10:04.961502    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:04.961502    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:04 GMT
	I0318 13:10:04.962357    2404 round_trippers.go:580]     Audit-Id: 3a6828d5-e4c8-4395-b591-40569713e8a4
	I0318 13:10:04.962357    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:04.962357    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:04.962357    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:04.962357    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:04.962585    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1834","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0318 13:10:04.963071    2404 node_ready.go:53] node "multinode-894400" has status "Ready":"False"
	I0318 13:10:05.445608    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:05.445670    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:05.445670    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:05.445670    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:05.450095    2404 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:10:05.450095    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:05.450095    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:05.450095    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:05.450095    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:05 GMT
	I0318 13:10:05.450095    2404 round_trippers.go:580]     Audit-Id: 4d47e2dc-b94b-4d5e-aa01-c98ac95300ef
	I0318 13:10:05.450095    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:05.450095    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:05.450095    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1834","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0318 13:10:05.960627    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:05.960627    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:05.960627    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:05.960627    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:05.964233    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:10:05.965266    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:05.965266    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:05.965266    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:05.965266    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:05.965329    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:05.965329    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:05 GMT
	I0318 13:10:05.965329    2404 round_trippers.go:580]     Audit-Id: d03f6f45-a2fe-41fb-bf32-078eae00249b
	I0318 13:10:05.965434    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1834","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0318 13:10:06.446471    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:06.446544    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:06.446544    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:06.446544    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:06.450365    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:10:06.450541    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:06.450541    2404 round_trippers.go:580]     Audit-Id: 40a765e0-b0f0-4802-b45d-7fa09cbc446d
	I0318 13:10:06.450541    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:06.450541    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:06.450541    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:06.450541    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:06.450541    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:06 GMT
	I0318 13:10:06.450673    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1834","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0318 13:10:06.958263    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:06.958502    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:06.958502    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:06.958502    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:06.963804    2404 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:10:06.963804    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:06.963804    2404 round_trippers.go:580]     Audit-Id: 3feca258-db3b-4c16-9451-f2d1f6c409e8
	I0318 13:10:06.963804    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:06.963804    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:06.963804    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:06.963804    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:06.963804    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:06 GMT
	I0318 13:10:06.964033    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1834","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0318 13:10:06.964536    2404 node_ready.go:53] node "multinode-894400" has status "Ready":"False"
	I0318 13:10:07.445803    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:07.446087    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:07.446087    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:07.446087    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:07.450938    2404 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:10:07.451180    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:07.451180    2404 round_trippers.go:580]     Audit-Id: 77b0a11a-9223-4a51-aaf2-e165d60ddbb6
	I0318 13:10:07.451180    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:07.451180    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:07.451180    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:07.451180    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:07.451180    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:07 GMT
	I0318 13:10:07.451536    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1834","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0318 13:10:07.956760    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:07.957105    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:07.957105    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:07.957105    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:07.960512    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:10:07.960512    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:07.960512    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:07.961201    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:07.961201    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:07 GMT
	I0318 13:10:07.961201    2404 round_trippers.go:580]     Audit-Id: 184512e5-2787-4e74-b8e1-562d4e13b3c1
	I0318 13:10:07.961201    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:07.961201    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:07.961412    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1834","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0318 13:10:08.456318    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:08.456387    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:08.456387    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:08.456387    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:08.462687    2404 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0318 13:10:08.462687    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:08.462687    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:08.462687    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:08 GMT
	I0318 13:10:08.462687    2404 round_trippers.go:580]     Audit-Id: b10193c9-d0e8-4874-b76b-7cad7c36fee4
	I0318 13:10:08.462687    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:08.462687    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:08.462687    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:08.462687    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1834","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0318 13:10:08.954573    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:08.954573    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:08.954573    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:08.954573    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:08.958187    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:10:08.958638    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:08.958695    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:08.958695    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:08.958695    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:08 GMT
	I0318 13:10:08.958695    2404 round_trippers.go:580]     Audit-Id: 92164a09-a7b1-4ab6-9357-d3c935538b7d
	I0318 13:10:08.958695    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:08.958695    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:08.958695    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1834","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0318 13:10:09.456393    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:09.456393    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:09.456393    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:09.456393    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:09.460106    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:10:09.460106    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:09.460106    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:09.460106    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:09.460106    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:09.460106    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:09.460106    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:09 GMT
	I0318 13:10:09.460106    2404 round_trippers.go:580]     Audit-Id: 0960e2a4-039b-4ac5-ba19-8555f0a5d7e2
	I0318 13:10:09.461470    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1834","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0318 13:10:09.462149    2404 node_ready.go:53] node "multinode-894400" has status "Ready":"False"
	I0318 13:10:09.958808    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:09.958839    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:09.958839    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:09.958839    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:09.963469    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:10:09.963469    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:09.963469    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:09.963469    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:09.963469    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:09.963469    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:09 GMT
	I0318 13:10:09.963469    2404 round_trippers.go:580]     Audit-Id: 8f10548e-9792-4cac-a79c-d5e57007488f
	I0318 13:10:09.963469    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:09.963578    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1834","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0318 13:10:10.460903    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:10.461000    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:10.461141    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:10.461141    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:10.466097    2404 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:10:10.466432    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:10.466432    2404 round_trippers.go:580]     Audit-Id: b8845dea-90c1-4cf6-ae98-592c2e340500
	I0318 13:10:10.466432    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:10.466432    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:10.466432    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:10.466432    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:10.466432    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:10 GMT
	I0318 13:10:10.466966    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1834","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0318 13:10:10.960734    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:10.960791    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:10.960849    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:10.960849    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:10.965013    2404 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:10:10.965071    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:10.965071    2404 round_trippers.go:580]     Audit-Id: a9718e1b-6d11-46eb-881d-1499b7e37c81
	I0318 13:10:10.965071    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:10.965071    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:10.965071    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:10.965071    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:10.965071    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:10 GMT
	I0318 13:10:10.965339    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1834","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0318 13:10:11.447689    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:11.447689    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:11.447689    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:11.447689    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:11.451079    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:10:11.451079    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:11.451466    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:11.451466    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:11.451466    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:11.451466    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:11.451466    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:11 GMT
	I0318 13:10:11.451466    2404 round_trippers.go:580]     Audit-Id: bb76a560-39dc-4620-9d19-ff6d2cf30490
	I0318 13:10:11.451947    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1834","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0318 13:10:11.949552    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:11.949623    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:11.949623    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:11.949623    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:11.954032    2404 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:10:11.954032    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:11.954032    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:11.954032    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:11.954451    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:11.954451    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:11.954451    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:11 GMT
	I0318 13:10:11.954451    2404 round_trippers.go:580]     Audit-Id: a99b3117-253c-4208-b02f-07a6080d9472
	I0318 13:10:11.954634    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1834","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0318 13:10:11.955190    2404 node_ready.go:53] node "multinode-894400" has status "Ready":"False"
	I0318 13:10:12.447838    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:12.448041    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:12.448041    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:12.448041    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:12.451469    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:10:12.451687    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:12.451687    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:12.451687    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:12.451687    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:12.451687    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:12.451687    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:12 GMT
	I0318 13:10:12.451687    2404 round_trippers.go:580]     Audit-Id: 9074956d-e620-4e9e-b57b-2ead3b03d5c5
	I0318 13:10:12.452290    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1834","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0318 13:10:12.960141    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:12.960141    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:12.960141    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:12.960141    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:12.963744    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:10:12.963744    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:12.963744    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:12.963744    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:12.963744    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:12.963744    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:12 GMT
	I0318 13:10:12.963744    2404 round_trippers.go:580]     Audit-Id: b76c0bce-634f-4bd5-ae41-92776d28b024
	I0318 13:10:12.963744    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:12.964718    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1834","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0318 13:10:13.452540    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:13.452610    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:13.452610    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:13.452610    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:13.459398    2404 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0318 13:10:13.460073    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:13.460073    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:13 GMT
	I0318 13:10:13.460073    2404 round_trippers.go:580]     Audit-Id: 98780845-e469-42ec-8fec-3f27f54241b5
	I0318 13:10:13.460073    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:13.460137    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:13.460156    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:13.460156    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:13.460297    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1834","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0318 13:10:13.954198    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:13.954290    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:13.954290    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:13.954290    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:13.957700    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:10:13.958005    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:13.958005    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:13.958005    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:13.958005    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:13.958005    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:13.958005    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:13 GMT
	I0318 13:10:13.958005    2404 round_trippers.go:580]     Audit-Id: a3cd9531-543e-4677-8d64-c86ef137b2d9
	I0318 13:10:13.958294    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1834","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0318 13:10:13.958828    2404 node_ready.go:53] node "multinode-894400" has status "Ready":"False"
	I0318 13:10:14.451568    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:14.451568    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:14.451568    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:14.451718    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:14.456161    2404 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:10:14.456161    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:14.456161    2404 round_trippers.go:580]     Audit-Id: d1e2dba7-0656-4172-8dc9-b0479a11c7da
	I0318 13:10:14.456161    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:14.456161    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:14.456161    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:14.457136    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:14.457136    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:14 GMT
	I0318 13:10:14.457428    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1834","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0318 13:10:14.949706    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:14.949940    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:14.949940    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:14.949940    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:14.953457    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:10:14.953999    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:14.953999    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:14 GMT
	I0318 13:10:14.953999    2404 round_trippers.go:580]     Audit-Id: c41c73c9-eea4-48db-a937-7acce3f658c8
	I0318 13:10:14.953999    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:14.953999    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:14.953999    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:14.953999    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:14.954225    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1834","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0318 13:10:15.451441    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:15.451441    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:15.451441    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:15.451441    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:15.454753    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:10:15.455453    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:15.455453    2404 round_trippers.go:580]     Audit-Id: a319ed4d-3750-47b9-85c0-73c0ef5bc931
	I0318 13:10:15.455453    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:15.455453    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:15.455453    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:15.455453    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:15.455453    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:15 GMT
	I0318 13:10:15.455715    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1834","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0318 13:10:15.954609    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:15.954662    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:15.954662    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:15.954662    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:15.959070    2404 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:10:15.959070    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:15.959070    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:15.959070    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:15 GMT
	I0318 13:10:15.959070    2404 round_trippers.go:580]     Audit-Id: 9b6007c9-c216-4f8c-978b-7773fda4d5ad
	I0318 13:10:15.959070    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:15.959070    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:15.959070    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:15.959070    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1834","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0318 13:10:15.959791    2404 node_ready.go:53] node "multinode-894400" has status "Ready":"False"
	I0318 13:10:16.457469    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:16.457469    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:16.457469    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:16.457469    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:16.461061    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:10:16.461061    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:16.461061    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:16.461061    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:16.461061    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:16 GMT
	I0318 13:10:16.461061    2404 round_trippers.go:580]     Audit-Id: 51beb58c-1889-4dc7-93ab-699f206807fd
	I0318 13:10:16.461061    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:16.461061    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:16.461835    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1834","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0318 13:10:16.957913    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:16.957913    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:16.957913    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:16.957913    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:16.962673    2404 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:10:16.962796    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:16.962796    2404 round_trippers.go:580]     Audit-Id: 44ee5e70-c27a-4e8d-9209-f06fd814c62c
	I0318 13:10:16.962796    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:16.962796    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:16.962796    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:16.962796    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:16.962796    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:16 GMT
	I0318 13:10:16.963103    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1834","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0318 13:10:17.459123    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:17.459402    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:17.459402    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:17.459402    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:17.463671    2404 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:10:17.464365    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:17.464365    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:17.464365    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:17.464365    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:17.464365    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:17 GMT
	I0318 13:10:17.464365    2404 round_trippers.go:580]     Audit-Id: a74bbc73-0d69-4b6e-bf83-15fae336077d
	I0318 13:10:17.464365    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:17.464948    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1834","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0318 13:10:17.957734    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:17.957983    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:17.958084    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:17.958084    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:17.961922    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:10:17.961922    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:17.962932    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:17.962932    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:17 GMT
	I0318 13:10:17.962932    2404 round_trippers.go:580]     Audit-Id: b73a7424-5407-4bfb-861f-466588ec9ac9
	I0318 13:10:17.962932    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:17.962932    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:17.962932    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:17.963050    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1834","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0318 13:10:17.963613    2404 node_ready.go:53] node "multinode-894400" has status "Ready":"False"
	I0318 13:10:18.452683    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:18.452784    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:18.452784    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:18.452784    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:18.456739    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:10:18.456739    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:18.456980    2404 round_trippers.go:580]     Audit-Id: fe0fa0ed-d33f-42ca-a404-f077659b24f9
	I0318 13:10:18.456980    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:18.456980    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:18.456980    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:18.456980    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:18.456980    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:18 GMT
	I0318 13:10:18.457175    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1834","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0318 13:10:18.949639    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:18.949706    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:18.949706    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:18.949706    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:18.952302    2404 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 13:10:18.953210    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:18.953210    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:18.953210    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:18.953210    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:18.953210    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:18 GMT
	I0318 13:10:18.953210    2404 round_trippers.go:580]     Audit-Id: 0f430138-4404-4c3a-90fc-f651386ca56f
	I0318 13:10:18.953210    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:18.953457    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1834","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0318 13:10:19.448788    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:19.448788    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:19.448788    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:19.448788    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:19.454859    2404 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0318 13:10:19.454859    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:19.454859    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:19.454859    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:19.454859    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:19.454859    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:19.454859    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:19 GMT
	I0318 13:10:19.454859    2404 round_trippers.go:580]     Audit-Id: df462a2e-63bc-4b37-9796-3e53f4b3716e
	I0318 13:10:19.455556    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1834","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0318 13:10:19.947973    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:19.947973    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:19.947973    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:19.947973    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:19.951612    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:10:19.951612    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:19.951612    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:19.951612    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:19.951612    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:19.951612    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:19 GMT
	I0318 13:10:19.951612    2404 round_trippers.go:580]     Audit-Id: ecd897cd-7eba-4e8d-8cc3-b3dd8125d85d
	I0318 13:10:19.951612    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:19.954525    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1834","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0318 13:10:20.449467    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:20.449576    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:20.449576    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:20.449576    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:20.454171    2404 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:10:20.454171    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:20.454171    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:20.454171    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:20.454171    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:20.454171    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:20 GMT
	I0318 13:10:20.454171    2404 round_trippers.go:580]     Audit-Id: 4b1a9a67-6921-41ef-a59d-2532948432a0
	I0318 13:10:20.454171    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:20.454171    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1834","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0318 13:10:20.455204    2404 node_ready.go:53] node "multinode-894400" has status "Ready":"False"
	I0318 13:10:20.953929    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:20.953929    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:20.953987    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:20.953987    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:20.957506    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:10:20.957506    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:20.957506    2404 round_trippers.go:580]     Audit-Id: cb76fe58-aa52-4af5-9e75-9a0b96392e5a
	I0318 13:10:20.957506    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:20.957506    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:20.957506    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:20.957506    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:20.957506    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:20 GMT
	I0318 13:10:20.957506    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1834","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0318 13:10:21.457349    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:21.457349    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:21.457349    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:21.457349    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:21.461638    2404 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:10:21.461933    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:21.461933    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:21.461933    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:21 GMT
	I0318 13:10:21.461933    2404 round_trippers.go:580]     Audit-Id: 181776df-72b6-490e-93e2-1f42fa4b3129
	I0318 13:10:21.461933    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:21.461933    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:21.461933    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:21.461933    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1834","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0318 13:10:21.946529    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:21.946624    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:21.946624    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:21.946624    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:21.951796    2404 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 13:10:21.951796    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:21.951796    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:21.951796    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:21 GMT
	I0318 13:10:21.951796    2404 round_trippers.go:580]     Audit-Id: eda1eabb-df71-4de2-91c1-ebdcb0b290e1
	I0318 13:10:21.951796    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:21.951796    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:21.951796    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:21.951971    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1834","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0318 13:10:22.447221    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:22.447221    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:22.447221    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:22.447221    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:22.452096    2404 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:10:22.452746    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:22.452746    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:22 GMT
	I0318 13:10:22.452746    2404 round_trippers.go:580]     Audit-Id: 1ef1cce8-ac9c-457a-a427-89c0439eb78c
	I0318 13:10:22.452746    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:22.452746    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:22.452746    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:22.452746    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:22.453145    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1834","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0318 13:10:22.949832    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:22.949832    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:22.949832    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:22.949921    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:22.953903    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:10:22.953903    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:22.953903    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:22.954010    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:22.954010    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:22.954010    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:22.954010    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:22 GMT
	I0318 13:10:22.954010    2404 round_trippers.go:580]     Audit-Id: 4e6ce44a-6058-4f9b-975b-db0d0c68119e
	I0318 13:10:22.954162    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1834","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0318 13:10:22.954818    2404 node_ready.go:53] node "multinode-894400" has status "Ready":"False"
	I0318 13:10:23.455394    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:23.455394    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:23.455394    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:23.455394    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:23.458392    2404 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 13:10:23.458392    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:23.458392    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:23 GMT
	I0318 13:10:23.458392    2404 round_trippers.go:580]     Audit-Id: 52393d7a-a7be-4560-84e8-824f4c00f7fc
	I0318 13:10:23.458392    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:23.459414    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:23.459414    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:23.459463    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:23.459615    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1834","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0318 13:10:23.955922    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:23.955998    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:23.955998    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:23.955998    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:23.959403    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:10:23.959474    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:23.959474    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:23.959474    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:23.959535    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:23 GMT
	I0318 13:10:23.959535    2404 round_trippers.go:580]     Audit-Id: ce4ed3a1-1a8e-4cf0-8f31-0a444c59da8b
	I0318 13:10:23.959535    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:23.959535    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:23.959803    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1876","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5358 chars]
	I0318 13:10:23.960950    2404 node_ready.go:49] node "multinode-894400" has status "Ready":"True"
	I0318 13:10:23.960950    2404 node_ready.go:38] duration metric: took 30.0157853s for node "multinode-894400" to be "Ready" ...
	I0318 13:10:23.961036    2404 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 13:10:23.961205    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/namespaces/kube-system/pods
	I0318 13:10:23.961229    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:23.961229    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:23.961229    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:23.966538    2404 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:10:23.966538    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:23.966538    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:23.966538    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:23.966538    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:23.966538    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:23.966538    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:23 GMT
	I0318 13:10:23.966538    2404 round_trippers.go:580]     Audit-Id: 76e194c8-49d5-48a1-9ad2-d3b7df829c1c
	I0318 13:10:23.967898    2404 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1876"},"items":[{"metadata":{"name":"coredns-5dd5756b68-456tm","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"1a018c55-846b-4dc2-992c-dc8fd82a6c67","resourceVersion":"1770","creationTimestamp":"2024-03-18T12:47:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"24e9a67e-3fda-47e7-8dcb-794b1920d0f1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:47:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"24e9a67e-3fda-47e7-8dcb-794b1920d0f1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 83068 chars]
	I0318 13:10:23.971890    2404 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-456tm" in "kube-system" namespace to be "Ready" ...
	I0318 13:10:23.971987    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-456tm
	I0318 13:10:23.972093    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:23.972093    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:23.972161    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:23.976107    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:10:23.976107    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:23.976565    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:23 GMT
	I0318 13:10:23.976631    2404 round_trippers.go:580]     Audit-Id: 01762c1a-0ce6-4c4b-82bf-3c1a15d29800
	I0318 13:10:23.976631    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:23.976631    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:23.976685    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:23.976685    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:23.976898    2404 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-456tm","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"1a018c55-846b-4dc2-992c-dc8fd82a6c67","resourceVersion":"1770","creationTimestamp":"2024-03-18T12:47:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"24e9a67e-3fda-47e7-8dcb-794b1920d0f1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:47:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"24e9a67e-3fda-47e7-8dcb-794b1920d0f1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 13:10:23.977232    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:23.977232    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:23.977232    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:23.977232    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:23.980690    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:10:23.980910    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:23.981014    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:23.981014    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:23.981047    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:23.981047    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:23.981047    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:23 GMT
	I0318 13:10:23.981089    2404 round_trippers.go:580]     Audit-Id: f7dbb1dc-653f-4742-b559-24e9683203e0
	I0318 13:10:23.981290    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1876","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5358 chars]
	I0318 13:10:24.486217    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-456tm
	I0318 13:10:24.486322    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:24.486322    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:24.486322    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:24.489657    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:10:24.489657    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:24.489964    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:24 GMT
	I0318 13:10:24.489964    2404 round_trippers.go:580]     Audit-Id: e55648ea-af22-4aa1-a05f-8e23e423879e
	I0318 13:10:24.489964    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:24.489964    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:24.489964    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:24.489964    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:24.490442    2404 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-456tm","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"1a018c55-846b-4dc2-992c-dc8fd82a6c67","resourceVersion":"1770","creationTimestamp":"2024-03-18T12:47:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"24e9a67e-3fda-47e7-8dcb-794b1920d0f1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:47:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"24e9a67e-3fda-47e7-8dcb-794b1920d0f1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 13:10:24.491255    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:24.491322    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:24.491322    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:24.491322    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:24.494170    2404 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 13:10:24.494170    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:24.494170    2404 round_trippers.go:580]     Audit-Id: 41a6738b-6689-4c42-8489-41edee4a73e2
	I0318 13:10:24.494170    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:24.494170    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:24.494831    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:24.494831    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:24.494831    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:24 GMT
	I0318 13:10:24.494902    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1876","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5358 chars]
	I0318 13:10:24.986265    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-456tm
	I0318 13:10:24.986265    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:24.986265    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:24.986366    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:24.990679    2404 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:10:24.990679    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:24.990679    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:24.990679    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:24.990679    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:24.990679    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:24.990679    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:24 GMT
	I0318 13:10:24.990679    2404 round_trippers.go:580]     Audit-Id: 6ab673c0-267f-4ae5-94b1-781093f390ca
	I0318 13:10:24.990679    2404 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-456tm","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"1a018c55-846b-4dc2-992c-dc8fd82a6c67","resourceVersion":"1770","creationTimestamp":"2024-03-18T12:47:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"24e9a67e-3fda-47e7-8dcb-794b1920d0f1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:47:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"24e9a67e-3fda-47e7-8dcb-794b1920d0f1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 13:10:24.992005    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:24.992005    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:24.992138    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:24.992138    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:24.994917    2404 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 13:10:24.995512    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:24.995512    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:24 GMT
	I0318 13:10:24.995512    2404 round_trippers.go:580]     Audit-Id: a07a0c26-0872-4870-8af8-2ff57e789f5c
	I0318 13:10:24.995512    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:24.995512    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:24.995512    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:24.995512    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:24.995735    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1876","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5358 chars]
	I0318 13:10:25.485437    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-456tm
	I0318 13:10:25.485437    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:25.485437    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:25.485437    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:25.490035    2404 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:10:25.490035    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:25.490035    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:25.490035    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:25 GMT
	I0318 13:10:25.491072    2404 round_trippers.go:580]     Audit-Id: 40bb2202-cbbd-403f-9d0e-3e0d7d51787c
	I0318 13:10:25.491072    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:25.491072    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:25.491072    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:25.491312    2404 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-456tm","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"1a018c55-846b-4dc2-992c-dc8fd82a6c67","resourceVersion":"1770","creationTimestamp":"2024-03-18T12:47:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"24e9a67e-3fda-47e7-8dcb-794b1920d0f1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:47:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"24e9a67e-3fda-47e7-8dcb-794b1920d0f1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 13:10:25.491991    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:25.491991    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:25.491991    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:25.491991    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:25.495333    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:10:25.495333    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:25.495333    2404 round_trippers.go:580]     Audit-Id: 84eeb348-5eb7-4848-b68d-aba04f17caeb
	I0318 13:10:25.495333    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:25.495532    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:25.495532    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:25.495532    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:25.495532    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:25 GMT
	I0318 13:10:25.495692    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1876","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5358 chars]
	I0318 13:10:25.983050    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-456tm
	I0318 13:10:25.983050    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:25.983050    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:25.983050    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:25.986650    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:10:25.987571    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:25.987571    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:25 GMT
	I0318 13:10:25.987571    2404 round_trippers.go:580]     Audit-Id: e9e35350-dae8-47e8-bd19-0f6c7143717e
	I0318 13:10:25.987571    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:25.987571    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:25.987571    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:25.987571    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:25.987835    2404 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-456tm","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"1a018c55-846b-4dc2-992c-dc8fd82a6c67","resourceVersion":"1770","creationTimestamp":"2024-03-18T12:47:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"24e9a67e-3fda-47e7-8dcb-794b1920d0f1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:47:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"24e9a67e-3fda-47e7-8dcb-794b1920d0f1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 13:10:25.988470    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:25.988470    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:25.988470    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:25.988470    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:25.992111    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:10:25.992551    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:25.992551    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:25.992551    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:25 GMT
	I0318 13:10:25.992551    2404 round_trippers.go:580]     Audit-Id: 9d87041a-612e-4137-9d70-5b7114f82886
	I0318 13:10:25.992551    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:25.992551    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:25.992551    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:25.992551    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1876","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5358 chars]
	I0318 13:10:25.993233    2404 pod_ready.go:102] pod "coredns-5dd5756b68-456tm" in "kube-system" namespace has status "Ready":"False"
	I0318 13:10:26.484566    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-456tm
	I0318 13:10:26.484642    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:26.484642    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:26.484642    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:26.488886    2404 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:10:26.488886    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:26.488886    2404 round_trippers.go:580]     Audit-Id: dc844628-f946-47d0-8da1-6fde0e5ba0a2
	I0318 13:10:26.489250    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:26.489250    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:26.489250    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:26.489250    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:26.489250    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:26 GMT
	I0318 13:10:26.489382    2404 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-456tm","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"1a018c55-846b-4dc2-992c-dc8fd82a6c67","resourceVersion":"1770","creationTimestamp":"2024-03-18T12:47:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"24e9a67e-3fda-47e7-8dcb-794b1920d0f1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:47:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"24e9a67e-3fda-47e7-8dcb-794b1920d0f1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 13:10:26.490218    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:26.490218    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:26.490218    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:26.490328    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:26.494148    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:10:26.494220    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:26.494220    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:26.494220    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:26.494290    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:26.494290    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:26 GMT
	I0318 13:10:26.494290    2404 round_trippers.go:580]     Audit-Id: b315bac9-1feb-4a08-8c56-9f19f331d953
	I0318 13:10:26.494290    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:26.494520    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1876","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5358 chars]
	I0318 13:10:26.984324    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-456tm
	I0318 13:10:26.984378    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:26.984468    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:26.984468    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:26.988256    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:10:26.988491    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:26.988587    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:26.988587    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:26.988587    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:26 GMT
	I0318 13:10:26.988587    2404 round_trippers.go:580]     Audit-Id: b1658a5e-d989-4bd6-b840-c172db6d0e35
	I0318 13:10:26.988587    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:26.988587    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:26.988862    2404 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-456tm","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"1a018c55-846b-4dc2-992c-dc8fd82a6c67","resourceVersion":"1770","creationTimestamp":"2024-03-18T12:47:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"24e9a67e-3fda-47e7-8dcb-794b1920d0f1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:47:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"24e9a67e-3fda-47e7-8dcb-794b1920d0f1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 13:10:26.989619    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:26.989679    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:26.989679    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:26.989679    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:26.991911    2404 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 13:10:26.992849    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:26.992849    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:26.992849    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:26.992849    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:26.992933    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:26.992950    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:26 GMT
	I0318 13:10:26.992950    2404 round_trippers.go:580]     Audit-Id: 7b42ef74-b5ef-45d9-b4eb-c984c3044972
	I0318 13:10:26.993085    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1877","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 13:10:27.486095    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-456tm
	I0318 13:10:27.486095    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:27.486181    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:27.486181    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:27.492716    2404 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0318 13:10:27.492716    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:27.492716    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:27.492716    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:27.492716    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:27 GMT
	I0318 13:10:27.492716    2404 round_trippers.go:580]     Audit-Id: 36efddde-f072-46ce-91c2-b09c372854e1
	I0318 13:10:27.492716    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:27.492716    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:27.492716    2404 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-456tm","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"1a018c55-846b-4dc2-992c-dc8fd82a6c67","resourceVersion":"1770","creationTimestamp":"2024-03-18T12:47:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"24e9a67e-3fda-47e7-8dcb-794b1920d0f1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:47:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"24e9a67e-3fda-47e7-8dcb-794b1920d0f1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 13:10:27.493503    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:27.493503    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:27.494052    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:27.494052    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:27.497170    2404 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 13:10:27.497170    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:27.497170    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:27.497170    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:27 GMT
	I0318 13:10:27.497170    2404 round_trippers.go:580]     Audit-Id: 955ea648-8889-43d4-a874-6e63af841a31
	I0318 13:10:27.497170    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:27.497170    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:27.497170    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:27.497170    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1877","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 13:10:27.986450    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-456tm
	I0318 13:10:27.986450    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:27.986450    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:27.986450    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:27.991061    2404 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:10:27.991061    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:27.991061    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:27.991361    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:27.991361    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:27 GMT
	I0318 13:10:27.991361    2404 round_trippers.go:580]     Audit-Id: b13fa394-33f2-40a3-8950-9926f7bc8143
	I0318 13:10:27.991361    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:27.991361    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:27.991636    2404 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-456tm","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"1a018c55-846b-4dc2-992c-dc8fd82a6c67","resourceVersion":"1770","creationTimestamp":"2024-03-18T12:47:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"24e9a67e-3fda-47e7-8dcb-794b1920d0f1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:47:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"24e9a67e-3fda-47e7-8dcb-794b1920d0f1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 13:10:27.992275    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:27.992275    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:27.992275    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:27.992275    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:27.996144    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:10:27.996144    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:27.996144    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:27.996144    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:27.996144    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:27 GMT
	I0318 13:10:27.996144    2404 round_trippers.go:580]     Audit-Id: 30d39a30-ebbc-47ae-80ae-362552ea7136
	I0318 13:10:27.996144    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:27.996144    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:27.996737    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1877","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 13:10:27.997551    2404 pod_ready.go:102] pod "coredns-5dd5756b68-456tm" in "kube-system" namespace has status "Ready":"False"
	I0318 13:10:28.485472    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-456tm
	I0318 13:10:28.485549    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:28.485549    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:28.485549    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:28.488936    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:10:28.488936    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:28.488936    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:28.488936    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:28.488936    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:28 GMT
	I0318 13:10:28.488936    2404 round_trippers.go:580]     Audit-Id: 9b753aa9-7829-4098-b7ba-fce041c78ec0
	I0318 13:10:28.489259    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:28.489259    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:28.489462    2404 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-456tm","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"1a018c55-846b-4dc2-992c-dc8fd82a6c67","resourceVersion":"1770","creationTimestamp":"2024-03-18T12:47:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"24e9a67e-3fda-47e7-8dcb-794b1920d0f1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:47:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"24e9a67e-3fda-47e7-8dcb-794b1920d0f1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 13:10:28.489700    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:28.490245    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:28.490245    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:28.490245    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:28.492335    2404 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 13:10:28.492335    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:28.493360    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:28.493360    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:28.493360    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:28.493360    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:28 GMT
	I0318 13:10:28.493360    2404 round_trippers.go:580]     Audit-Id: cb7c4b4c-bcf4-4634-9f73-f7d68a6445bc
	I0318 13:10:28.493360    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:28.493631    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1877","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 13:10:28.986111    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-456tm
	I0318 13:10:28.986222    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:28.986222    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:28.986222    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:28.991126    2404 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:10:28.991649    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:28.991702    2404 round_trippers.go:580]     Audit-Id: e00fea20-0f98-473b-ab76-2d172d13ded9
	I0318 13:10:28.991702    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:28.991702    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:28.991702    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:28.991702    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:28.991702    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:28 GMT
	I0318 13:10:28.991702    2404 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-456tm","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"1a018c55-846b-4dc2-992c-dc8fd82a6c67","resourceVersion":"1770","creationTimestamp":"2024-03-18T12:47:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"24e9a67e-3fda-47e7-8dcb-794b1920d0f1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:47:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"24e9a67e-3fda-47e7-8dcb-794b1920d0f1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 13:10:28.992831    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:28.992916    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:28.992916    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:28.992916    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:28.995279    2404 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 13:10:28.996053    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:28.996053    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:28 GMT
	I0318 13:10:28.996053    2404 round_trippers.go:580]     Audit-Id: df6faf8d-042d-48a7-bc75-ab4d43f3545f
	I0318 13:10:28.996053    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:28.996053    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:28.996053    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:28.996136    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:28.996418    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1877","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 13:10:29.481925    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-456tm
	I0318 13:10:29.482144    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:29.482144    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:29.482144    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:29.485740    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:10:29.485740    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:29.485988    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:29 GMT
	I0318 13:10:29.485988    2404 round_trippers.go:580]     Audit-Id: adda2e37-95a3-470e-ba69-2c16041e02e3
	I0318 13:10:29.485988    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:29.485988    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:29.485988    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:29.485988    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:29.486215    2404 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-456tm","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"1a018c55-846b-4dc2-992c-dc8fd82a6c67","resourceVersion":"1770","creationTimestamp":"2024-03-18T12:47:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"24e9a67e-3fda-47e7-8dcb-794b1920d0f1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:47:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"24e9a67e-3fda-47e7-8dcb-794b1920d0f1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 13:10:29.486888    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:29.486991    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:29.486991    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:29.486991    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:29.490146    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:10:29.490146    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:29.490146    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:29.490146    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:29.490146    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:29.490146    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:29.490631    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:29 GMT
	I0318 13:10:29.490631    2404 round_trippers.go:580]     Audit-Id: 818b4998-01b4-498e-81b4-d60c8f66314e
	I0318 13:10:29.490874    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1877","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 13:10:29.980157    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-456tm
	I0318 13:10:29.980233    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:29.980233    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:29.980233    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:29.983639    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:10:29.984629    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:29.984629    2404 round_trippers.go:580]     Audit-Id: e5d48e9c-f06f-4490-9485-95af0a6cd373
	I0318 13:10:29.984629    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:29.984629    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:29.984629    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:29.984629    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:29.984629    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:29 GMT
	I0318 13:10:29.984865    2404 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-456tm","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"1a018c55-846b-4dc2-992c-dc8fd82a6c67","resourceVersion":"1770","creationTimestamp":"2024-03-18T12:47:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"24e9a67e-3fda-47e7-8dcb-794b1920d0f1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:47:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"24e9a67e-3fda-47e7-8dcb-794b1920d0f1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 13:10:29.985686    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:29.985686    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:29.985686    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:29.985686    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:29.988049    2404 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 13:10:29.988825    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:29.989025    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:29.989025    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:29.989025    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:29 GMT
	I0318 13:10:29.989116    2404 round_trippers.go:580]     Audit-Id: 3efc423c-7fd3-4c81-937a-437188194784
	I0318 13:10:29.989116    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:29.989116    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:29.989172    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1877","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 13:10:30.480134    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-456tm
	I0318 13:10:30.480371    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:30.480371    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:30.480371    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:30.484729    2404 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:10:30.484927    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:30.484927    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:30 GMT
	I0318 13:10:30.484927    2404 round_trippers.go:580]     Audit-Id: bf760774-86d0-4465-8188-fbeb61f5f83c
	I0318 13:10:30.484927    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:30.484927    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:30.484927    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:30.484927    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:30.485349    2404 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-456tm","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"1a018c55-846b-4dc2-992c-dc8fd82a6c67","resourceVersion":"1770","creationTimestamp":"2024-03-18T12:47:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"24e9a67e-3fda-47e7-8dcb-794b1920d0f1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:47:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"24e9a67e-3fda-47e7-8dcb-794b1920d0f1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 13:10:30.486129    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:30.486206    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:30.486206    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:30.486206    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:30.489485    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:10:30.489485    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:30.489683    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:30.489683    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:30.489683    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:30.489683    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:30.489683    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:30 GMT
	I0318 13:10:30.489683    2404 round_trippers.go:580]     Audit-Id: e4a46c67-ca0e-450a-a433-573834ae28ef
	I0318 13:10:30.490112    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1877","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 13:10:30.490112    2404 pod_ready.go:102] pod "coredns-5dd5756b68-456tm" in "kube-system" namespace has status "Ready":"False"
	I0318 13:10:30.976291    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-456tm
	I0318 13:10:30.976291    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:30.976291    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:30.976291    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:30.979896    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:10:30.979896    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:30.979896    2404 round_trippers.go:580]     Audit-Id: 022dea84-f15d-4408-b781-0dc856df1c22
	I0318 13:10:30.979896    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:30.979896    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:30.979896    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:30.979896    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:30.980977    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:30 GMT
	I0318 13:10:30.980977    2404 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-456tm","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"1a018c55-846b-4dc2-992c-dc8fd82a6c67","resourceVersion":"1770","creationTimestamp":"2024-03-18T12:47:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"24e9a67e-3fda-47e7-8dcb-794b1920d0f1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:47:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"24e9a67e-3fda-47e7-8dcb-794b1920d0f1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 13:10:30.981945    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:30.982065    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:30.982065    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:30.982065    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:30.985262    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:10:30.985262    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:30.985330    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:30.985330    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:30.985330    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:30.985330    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:30 GMT
	I0318 13:10:30.985330    2404 round_trippers.go:580]     Audit-Id: b7e349b6-a711-4f4d-8ca3-f2ad1a6f98ea
	I0318 13:10:30.985330    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:30.985419    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1877","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 13:10:31.482463    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-456tm
	I0318 13:10:31.482463    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:31.482463    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:31.482784    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:31.488517    2404 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 13:10:31.488517    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:31.488517    2404 round_trippers.go:580]     Audit-Id: b84f4f87-4c2c-4f42-94ad-6189ebd1f342
	I0318 13:10:31.488517    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:31.488517    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:31.488517    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:31.488517    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:31.488517    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:31 GMT
	I0318 13:10:31.489206    2404 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-456tm","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"1a018c55-846b-4dc2-992c-dc8fd82a6c67","resourceVersion":"1770","creationTimestamp":"2024-03-18T12:47:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"24e9a67e-3fda-47e7-8dcb-794b1920d0f1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:47:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"24e9a67e-3fda-47e7-8dcb-794b1920d0f1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 13:10:31.489903    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:31.489966    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:31.489966    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:31.489966    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:31.492581    2404 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 13:10:31.492581    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:31.492581    2404 round_trippers.go:580]     Audit-Id: 66dff697-da3e-4a83-9268-15ccac111ab1
	I0318 13:10:31.492581    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:31.492581    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:31.492581    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:31.492581    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:31.492581    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:31 GMT
	I0318 13:10:31.493265    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1877","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 13:10:31.985340    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-456tm
	I0318 13:10:31.985399    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:31.985399    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:31.985399    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:31.989173    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:10:31.989173    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:31.989173    2404 round_trippers.go:580]     Audit-Id: f0badea2-2196-41c4-8aa9-b270368504ac
	I0318 13:10:31.989173    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:31.989734    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:31.989734    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:31.989734    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:31.989734    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:31 GMT
	I0318 13:10:31.989936    2404 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-456tm","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"1a018c55-846b-4dc2-992c-dc8fd82a6c67","resourceVersion":"1770","creationTimestamp":"2024-03-18T12:47:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"24e9a67e-3fda-47e7-8dcb-794b1920d0f1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:47:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"24e9a67e-3fda-47e7-8dcb-794b1920d0f1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 13:10:31.990607    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:31.990680    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:31.990680    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:31.990680    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:31.993957    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:10:31.993957    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:31.993957    2404 round_trippers.go:580]     Audit-Id: 188b75ac-1f3a-43d6-8f83-7e1062bb8012
	I0318 13:10:31.993957    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:31.993957    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:31.993957    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:31.993957    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:31.993957    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:31 GMT
	I0318 13:10:31.994925    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1877","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 13:10:32.486951    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-456tm
	I0318 13:10:32.487069    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:32.487069    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:32.487069    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:32.491821    2404 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:10:32.491821    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:32.491874    2404 round_trippers.go:580]     Audit-Id: cace9841-57d9-4a9c-92cd-08e35484b6c4
	I0318 13:10:32.491898    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:32.491898    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:32.491898    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:32.491898    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:32.491898    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:32 GMT
	I0318 13:10:32.492128    2404 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-456tm","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"1a018c55-846b-4dc2-992c-dc8fd82a6c67","resourceVersion":"1770","creationTimestamp":"2024-03-18T12:47:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"24e9a67e-3fda-47e7-8dcb-794b1920d0f1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:47:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"24e9a67e-3fda-47e7-8dcb-794b1920d0f1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 13:10:32.492691    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:32.492845    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:32.492845    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:32.492845    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:32.497249    2404 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:10:32.497249    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:32.497249    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:32 GMT
	I0318 13:10:32.497249    2404 round_trippers.go:580]     Audit-Id: 1634843b-e912-4a6e-b912-1718ccb20092
	I0318 13:10:32.497249    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:32.497249    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:32.497249    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:32.497249    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:32.497788    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1877","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 13:10:32.497966    2404 pod_ready.go:102] pod "coredns-5dd5756b68-456tm" in "kube-system" namespace has status "Ready":"False"
	I0318 13:10:32.985710    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-456tm
	I0318 13:10:32.985788    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:32.985860    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:32.985860    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:32.989499    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:10:32.990073    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:32.990073    2404 round_trippers.go:580]     Audit-Id: 62c7f7be-a117-4726-a4d7-c0dbb18903c2
	I0318 13:10:32.990073    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:32.990073    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:32.990073    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:32.990073    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:32.990073    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:32 GMT
	I0318 13:10:32.990154    2404 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-456tm","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"1a018c55-846b-4dc2-992c-dc8fd82a6c67","resourceVersion":"1770","creationTimestamp":"2024-03-18T12:47:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"24e9a67e-3fda-47e7-8dcb-794b1920d0f1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:47:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"24e9a67e-3fda-47e7-8dcb-794b1920d0f1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 13:10:32.991130    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:32.991130    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:32.991130    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:32.991130    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:32.994309    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:10:32.994309    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:32.994697    2404 round_trippers.go:580]     Audit-Id: ab3116cc-818a-4fd0-be71-5f5d3533a649
	I0318 13:10:32.994697    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:32.994697    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:32.994697    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:32.994697    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:32.994697    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:32 GMT
	I0318 13:10:32.995048    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1877","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 13:10:33.472660    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-456tm
	I0318 13:10:33.472735    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:33.472735    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:33.472805    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:33.478531    2404 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 13:10:33.478531    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:33.478531    2404 round_trippers.go:580]     Audit-Id: a2c726e5-1509-4745-9875-edde56fd2629
	I0318 13:10:33.478531    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:33.478531    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:33.478531    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:33.478531    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:33.478531    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:33 GMT
	I0318 13:10:33.478531    2404 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-456tm","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"1a018c55-846b-4dc2-992c-dc8fd82a6c67","resourceVersion":"1770","creationTimestamp":"2024-03-18T12:47:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"24e9a67e-3fda-47e7-8dcb-794b1920d0f1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:47:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"24e9a67e-3fda-47e7-8dcb-794b1920d0f1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 13:10:33.479249    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:33.479249    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:33.479249    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:33.479249    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:33.485713    2404 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0318 13:10:33.485713    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:33.485713    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:33.485713    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:33.485713    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:33.485713    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:33.485713    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:33 GMT
	I0318 13:10:33.485713    2404 round_trippers.go:580]     Audit-Id: 65a5867d-36a7-4b11-9e34-c388e1944bf3
	I0318 13:10:33.486414    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1877","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 13:10:33.974147    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-456tm
	I0318 13:10:33.974147    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:33.974147    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:33.974147    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:33.978424    2404 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:10:33.978424    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:33.978424    2404 round_trippers.go:580]     Audit-Id: c82e36db-de54-4064-9b82-0a6f523528a3
	I0318 13:10:33.978424    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:33.978424    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:33.978424    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:33.978424    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:33.978424    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:33 GMT
	I0318 13:10:33.978697    2404 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-456tm","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"1a018c55-846b-4dc2-992c-dc8fd82a6c67","resourceVersion":"1770","creationTimestamp":"2024-03-18T12:47:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"24e9a67e-3fda-47e7-8dcb-794b1920d0f1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:47:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"24e9a67e-3fda-47e7-8dcb-794b1920d0f1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 13:10:33.979370    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:33.979370    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:33.979370    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:33.979370    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:33.982390    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:10:33.982390    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:33.982390    2404 round_trippers.go:580]     Audit-Id: 9e581598-ba84-4d0d-b258-433897e01a34
	I0318 13:10:33.982390    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:33.982559    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:33.982559    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:33.982559    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:33.982559    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:33 GMT
	I0318 13:10:33.982773    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1877","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 13:10:34.473521    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-456tm
	I0318 13:10:34.473521    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:34.473521    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:34.473521    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:34.477520    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:10:34.477520    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:34.477520    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:34.477520    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:34.477611    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:34.477611    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:34 GMT
	I0318 13:10:34.477611    2404 round_trippers.go:580]     Audit-Id: 6f670d38-b757-4647-ba3b-bfa2b39ff2c6
	I0318 13:10:34.477611    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:34.477661    2404 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-456tm","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"1a018c55-846b-4dc2-992c-dc8fd82a6c67","resourceVersion":"1770","creationTimestamp":"2024-03-18T12:47:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"24e9a67e-3fda-47e7-8dcb-794b1920d0f1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:47:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"24e9a67e-3fda-47e7-8dcb-794b1920d0f1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 13:10:34.478712    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:34.478712    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:34.478712    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:34.478712    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:34.481563    2404 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 13:10:34.481563    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:34.481842    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:34.481842    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:34 GMT
	I0318 13:10:34.481842    2404 round_trippers.go:580]     Audit-Id: 5652c4f4-f3cc-496f-9275-687adb772f28
	I0318 13:10:34.481842    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:34.481842    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:34.481842    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:34.481979    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1877","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 13:10:34.974867    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-456tm
	I0318 13:10:34.974995    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:34.974995    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:34.974995    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:34.980342    2404 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 13:10:34.981437    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:34.981551    2404 round_trippers.go:580]     Audit-Id: 59a88a89-e66a-467b-947b-40492ac89e12
	I0318 13:10:34.981551    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:34.981551    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:34.981551    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:34.981551    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:34.981551    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:34 GMT
	I0318 13:10:34.981769    2404 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-456tm","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"1a018c55-846b-4dc2-992c-dc8fd82a6c67","resourceVersion":"1770","creationTimestamp":"2024-03-18T12:47:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"24e9a67e-3fda-47e7-8dcb-794b1920d0f1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:47:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"24e9a67e-3fda-47e7-8dcb-794b1920d0f1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 13:10:34.982535    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:34.982619    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:34.982619    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:34.982619    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:34.984909    2404 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 13:10:34.984909    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:34.984909    2404 round_trippers.go:580]     Audit-Id: 089b7c01-1014-4b75-bdfc-91bb2ca95461
	I0318 13:10:34.984909    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:34.984909    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:34.984909    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:34.984909    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:34.984909    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:34 GMT
	I0318 13:10:34.986141    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1877","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 13:10:34.986596    2404 pod_ready.go:102] pod "coredns-5dd5756b68-456tm" in "kube-system" namespace has status "Ready":"False"
	I0318 13:10:35.479394    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-456tm
	I0318 13:10:35.479394    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:35.479394    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:35.479394    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:35.483072    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:10:35.483072    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:35.483072    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:35.483072    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:35.483072    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:35.483939    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:35 GMT
	I0318 13:10:35.483939    2404 round_trippers.go:580]     Audit-Id: 388f4493-f322-4f70-921c-e314e4b7e41d
	I0318 13:10:35.483939    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:35.484243    2404 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-456tm","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"1a018c55-846b-4dc2-992c-dc8fd82a6c67","resourceVersion":"1770","creationTimestamp":"2024-03-18T12:47:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"24e9a67e-3fda-47e7-8dcb-794b1920d0f1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:47:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"24e9a67e-3fda-47e7-8dcb-794b1920d0f1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 13:10:35.484560    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:35.484560    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:35.484560    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:35.484560    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:35.488209    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:10:35.488699    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:35.488699    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:35.488761    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:35 GMT
	I0318 13:10:35.488761    2404 round_trippers.go:580]     Audit-Id: 3afb7b9d-d6a4-45a6-9a44-b8b4d95bbccc
	I0318 13:10:35.488761    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:35.488761    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:35.488761    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:35.488761    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1877","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 13:10:35.981183    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-456tm
	I0318 13:10:35.981183    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:35.981183    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:35.981183    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:35.985146    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:10:35.985146    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:35.986181    2404 round_trippers.go:580]     Audit-Id: b48a2525-b880-4539-96ec-fc4f64bbc024
	I0318 13:10:35.986181    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:35.986210    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:35.986210    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:35.986210    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:35.986210    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:35 GMT
	I0318 13:10:35.986369    2404 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-456tm","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"1a018c55-846b-4dc2-992c-dc8fd82a6c67","resourceVersion":"1770","creationTimestamp":"2024-03-18T12:47:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"24e9a67e-3fda-47e7-8dcb-794b1920d0f1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:47:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"24e9a67e-3fda-47e7-8dcb-794b1920d0f1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 13:10:35.987343    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:35.987343    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:35.987451    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:35.987451    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:35.990552    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:10:35.990552    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:35.990552    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:35.990552    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:35.990552    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:35.990552    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:35.990552    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:35 GMT
	I0318 13:10:35.990552    2404 round_trippers.go:580]     Audit-Id: f9e4aa2f-ea72-4cf7-a8da-8010b5b01a22
	I0318 13:10:35.990552    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1877","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 13:10:36.478393    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-456tm
	I0318 13:10:36.478464    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:36.478464    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:36.478464    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:36.482282    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:10:36.482282    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:36.482282    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:36.483023    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:36.483023    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:36.483023    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:36 GMT
	I0318 13:10:36.483023    2404 round_trippers.go:580]     Audit-Id: f731bacd-ce18-4f83-9a42-9991215a912b
	I0318 13:10:36.483023    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:36.483202    2404 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-456tm","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"1a018c55-846b-4dc2-992c-dc8fd82a6c67","resourceVersion":"1770","creationTimestamp":"2024-03-18T12:47:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"24e9a67e-3fda-47e7-8dcb-794b1920d0f1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:47:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"24e9a67e-3fda-47e7-8dcb-794b1920d0f1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 13:10:36.483955    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:36.483955    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:36.483955    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:36.484041    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:36.487021    2404 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 13:10:36.487241    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:36.487241    2404 round_trippers.go:580]     Audit-Id: 21337908-e894-4ae3-a36c-3101d291fc50
	I0318 13:10:36.487241    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:36.487241    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:36.487241    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:36.487241    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:36.487333    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:36 GMT
	I0318 13:10:36.487672    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1877","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 13:10:36.980680    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-456tm
	I0318 13:10:36.980680    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:36.980680    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:36.980680    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:36.985304    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:10:36.985348    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:36.985426    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:36.985426    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:36.985426    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:36.985426    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:36.985426    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:36 GMT
	I0318 13:10:36.985426    2404 round_trippers.go:580]     Audit-Id: 26456e40-c92d-4d3f-81b9-9a2789ea4b89
	I0318 13:10:36.985647    2404 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-456tm","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"1a018c55-846b-4dc2-992c-dc8fd82a6c67","resourceVersion":"1770","creationTimestamp":"2024-03-18T12:47:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"24e9a67e-3fda-47e7-8dcb-794b1920d0f1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:47:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"24e9a67e-3fda-47e7-8dcb-794b1920d0f1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 13:10:36.986465    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:36.986465    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:36.986465    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:36.986556    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:36.990416    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:10:36.991163    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:36.991244    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:36.991282    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:36 GMT
	I0318 13:10:36.991282    2404 round_trippers.go:580]     Audit-Id: 25592187-1131-420e-bf8e-678166d05c76
	I0318 13:10:36.991282    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:36.991282    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:36.991282    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:36.992536    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1877","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 13:10:36.993054    2404 pod_ready.go:102] pod "coredns-5dd5756b68-456tm" in "kube-system" namespace has status "Ready":"False"
	I0318 13:10:37.484360    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-456tm
	I0318 13:10:37.484360    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:37.484360    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:37.484360    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:37.488962    2404 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:10:37.489183    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:37.489183    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:37.489183    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:37.489238    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:37.489238    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:37.489238    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:37 GMT
	I0318 13:10:37.489238    2404 round_trippers.go:580]     Audit-Id: f37d7a9a-b1be-405e-b7f9-d8436bf54c63
	I0318 13:10:37.489779    2404 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-456tm","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"1a018c55-846b-4dc2-992c-dc8fd82a6c67","resourceVersion":"1770","creationTimestamp":"2024-03-18T12:47:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"24e9a67e-3fda-47e7-8dcb-794b1920d0f1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:47:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"24e9a67e-3fda-47e7-8dcb-794b1920d0f1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 13:10:37.490640    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:37.490640    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:37.490640    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:37.490640    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:37.493408    2404 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 13:10:37.493408    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:37.493408    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:37.493408    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:37.493408    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:37.493408    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:37 GMT
	I0318 13:10:37.493408    2404 round_trippers.go:580]     Audit-Id: e92212cc-2569-4cda-a6aa-88559f11adde
	I0318 13:10:37.493408    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:37.494520    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1877","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 13:10:37.981483    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-456tm
	I0318 13:10:37.981483    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:37.981483    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:37.981483    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:37.984967    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:10:37.985723    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:37.985723    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:37 GMT
	I0318 13:10:37.985723    2404 round_trippers.go:580]     Audit-Id: a31be817-ad2c-4165-acb8-efef0f2c3742
	I0318 13:10:37.985723    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:37.985723    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:37.985723    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:37.985723    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:37.986009    2404 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-456tm","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"1a018c55-846b-4dc2-992c-dc8fd82a6c67","resourceVersion":"1770","creationTimestamp":"2024-03-18T12:47:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"24e9a67e-3fda-47e7-8dcb-794b1920d0f1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:47:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"24e9a67e-3fda-47e7-8dcb-794b1920d0f1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 13:10:37.986196    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:37.986196    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:37.986196    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:37.986763    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:37.988971    2404 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 13:10:37.988971    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:37.989814    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:37.989814    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:37.989814    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:37 GMT
	I0318 13:10:37.989814    2404 round_trippers.go:580]     Audit-Id: f78565e7-f9b9-4083-83d2-3ee887d3e0a7
	I0318 13:10:37.989814    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:37.989814    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:37.990095    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1877","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 13:10:38.481378    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-456tm
	I0318 13:10:38.481471    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:38.481471    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:38.481471    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:38.485539    2404 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:10:38.485539    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:38.485539    2404 round_trippers.go:580]     Audit-Id: 010b6dac-358f-4fba-8af3-e9badbabb4e4
	I0318 13:10:38.485539    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:38.485539    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:38.485539    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:38.485539    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:38.485539    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:38 GMT
	I0318 13:10:38.486209    2404 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-456tm","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"1a018c55-846b-4dc2-992c-dc8fd82a6c67","resourceVersion":"1770","creationTimestamp":"2024-03-18T12:47:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"24e9a67e-3fda-47e7-8dcb-794b1920d0f1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:47:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"24e9a67e-3fda-47e7-8dcb-794b1920d0f1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 13:10:38.486947    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:38.487090    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:38.487090    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:38.487090    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:38.490278    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:10:38.490278    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:38.491054    2404 round_trippers.go:580]     Audit-Id: dbab6823-e52d-4362-984b-b728d33af67c
	I0318 13:10:38.491054    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:38.491054    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:38.491054    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:38.491054    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:38.491054    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:38 GMT
	I0318 13:10:38.491054    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1877","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 13:10:38.983129    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-456tm
	I0318 13:10:38.983206    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:38.983276    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:38.983276    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:38.986014    2404 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 13:10:38.986014    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:38.986933    2404 round_trippers.go:580]     Audit-Id: 92a900c9-f20c-4f14-a92f-755c600b2b17
	I0318 13:10:38.987036    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:38.987036    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:38.987036    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:38.987036    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:38.987036    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:38 GMT
	I0318 13:10:38.987215    2404 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-456tm","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"1a018c55-846b-4dc2-992c-dc8fd82a6c67","resourceVersion":"1770","creationTimestamp":"2024-03-18T12:47:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"24e9a67e-3fda-47e7-8dcb-794b1920d0f1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:47:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"24e9a67e-3fda-47e7-8dcb-794b1920d0f1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 13:10:38.987935    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:38.987935    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:38.987935    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:38.988102    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:38.991002    2404 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0318 13:10:38.991069    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:38.991069    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:38.991069    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:38 GMT
	I0318 13:10:38.991069    2404 round_trippers.go:580]     Audit-Id: a1cf8aed-f121-421f-8e73-1f44cfd9fc8e
	I0318 13:10:38.991069    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:38.991069    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:38.991069    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:38.991245    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1877","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 13:10:39.480236    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-456tm
	I0318 13:10:39.480436    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:39.480436    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:39.480436    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:39.485038    2404 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:10:39.485038    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:39.485038    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:39.485149    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:39.485149    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:39.485149    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:39 GMT
	I0318 13:10:39.485149    2404 round_trippers.go:580]     Audit-Id: 436de0b2-d1ab-4033-bff4-2933f5e21ed8
	I0318 13:10:39.485149    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:39.485333    2404 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-456tm","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"1a018c55-846b-4dc2-992c-dc8fd82a6c67","resourceVersion":"1770","creationTimestamp":"2024-03-18T12:47:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"24e9a67e-3fda-47e7-8dcb-794b1920d0f1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:47:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"24e9a67e-3fda-47e7-8dcb-794b1920d0f1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 13:10:39.486166    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:39.486166    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:39.486166    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:39.486166    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:39.489815    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:10:39.490421    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:39.490421    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:39.490421    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:39 GMT
	I0318 13:10:39.490421    2404 round_trippers.go:580]     Audit-Id: c28d4f29-5617-4b95-806f-a8c2b9275398
	I0318 13:10:39.490421    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:39.490421    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:39.490421    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:39.490948    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1877","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 13:10:39.491376    2404 pod_ready.go:102] pod "coredns-5dd5756b68-456tm" in "kube-system" namespace has status "Ready":"False"
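The `pod_ready.go:102` line above is minikube repeatedly polling the coredns pod and inspecting its `Ready` condition, which is still `"False"`. A minimal sketch of that condition check on the pod JSON the API server returns — a hypothetical helper for illustration, not minikube's actual `pod_ready.go` implementation:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// podCondition mirrors the fields of a v1.Pod status condition
// that matter for a readiness check.
type podCondition struct {
	Type   string `json:"type"`
	Status string `json:"status"`
}

// podStatus decodes only the status.conditions slice of a Pod object.
type podStatus struct {
	Status struct {
		Conditions []podCondition `json:"conditions"`
	} `json:"status"`
}

// isPodReady reports whether the raw pod JSON (as returned by
// GET /api/v1/namespaces/<ns>/pods/<name>) carries a Ready
// condition with status "True".
func isPodReady(raw []byte) (bool, error) {
	var p podStatus
	if err := json.Unmarshal(raw, &p); err != nil {
		return false, err
	}
	for _, c := range p.Status.Conditions {
		if c.Type == "Ready" {
			return c.Status == "True", nil
		}
	}
	// No Ready condition yet: treat the pod as not ready.
	return false, nil
}

func main() {
	// A pod still starting up, like coredns-5dd5756b68-456tm in the log.
	notReady := []byte(`{"status":{"conditions":[{"type":"Ready","status":"False"}]}}`)
	ok, _ := isPodReady(notReady)
	fmt.Println(ok)
}
```

A poller like the one in the log would call such a check roughly every 500 ms (the GET timestamps above are ~500 ms apart) until it returns true or the wait times out.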
	I0318 13:10:39.978579    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-456tm
	I0318 13:10:39.978825    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:39.978825    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:39.978825    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:39.981705    2404 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 13:10:39.982739    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:39.982739    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:39.982739    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:39 GMT
	I0318 13:10:39.982739    2404 round_trippers.go:580]     Audit-Id: 721e4391-2318-4fe1-9070-ea07d49524e0
	I0318 13:10:39.982739    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:39.982739    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:39.982850    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:39.983453    2404 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-456tm","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"1a018c55-846b-4dc2-992c-dc8fd82a6c67","resourceVersion":"1770","creationTimestamp":"2024-03-18T12:47:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"24e9a67e-3fda-47e7-8dcb-794b1920d0f1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:47:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"24e9a67e-3fda-47e7-8dcb-794b1920d0f1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 13:10:39.984261    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:39.984331    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:39.984331    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:39.984331    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:39.987609    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:10:39.987609    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:39.987993    2404 round_trippers.go:580]     Audit-Id: 234ae04e-f8e6-434b-af6c-1d0d64779d5a
	I0318 13:10:39.987993    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:39.987993    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:39.987993    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:39.987993    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:39.987993    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:39 GMT
	I0318 13:10:39.988271    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1877","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 13:10:40.478642    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-456tm
	I0318 13:10:40.478642    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:40.478726    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:40.478726    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:40.483119    2404 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:10:40.483119    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:40.483119    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:40.483119    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:40.483474    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:40 GMT
	I0318 13:10:40.483474    2404 round_trippers.go:580]     Audit-Id: dd43f2da-473b-4843-b835-92a7014ce945
	I0318 13:10:40.483474    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:40.483474    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:40.483474    2404 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-456tm","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"1a018c55-846b-4dc2-992c-dc8fd82a6c67","resourceVersion":"1770","creationTimestamp":"2024-03-18T12:47:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"24e9a67e-3fda-47e7-8dcb-794b1920d0f1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:47:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"24e9a67e-3fda-47e7-8dcb-794b1920d0f1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 13:10:40.484224    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:40.484796    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:40.484796    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:40.484796    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:40.489248    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:10:40.489307    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:40.489307    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:40 GMT
	I0318 13:10:40.489307    2404 round_trippers.go:580]     Audit-Id: c03ef6db-de90-4f04-94b0-3070e7b4e209
	I0318 13:10:40.489307    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:40.489307    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:40.489307    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:40.489307    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:40.489307    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1877","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 13:10:40.979483    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-456tm
	I0318 13:10:40.979573    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:40.979573    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:40.979573    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:40.983766    2404 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:10:40.984117    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:40.984117    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:40.984117    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:40.984117    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:40 GMT
	I0318 13:10:40.984117    2404 round_trippers.go:580]     Audit-Id: b802f491-d189-4191-9704-fc5fc255298c
	I0318 13:10:40.984117    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:40.984117    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:40.984117    2404 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-456tm","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"1a018c55-846b-4dc2-992c-dc8fd82a6c67","resourceVersion":"1770","creationTimestamp":"2024-03-18T12:47:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"24e9a67e-3fda-47e7-8dcb-794b1920d0f1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:47:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"24e9a67e-3fda-47e7-8dcb-794b1920d0f1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 13:10:40.985128    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:40.985183    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:40.985183    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:40.985183    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:40.987127    2404 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0318 13:10:40.988110    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:40.988110    2404 round_trippers.go:580]     Audit-Id: 4cc73082-b471-4ce2-ac48-ef04e1336155
	I0318 13:10:40.988164    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:40.988164    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:40.988164    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:40.988164    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:40.988164    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:40 GMT
	I0318 13:10:40.988164    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1877","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 13:10:41.479994    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-456tm
	I0318 13:10:41.480246    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:41.480246    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:41.480246    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:41.483664    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:10:41.483664    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:41.483664    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:41.483664    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:41 GMT
	I0318 13:10:41.483664    2404 round_trippers.go:580]     Audit-Id: 92b2fc3a-3119-4a01-9c04-7efb853bd885
	I0318 13:10:41.483664    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:41.483664    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:41.483664    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:41.484681    2404 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-456tm","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"1a018c55-846b-4dc2-992c-dc8fd82a6c67","resourceVersion":"1770","creationTimestamp":"2024-03-18T12:47:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"24e9a67e-3fda-47e7-8dcb-794b1920d0f1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:47:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"24e9a67e-3fda-47e7-8dcb-794b1920d0f1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 13:10:41.485020    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:41.485020    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:41.485020    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:41.485020    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:41.488639    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:10:41.488639    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:41.488639    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:41.488639    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:41.488639    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:41.489475    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:41.489475    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:41 GMT
	I0318 13:10:41.489475    2404 round_trippers.go:580]     Audit-Id: 6392c288-f3c5-41f3-b1c8-9d0f3a14e2f9
	I0318 13:10:41.489621    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1877","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 13:10:41.982677    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-456tm
	I0318 13:10:41.982755    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:41.982755    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:41.982755    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:41.987446    2404 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:10:41.987446    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:41.988059    2404 round_trippers.go:580]     Audit-Id: cd3e56d1-5114-4b89-b2b6-7919fb813ef3
	I0318 13:10:41.988059    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:41.988059    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:41.988059    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:41.988059    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:41.988059    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:41 GMT
	I0318 13:10:41.988115    2404 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-456tm","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"1a018c55-846b-4dc2-992c-dc8fd82a6c67","resourceVersion":"1770","creationTimestamp":"2024-03-18T12:47:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"24e9a67e-3fda-47e7-8dcb-794b1920d0f1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:47:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"24e9a67e-3fda-47e7-8dcb-794b1920d0f1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 13:10:41.989161    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:41.989302    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:41.989302    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:41.989302    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:41.992327    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:10:41.992327    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:41.992327    2404 round_trippers.go:580]     Audit-Id: 35aa62e8-50de-42c0-b238-7ee56c616ee1
	I0318 13:10:41.992327    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:41.992767    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:41.992767    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:41.992767    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:41.992880    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:41 GMT
	I0318 13:10:41.992968    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1877","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 13:10:41.992968    2404 pod_ready.go:102] pod "coredns-5dd5756b68-456tm" in "kube-system" namespace has status "Ready":"False"
	I0318 13:10:42.476534    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-456tm
	I0318 13:10:42.476534    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:42.476534    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:42.476534    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:42.480431    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:10:42.481100    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:42.481100    2404 round_trippers.go:580]     Audit-Id: 7e766fdf-4966-48ad-94b1-76c8f57f4cc6
	I0318 13:10:42.481100    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:42.481100    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:42.481100    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:42.481100    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:42.481100    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:42 GMT
	I0318 13:10:42.481311    2404 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-456tm","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"1a018c55-846b-4dc2-992c-dc8fd82a6c67","resourceVersion":"1770","creationTimestamp":"2024-03-18T12:47:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"24e9a67e-3fda-47e7-8dcb-794b1920d0f1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:47:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"24e9a67e-3fda-47e7-8dcb-794b1920d0f1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 13:10:42.482063    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:42.482089    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:42.482089    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:42.482089    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:42.485757    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:10:42.485911    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:42.485911    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:42.485911    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:42.485911    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:42.485911    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:42 GMT
	I0318 13:10:42.485911    2404 round_trippers.go:580]     Audit-Id: a61907bd-32bf-40c7-a570-157cc9b50888
	I0318 13:10:42.485911    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:42.486182    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1877","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 13:10:42.978589    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-456tm
	I0318 13:10:42.978589    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:42.978589    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:42.978589    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:42.983180    2404 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:10:42.983988    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:42.983988    2404 round_trippers.go:580]     Audit-Id: a79fbab4-0463-484a-a16e-fbb7cf535df9
	I0318 13:10:42.983988    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:42.983988    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:42.983988    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:42.983988    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:42.983988    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:42 GMT
	I0318 13:10:42.984359    2404 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-456tm","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"1a018c55-846b-4dc2-992c-dc8fd82a6c67","resourceVersion":"1770","creationTimestamp":"2024-03-18T12:47:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"24e9a67e-3fda-47e7-8dcb-794b1920d0f1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:47:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"24e9a67e-3fda-47e7-8dcb-794b1920d0f1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 13:10:42.985533    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:42.985533    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:42.985533    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:42.985533    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:42.988773    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:10:42.988773    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:42.988773    2404 round_trippers.go:580]     Audit-Id: f8b891a6-09b3-41da-a527-502980bde4d8
	I0318 13:10:42.988773    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:42.988773    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:42.988773    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:42.988773    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:42.988773    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:42 GMT
	I0318 13:10:42.989331    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1877","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 13:10:43.482408    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-456tm
	I0318 13:10:43.482494    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:43.482494    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:43.482494    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:43.487256    2404 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:10:43.487292    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:43.487292    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:43.487292    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:43 GMT
	I0318 13:10:43.487292    2404 round_trippers.go:580]     Audit-Id: f64ccdab-d361-4793-8305-b2eec35799f1
	I0318 13:10:43.487292    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:43.487292    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:43.487292    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:43.487292    2404 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-456tm","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"1a018c55-846b-4dc2-992c-dc8fd82a6c67","resourceVersion":"1770","creationTimestamp":"2024-03-18T12:47:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"24e9a67e-3fda-47e7-8dcb-794b1920d0f1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:47:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"24e9a67e-3fda-47e7-8dcb-794b1920d0f1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 13:10:43.488523    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:43.488523    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:43.488681    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:43.488681    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:43.494191    2404 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 13:10:43.494217    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:43.494217    2404 round_trippers.go:580]     Audit-Id: ce54be56-b7be-4e13-ae9a-94b6677eded6
	I0318 13:10:43.494217    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:43.494290    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:43.494290    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:43.494361    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:43.494361    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:43 GMT
	I0318 13:10:43.495047    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1877","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 13:10:43.984003    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-456tm
	I0318 13:10:43.984113    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:43.984113    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:43.984113    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:43.989624    2404 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 13:10:43.989735    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:43.989735    2404 round_trippers.go:580]     Audit-Id: e4c938be-39b2-487e-b251-9a7198c283bf
	I0318 13:10:43.989735    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:43.989735    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:43.989735    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:43.989735    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:43.989735    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:43 GMT
	I0318 13:10:43.989929    2404 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-456tm","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"1a018c55-846b-4dc2-992c-dc8fd82a6c67","resourceVersion":"1770","creationTimestamp":"2024-03-18T12:47:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"24e9a67e-3fda-47e7-8dcb-794b1920d0f1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:47:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"24e9a67e-3fda-47e7-8dcb-794b1920d0f1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 13:10:43.990786    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:43.990786    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:43.990786    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:43.990786    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:43.993093    2404 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 13:10:43.993093    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:43.993774    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:43 GMT
	I0318 13:10:43.993774    2404 round_trippers.go:580]     Audit-Id: 8aa7f6c5-9d4d-4d7b-bd5b-432a177f9a05
	I0318 13:10:43.993774    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:43.993881    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:43.993881    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:43.993881    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:43.993956    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1877","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 13:10:43.994850    2404 pod_ready.go:102] pod "coredns-5dd5756b68-456tm" in "kube-system" namespace has status "Ready":"False"
	I0318 13:10:44.479327    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-456tm
	I0318 13:10:44.479327    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:44.479327    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:44.479327    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:44.483062    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:10:44.483062    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:44.484093    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:44.484093    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:44.484142    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:44 GMT
	I0318 13:10:44.484142    2404 round_trippers.go:580]     Audit-Id: fd9a6c25-80c1-40cb-ac1d-ccbb829d6a19
	I0318 13:10:44.484142    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:44.484142    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:44.484413    2404 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-456tm","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"1a018c55-846b-4dc2-992c-dc8fd82a6c67","resourceVersion":"1770","creationTimestamp":"2024-03-18T12:47:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"24e9a67e-3fda-47e7-8dcb-794b1920d0f1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:47:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"24e9a67e-3fda-47e7-8dcb-794b1920d0f1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 13:10:44.485368    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:44.485368    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:44.485471    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:44.485471    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:44.487912    2404 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 13:10:44.487912    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:44.487912    2404 round_trippers.go:580]     Audit-Id: 3cb33a90-e00d-4cf9-ae1c-1629828c6ecd
	I0318 13:10:44.488864    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:44.488864    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:44.488864    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:44.488864    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:44.488864    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:44 GMT
	I0318 13:10:44.489273    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1877","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 13:10:44.978731    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-456tm
	I0318 13:10:44.978731    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:44.978731    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:44.978810    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:44.987286    2404 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0318 13:10:44.987286    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:44.987286    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:44.987286    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:44.987286    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:44.987286    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:44 GMT
	I0318 13:10:44.987286    2404 round_trippers.go:580]     Audit-Id: d225c022-38d6-4362-af60-a8f9fd56cfd3
	I0318 13:10:44.987286    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:44.987506    2404 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-456tm","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"1a018c55-846b-4dc2-992c-dc8fd82a6c67","resourceVersion":"1770","creationTimestamp":"2024-03-18T12:47:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"24e9a67e-3fda-47e7-8dcb-794b1920d0f1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:47:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"24e9a67e-3fda-47e7-8dcb-794b1920d0f1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 13:10:44.988185    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:44.988185    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:44.988272    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:44.988272    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:44.990440    2404 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 13:10:44.990440    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:44.991486    2404 round_trippers.go:580]     Audit-Id: cd99a42b-a806-4b58-98f9-acc1c4cb7d12
	I0318 13:10:44.991486    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:44.991486    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:44.991486    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:44.991486    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:44.991531    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:44 GMT
	I0318 13:10:44.991774    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1877","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 13:10:45.482292    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-456tm
	I0318 13:10:45.482510    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:45.482510    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:45.482510    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:45.486557    2404 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:10:45.487443    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:45.487443    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:45.487443    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:45 GMT
	I0318 13:10:45.487443    2404 round_trippers.go:580]     Audit-Id: fe2a362d-6db7-4b5b-8fa3-4d7e96bca914
	I0318 13:10:45.487443    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:45.487538    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:45.487538    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:45.487690    2404 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-456tm","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"1a018c55-846b-4dc2-992c-dc8fd82a6c67","resourceVersion":"1770","creationTimestamp":"2024-03-18T12:47:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"24e9a67e-3fda-47e7-8dcb-794b1920d0f1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:47:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"24e9a67e-3fda-47e7-8dcb-794b1920d0f1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 13:10:45.488312    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:45.488312    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:45.488312    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:45.488312    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:45.491902    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:10:45.491902    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:45.491902    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:45.491902    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:45.492257    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:45 GMT
	I0318 13:10:45.492257    2404 round_trippers.go:580]     Audit-Id: 067bf348-8605-4e13-8d90-1e5893011f6b
	I0318 13:10:45.492257    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:45.492257    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:45.492389    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1877","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 13:10:45.986705    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-456tm
	I0318 13:10:45.986705    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:45.986705    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:45.986705    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:45.991116    2404 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:10:45.991116    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:45.991116    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:45.991116    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:45.991116    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:45 GMT
	I0318 13:10:45.991116    2404 round_trippers.go:580]     Audit-Id: 7150c9ca-0eae-4421-83e3-83f94b683298
	I0318 13:10:45.991116    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:45.991116    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:45.991116    2404 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-456tm","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"1a018c55-846b-4dc2-992c-dc8fd82a6c67","resourceVersion":"1770","creationTimestamp":"2024-03-18T12:47:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"24e9a67e-3fda-47e7-8dcb-794b1920d0f1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:47:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"24e9a67e-3fda-47e7-8dcb-794b1920d0f1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 13:10:45.992086    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:45.992209    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:45.992209    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:45.992209    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:45.996415    2404 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:10:45.996415    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:45.996906    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:45.996906    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:45 GMT
	I0318 13:10:45.996906    2404 round_trippers.go:580]     Audit-Id: f60b9948-446e-4afb-b35c-4993e1ddc35d
	I0318 13:10:45.996906    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:45.996906    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:45.996985    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:45.997094    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1877","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 13:10:45.997943    2404 pod_ready.go:102] pod "coredns-5dd5756b68-456tm" in "kube-system" namespace has status "Ready":"False"
	I0318 13:10:46.484463    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-456tm
	I0318 13:10:46.484463    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:46.484463    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:46.484463    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:46.488896    2404 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:10:46.489467    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:46.489467    2404 round_trippers.go:580]     Audit-Id: f31b74d7-f902-4f5e-af6d-98b2a04652dc
	I0318 13:10:46.489467    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:46.489467    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:46.489467    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:46.489467    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:46.489467    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:46 GMT
	I0318 13:10:46.489744    2404 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-456tm","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"1a018c55-846b-4dc2-992c-dc8fd82a6c67","resourceVersion":"1770","creationTimestamp":"2024-03-18T12:47:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"24e9a67e-3fda-47e7-8dcb-794b1920d0f1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:47:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"24e9a67e-3fda-47e7-8dcb-794b1920d0f1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 13:10:46.490568    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:46.490645    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:46.490645    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:46.490645    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:46.492965    2404 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 13:10:46.492965    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:46.492965    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:46.492965    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:46 GMT
	I0318 13:10:46.492965    2404 round_trippers.go:580]     Audit-Id: 7f05d0ee-1d05-4feb-9de9-d5b12a8611fd
	I0318 13:10:46.492965    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:46.492965    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:46.492965    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:46.492965    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1877","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 13:10:46.986417    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-456tm
	I0318 13:10:46.986681    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:46.986681    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:46.986681    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:46.990614    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:10:46.991427    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:46.991427    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:46 GMT
	I0318 13:10:46.991427    2404 round_trippers.go:580]     Audit-Id: 416f72d6-81cf-402d-b881-d4ede7dddc62
	I0318 13:10:46.991427    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:46.991427    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:46.991427    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:46.991427    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:46.991427    2404 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-456tm","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"1a018c55-846b-4dc2-992c-dc8fd82a6c67","resourceVersion":"1770","creationTimestamp":"2024-03-18T12:47:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"24e9a67e-3fda-47e7-8dcb-794b1920d0f1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:47:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"24e9a67e-3fda-47e7-8dcb-794b1920d0f1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 13:10:46.992299    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:46.992299    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:46.992299    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:46.992299    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:46.994873    2404 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 13:10:46.995640    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:46.995640    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:46 GMT
	I0318 13:10:46.995640    2404 round_trippers.go:580]     Audit-Id: dc744201-3f01-4ebe-8975-a08ad8de9d4f
	I0318 13:10:46.995640    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:46.995640    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:46.995640    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:46.995640    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:46.996010    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1877","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 13:10:47.487198    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-456tm
	I0318 13:10:47.487269    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:47.487269    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:47.487269    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:47.491137    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:10:47.491757    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:47.491757    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:47.491757    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:47.491757    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:47.491757    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:47.491757    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:47 GMT
	I0318 13:10:47.491757    2404 round_trippers.go:580]     Audit-Id: 41aa1644-7a32-4f69-a2d6-bca860ec8cd1
	I0318 13:10:47.491932    2404 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-456tm","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"1a018c55-846b-4dc2-992c-dc8fd82a6c67","resourceVersion":"1770","creationTimestamp":"2024-03-18T12:47:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"24e9a67e-3fda-47e7-8dcb-794b1920d0f1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:47:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"24e9a67e-3fda-47e7-8dcb-794b1920d0f1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 13:10:47.492807    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:47.492807    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:47.492807    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:47.492807    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:47.495928    2404 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 13:10:47.495928    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:47.496028    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:47 GMT
	I0318 13:10:47.496028    2404 round_trippers.go:580]     Audit-Id: cca93d4b-a8e2-4db1-9136-065e3d79ea7e
	I0318 13:10:47.496028    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:47.496028    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:47.496028    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:47.496028    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:47.496686    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1877","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 13:10:47.985416    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-456tm
	I0318 13:10:47.985416    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:47.985416    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:47.985416    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:47.989014    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:10:47.989014    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:47.989856    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:47.989856    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:47.989856    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:47.989856    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:47.989856    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:47 GMT
	I0318 13:10:47.989856    2404 round_trippers.go:580]     Audit-Id: 3bb56cbc-d9b3-4ca7-b7bc-fcd939b2a8c2
	I0318 13:10:47.990124    2404 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-456tm","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"1a018c55-846b-4dc2-992c-dc8fd82a6c67","resourceVersion":"1770","creationTimestamp":"2024-03-18T12:47:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"24e9a67e-3fda-47e7-8dcb-794b1920d0f1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:47:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"24e9a67e-3fda-47e7-8dcb-794b1920d0f1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 13:10:47.990825    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:47.990890    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:47.990890    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:47.990890    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:47.994211    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:10:47.994211    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:47.994211    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:47.994295    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:47 GMT
	I0318 13:10:47.994295    2404 round_trippers.go:580]     Audit-Id: f17d0033-23c7-4e7a-ae16-b1d1851ff366
	I0318 13:10:47.994295    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:47.994295    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:47.994295    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:47.994537    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1877","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 13:10:48.481733    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-456tm
	I0318 13:10:48.481733    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:48.481733    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:48.481733    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:48.486541    2404 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:10:48.486541    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:48.486541    2404 round_trippers.go:580]     Audit-Id: b9ba3fb0-eb92-4ab1-bc13-5ddbf697ffaa
	I0318 13:10:48.486541    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:48.486541    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:48.486541    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:48.486541    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:48.486541    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:48 GMT
	I0318 13:10:48.486815    2404 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-456tm","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"1a018c55-846b-4dc2-992c-dc8fd82a6c67","resourceVersion":"1770","creationTimestamp":"2024-03-18T12:47:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"24e9a67e-3fda-47e7-8dcb-794b1920d0f1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:47:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"24e9a67e-3fda-47e7-8dcb-794b1920d0f1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 13:10:48.488055    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:48.488055    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:48.488055    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:48.488055    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:48.490353    2404 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 13:10:48.490353    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:48.490353    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:48.490353    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:48.491127    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:48.491127    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:48.491127    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:48 GMT
	I0318 13:10:48.491127    2404 round_trippers.go:580]     Audit-Id: 8481c207-8c43-48d4-a212-612a02e4ff51
	I0318 13:10:48.491323    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1877","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 13:10:48.492171    2404 pod_ready.go:102] pod "coredns-5dd5756b68-456tm" in "kube-system" namespace has status "Ready":"False"
	I0318 13:10:48.981996    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-456tm
	I0318 13:10:48.982438    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:48.982438    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:48.982438    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:48.986900    2404 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:10:48.987157    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:48.987157    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:48.987157    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:48 GMT
	I0318 13:10:48.987157    2404 round_trippers.go:580]     Audit-Id: 4f166292-bae5-4a0f-b7b5-9aa0d9a00022
	I0318 13:10:48.987157    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:48.987157    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:48.987157    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:48.987475    2404 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-456tm","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"1a018c55-846b-4dc2-992c-dc8fd82a6c67","resourceVersion":"1770","creationTimestamp":"2024-03-18T12:47:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"24e9a67e-3fda-47e7-8dcb-794b1920d0f1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:47:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"24e9a67e-3fda-47e7-8dcb-794b1920d0f1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 13:10:48.988112    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:48.988112    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:48.988112    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:48.988112    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:48.991201    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:10:48.991201    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:48.991201    2404 round_trippers.go:580]     Audit-Id: f1a42f28-b083-41d5-a960-43df89aea8c5
	I0318 13:10:48.991201    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:48.991512    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:48.991512    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:48.991512    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:48.991512    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:48 GMT
	I0318 13:10:48.991703    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1877","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 13:10:49.479688    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-456tm
	I0318 13:10:49.479688    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:49.479688    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:49.479688    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:49.483372    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:10:49.483372    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:49.483372    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:49 GMT
	I0318 13:10:49.483372    2404 round_trippers.go:580]     Audit-Id: fdec251d-7a25-4252-b470-f54142626b0f
	I0318 13:10:49.483372    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:49.483585    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:49.483585    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:49.483585    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:49.483885    2404 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-456tm","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"1a018c55-846b-4dc2-992c-dc8fd82a6c67","resourceVersion":"1770","creationTimestamp":"2024-03-18T12:47:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"24e9a67e-3fda-47e7-8dcb-794b1920d0f1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:47:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"24e9a67e-3fda-47e7-8dcb-794b1920d0f1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 13:10:49.484684    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:49.484755    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:49.484755    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:49.484755    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:49.488241    2404 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 13:10:49.488276    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:49.488276    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:49.488276    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:49 GMT
	I0318 13:10:49.488276    2404 round_trippers.go:580]     Audit-Id: 7047bc27-1271-45d5-9026-2b455817166e
	I0318 13:10:49.488276    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:49.488276    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:49.488276    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:49.488276    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1877","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 13:10:49.979207    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-456tm
	I0318 13:10:49.979562    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:49.979562    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:49.979562    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:49.983370    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:10:49.983627    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:49.983627    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:49.983627    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:49.983627    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:49.983627    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:49.983627    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:49 GMT
	I0318 13:10:49.983627    2404 round_trippers.go:580]     Audit-Id: cdb06616-328c-4438-a770-f4ece0abf438
	I0318 13:10:49.983955    2404 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-456tm","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"1a018c55-846b-4dc2-992c-dc8fd82a6c67","resourceVersion":"1770","creationTimestamp":"2024-03-18T12:47:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"24e9a67e-3fda-47e7-8dcb-794b1920d0f1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:47:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"24e9a67e-3fda-47e7-8dcb-794b1920d0f1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 13:10:49.984636    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:49.984686    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:49.984686    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:49.984686    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:49.987381    2404 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 13:10:49.987381    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:49.987381    2404 round_trippers.go:580]     Audit-Id: a95a1922-591a-4847-9566-4a410e54c986
	I0318 13:10:49.987381    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:49.987381    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:49.987808    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:49.987808    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:49.987887    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:49 GMT
	I0318 13:10:49.988124    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1877","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 13:10:50.481174    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-456tm
	I0318 13:10:50.481174    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:50.481174    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:50.481174    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:50.487022    2404 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 13:10:50.487022    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:50.487022    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:50 GMT
	I0318 13:10:50.487022    2404 round_trippers.go:580]     Audit-Id: 9b33bae2-be39-46e5-aa6f-7983f0bee59e
	I0318 13:10:50.487022    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:50.487022    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:50.487022    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:50.487022    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:50.487022    2404 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-456tm","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"1a018c55-846b-4dc2-992c-dc8fd82a6c67","resourceVersion":"1770","creationTimestamp":"2024-03-18T12:47:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"24e9a67e-3fda-47e7-8dcb-794b1920d0f1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:47:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"24e9a67e-3fda-47e7-8dcb-794b1920d0f1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 13:10:50.487766    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:50.487766    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:50.487766    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:50.487766    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:50.491353    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:10:50.491353    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:50.491353    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:50.491560    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:50.491560    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:50.491592    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:50 GMT
	I0318 13:10:50.491592    2404 round_trippers.go:580]     Audit-Id: a119ba33-c6ac-4c12-acaa-dbdd26db0a9f
	I0318 13:10:50.491592    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:50.491823    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1877","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 13:10:50.492249    2404 pod_ready.go:102] pod "coredns-5dd5756b68-456tm" in "kube-system" namespace has status "Ready":"False"
	I0318 13:10:50.982977    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-456tm
	I0318 13:10:50.982977    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:50.982977    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:50.982977    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:50.988388    2404 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 13:10:50.988388    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:50.988388    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:50.988388    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:50.988388    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:50.988388    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:50 GMT
	I0318 13:10:50.988388    2404 round_trippers.go:580]     Audit-Id: abba9391-e84b-4b72-a1a2-217bf675347f
	I0318 13:10:50.988388    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:50.988388    2404 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-456tm","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"1a018c55-846b-4dc2-992c-dc8fd82a6c67","resourceVersion":"1770","creationTimestamp":"2024-03-18T12:47:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"24e9a67e-3fda-47e7-8dcb-794b1920d0f1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:47:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"24e9a67e-3fda-47e7-8dcb-794b1920d0f1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 13:10:50.989106    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:50.989106    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:50.989106    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:50.989106    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:50.993132    2404 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:10:50.993187    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:50.993187    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:50.993187    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:50.993187    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:50.993187    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:50 GMT
	I0318 13:10:50.993187    2404 round_trippers.go:580]     Audit-Id: c7dfc538-81dd-4929-a20d-ee5739ef3070
	I0318 13:10:50.993187    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:50.995138    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1877","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 13:10:51.482697    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-456tm
	I0318 13:10:51.482697    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:51.482697    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:51.482697    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:51.486369    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:10:51.486369    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:51.486369    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:51.486369    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:51.486369    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:51.486597    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:51.486597    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:51 GMT
	I0318 13:10:51.486597    2404 round_trippers.go:580]     Audit-Id: ea3ecfc4-3bcb-4fbd-95c1-a1d05d5335e9
	I0318 13:10:51.486754    2404 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-456tm","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"1a018c55-846b-4dc2-992c-dc8fd82a6c67","resourceVersion":"1770","creationTimestamp":"2024-03-18T12:47:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"24e9a67e-3fda-47e7-8dcb-794b1920d0f1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:47:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"24e9a67e-3fda-47e7-8dcb-794b1920d0f1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 13:10:51.487607    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:51.487607    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:51.487607    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:51.487607    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:51.490393    2404 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 13:10:51.490656    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:51.490656    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:51.490656    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:51.490656    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:51 GMT
	I0318 13:10:51.490656    2404 round_trippers.go:580]     Audit-Id: e02c38b6-f913-4469-8add-2f8b9c4541a4
	I0318 13:10:51.490656    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:51.490656    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:51.490929    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1877","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 13:10:51.984433    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-456tm
	I0318 13:10:51.984649    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:51.984649    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:51.984705    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:51.994543    2404 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0318 13:10:51.994543    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:51.994543    2404 round_trippers.go:580]     Audit-Id: d257e6ef-6f57-4978-b250-2d5661824f1b
	I0318 13:10:51.994543    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:51.994543    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:51.994543    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:51.994543    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:51.994543    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:51 GMT
	I0318 13:10:51.994543    2404 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-456tm","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"1a018c55-846b-4dc2-992c-dc8fd82a6c67","resourceVersion":"1770","creationTimestamp":"2024-03-18T12:47:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"24e9a67e-3fda-47e7-8dcb-794b1920d0f1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:47:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"24e9a67e-3fda-47e7-8dcb-794b1920d0f1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 13:10:51.995566    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:51.995566    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:51.995566    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:51.995566    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:51.999714    2404 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:10:51.999714    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:51.999714    2404 round_trippers.go:580]     Audit-Id: 05bfa3dc-9542-47a3-84d2-1131755602be
	I0318 13:10:51.999714    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:52.000175    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:52.000175    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:52.000175    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:52.000175    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:52 GMT
	I0318 13:10:52.000441    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1877","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 13:10:52.484646    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-456tm
	I0318 13:10:52.484896    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:52.484896    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:52.484993    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:52.490330    2404 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 13:10:52.490330    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:52.490330    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:52.490330    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:52.490330    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:52.490330    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:52.490330    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:52 GMT
	I0318 13:10:52.490330    2404 round_trippers.go:580]     Audit-Id: 7f957243-eb7e-430a-b18d-f247c4a5acf2
	I0318 13:10:52.490964    2404 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-456tm","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"1a018c55-846b-4dc2-992c-dc8fd82a6c67","resourceVersion":"1770","creationTimestamp":"2024-03-18T12:47:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"24e9a67e-3fda-47e7-8dcb-794b1920d0f1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:47:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"24e9a67e-3fda-47e7-8dcb-794b1920d0f1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 13:10:52.491555    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:52.491703    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:52.491703    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:52.491703    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:52.495462    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:10:52.495579    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:52.495579    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:52 GMT
	I0318 13:10:52.495579    2404 round_trippers.go:580]     Audit-Id: af1634a0-d0a8-4dfd-9db1-e9456c653acf
	I0318 13:10:52.495661    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:52.495661    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:52.495661    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:52.495686    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:52.495686    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1877","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 13:10:52.496355    2404 pod_ready.go:102] pod "coredns-5dd5756b68-456tm" in "kube-system" namespace has status "Ready":"False"
	I0318 13:10:52.986051    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-456tm
	I0318 13:10:52.986051    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:52.986104    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:52.986104    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:52.989713    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:10:52.990347    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:52.990347    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:52.990347    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:52 GMT
	I0318 13:10:52.990347    2404 round_trippers.go:580]     Audit-Id: 89e19e1b-adf5-4196-801e-2148719a02e1
	I0318 13:10:52.990347    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:52.990347    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:52.990347    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:52.990579    2404 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-456tm","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"1a018c55-846b-4dc2-992c-dc8fd82a6c67","resourceVersion":"1770","creationTimestamp":"2024-03-18T12:47:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"24e9a67e-3fda-47e7-8dcb-794b1920d0f1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:47:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"24e9a67e-3fda-47e7-8dcb-794b1920d0f1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 13:10:52.990828    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:52.990828    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:52.990828    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:52.990828    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:52.994740    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:10:52.994862    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:52.994862    2404 round_trippers.go:580]     Audit-Id: 187062d9-a47b-4785-94b6-93e4be889a5a
	I0318 13:10:52.994862    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:52.994862    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:52.994862    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:52.994862    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:52.994862    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:52 GMT
	I0318 13:10:52.995002    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1877","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 13:10:53.478323    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-456tm
	I0318 13:10:53.478570    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:53.478570    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:53.478570    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:53.482495    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:10:53.482495    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:53.482495    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:53 GMT
	I0318 13:10:53.482495    2404 round_trippers.go:580]     Audit-Id: 997a7652-bc70-408d-8da8-480a0ffb3358
	I0318 13:10:53.482495    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:53.482495    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:53.482495    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:53.482495    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:53.486573    2404 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-456tm","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"1a018c55-846b-4dc2-992c-dc8fd82a6c67","resourceVersion":"1770","creationTimestamp":"2024-03-18T12:47:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"24e9a67e-3fda-47e7-8dcb-794b1920d0f1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:47:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"24e9a67e-3fda-47e7-8dcb-794b1920d0f1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 13:10:53.487645    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:53.487645    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:53.487645    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:53.487645    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:53.494256    2404 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0318 13:10:53.494651    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:53.494651    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:53.494651    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:53.494651    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:53.494651    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:53 GMT
	I0318 13:10:53.494651    2404 round_trippers.go:580]     Audit-Id: b442c57d-7fd3-4000-892b-a0c43383a3ec
	I0318 13:10:53.494718    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:53.494997    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1877","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 13:10:53.988118    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-456tm
	I0318 13:10:53.988118    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:53.988118    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:53.988118    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:53.991757    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:10:53.991757    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:53.991757    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:53.991757    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:53 GMT
	I0318 13:10:53.991757    2404 round_trippers.go:580]     Audit-Id: 698af6ae-8cb7-41e0-88e5-39406633bffd
	I0318 13:10:53.991757    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:53.991757    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:53.991757    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:53.992858    2404 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-456tm","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"1a018c55-846b-4dc2-992c-dc8fd82a6c67","resourceVersion":"1770","creationTimestamp":"2024-03-18T12:47:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"24e9a67e-3fda-47e7-8dcb-794b1920d0f1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:47:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"24e9a67e-3fda-47e7-8dcb-794b1920d0f1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0318 13:10:53.993523    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:53.993523    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:53.993523    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:53.993523    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:54.000911    2404 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0318 13:10:54.000911    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:54.000911    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:54.000911    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:54.000911    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:54.000911    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:54 GMT
	I0318 13:10:54.000911    2404 round_trippers.go:580]     Audit-Id: b2f577a2-e667-4358-9037-072a26bcaf12
	I0318 13:10:54.000911    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:54.000911    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1877","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 13:10:54.475579    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-456tm
	I0318 13:10:54.475795    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:54.475795    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:54.475795    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:54.479835    2404 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:10:54.479835    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:54.480169    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:54.480169    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:54.480169    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:54.480169    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:54 GMT
	I0318 13:10:54.480169    2404 round_trippers.go:580]     Audit-Id: 63be72fc-f63d-4f4a-9c12-904bc32a0615
	I0318 13:10:54.480169    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:54.480393    2404 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-456tm","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"1a018c55-846b-4dc2-992c-dc8fd82a6c67","resourceVersion":"1913","creationTimestamp":"2024-03-18T12:47:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"24e9a67e-3fda-47e7-8dcb-794b1920d0f1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:47:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"24e9a67e-3fda-47e7-8dcb-794b1920d0f1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6723 chars]
	I0318 13:10:54.480688    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:54.480688    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:54.480688    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:54.480688    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:54.485927    2404 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 13:10:54.486104    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:54.486104    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:54.486104    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:54.486104    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:54.486104    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:54.486104    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:54 GMT
	I0318 13:10:54.486104    2404 round_trippers.go:580]     Audit-Id: 8de6fc85-d5a2-4c00-83e3-42e9438d5461
	I0318 13:10:54.486306    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1877","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 13:10:54.976168    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-456tm
	I0318 13:10:54.976168    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:54.976168    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:54.976168    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:54.980794    2404 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:10:54.980794    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:54.980794    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:54.980794    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:54.981481    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:54.981481    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:54 GMT
	I0318 13:10:54.981481    2404 round_trippers.go:580]     Audit-Id: 38f77a3c-9187-4702-97c9-02a4ab42e6cd
	I0318 13:10:54.981481    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:54.981844    2404 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-456tm","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"1a018c55-846b-4dc2-992c-dc8fd82a6c67","resourceVersion":"1913","creationTimestamp":"2024-03-18T12:47:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"24e9a67e-3fda-47e7-8dcb-794b1920d0f1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:47:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"24e9a67e-3fda-47e7-8dcb-794b1920d0f1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6723 chars]
	I0318 13:10:54.982707    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:54.982707    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:54.982707    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:54.982832    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:54.989214    2404 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0318 13:10:54.989214    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:54.989214    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:54.989214    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:54 GMT
	I0318 13:10:54.989362    2404 round_trippers.go:580]     Audit-Id: 923f939a-ad16-4dbd-959c-3ed563ff76b0
	I0318 13:10:54.989362    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:54.989362    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:54.989362    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:54.989497    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1877","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 13:10:54.990389    2404 pod_ready.go:102] pod "coredns-5dd5756b68-456tm" in "kube-system" namespace has status "Ready":"False"
	I0318 13:10:55.475525    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-456tm
	I0318 13:10:55.475637    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:55.475637    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:55.475637    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:55.479151    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:10:55.479243    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:55.479243    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:55.479243    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:55.479243    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:55 GMT
	I0318 13:10:55.479243    2404 round_trippers.go:580]     Audit-Id: f0271434-7a47-46b8-8b34-ca55bca9d828
	I0318 13:10:55.479243    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:55.479243    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:55.479456    2404 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-456tm","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"1a018c55-846b-4dc2-992c-dc8fd82a6c67","resourceVersion":"1918","creationTimestamp":"2024-03-18T12:47:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"24e9a67e-3fda-47e7-8dcb-794b1920d0f1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:47:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"24e9a67e-3fda-47e7-8dcb-794b1920d0f1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6494 chars]
	I0318 13:10:55.480196    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:55.480196    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:55.480196    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:55.480196    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:55.483516    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:10:55.483516    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:55.483516    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:55.483873    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:55.483873    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:55 GMT
	I0318 13:10:55.483873    2404 round_trippers.go:580]     Audit-Id: 27cbd7a5-2661-4105-a797-0cbce9578172
	I0318 13:10:55.483873    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:55.483873    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:55.484322    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1877","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 13:10:55.484631    2404 pod_ready.go:92] pod "coredns-5dd5756b68-456tm" in "kube-system" namespace has status "Ready":"True"
	I0318 13:10:55.484631    2404 pod_ready.go:81] duration metric: took 31.5125078s for pod "coredns-5dd5756b68-456tm" in "kube-system" namespace to be "Ready" ...
	I0318 13:10:55.484631    2404 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-894400" in "kube-system" namespace to be "Ready" ...
	I0318 13:10:55.484631    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-894400
	I0318 13:10:55.484631    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:55.484631    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:55.484631    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:55.488596    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:10:55.488596    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:55.488596    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:55.488596    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:55.488596    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:55 GMT
	I0318 13:10:55.488701    2404 round_trippers.go:580]     Audit-Id: 035fe736-4b3b-4d1f-9f00-557ffff7b876
	I0318 13:10:55.488701    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:55.488701    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:55.488907    2404 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-894400","namespace":"kube-system","uid":"d4c040b9-a604-4a0d-80ee-7436541af60c","resourceVersion":"1841","creationTimestamp":"2024-03-18T13:09:49Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.30.130.156:2379","kubernetes.io/config.hash":"743a549b698f93b8586a236f83c90556","kubernetes.io/config.mirror":"743a549b698f93b8586a236f83c90556","kubernetes.io/config.seen":"2024-03-18T13:09:42.924670260Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T13:09:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise
-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 5873 chars]
	I0318 13:10:55.488907    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:55.488907    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:55.488907    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:55.489450    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:55.492734    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:10:55.492944    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:55.492944    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:55.492998    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:55.492998    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:55.493023    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:55.493023    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:55 GMT
	I0318 13:10:55.493023    2404 round_trippers.go:580]     Audit-Id: b3ef88c7-cb0e-47b0-aad6-1f5ab1482be0
	I0318 13:10:55.493203    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1877","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 13:10:55.493203    2404 pod_ready.go:92] pod "etcd-multinode-894400" in "kube-system" namespace has status "Ready":"True"
	I0318 13:10:55.493203    2404 pod_ready.go:81] duration metric: took 8.572ms for pod "etcd-multinode-894400" in "kube-system" namespace to be "Ready" ...
	I0318 13:10:55.493203    2404 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-894400" in "kube-system" namespace to be "Ready" ...
	I0318 13:10:55.493203    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-894400
	I0318 13:10:55.493203    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:55.493203    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:55.493203    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:55.496151    2404 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 13:10:55.497236    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:55.497236    2404 round_trippers.go:580]     Audit-Id: 9665dc58-22f8-40ff-bc3d-9b3f5ec364c3
	I0318 13:10:55.497236    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:55.497236    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:55.497275    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:55.497275    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:55.497275    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:55 GMT
	I0318 13:10:55.497482    2404 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-894400","namespace":"kube-system","uid":"46152b8e-0bda-427e-a1ad-c79506b56763","resourceVersion":"1812","creationTimestamp":"2024-03-18T13:09:49Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.30.130.156:8443","kubernetes.io/config.hash":"6096c2227c4230453f65f86ebdcd0d95","kubernetes.io/config.mirror":"6096c2227c4230453f65f86ebdcd0d95","kubernetes.io/config.seen":"2024-03-18T13:09:42.869643374Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T13:09:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.k
ubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernete [truncated 7409 chars]
	I0318 13:10:55.497856    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:55.497856    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:55.497856    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:55.497856    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:55.500418    2404 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 13:10:55.500418    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:55.500700    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:55.500700    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:55.500700    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:55 GMT
	I0318 13:10:55.500700    2404 round_trippers.go:580]     Audit-Id: 5a8d3715-1101-4bd7-87f4-ac52c03c1af4
	I0318 13:10:55.500700    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:55.500700    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:55.500943    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1877","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 13:10:55.501505    2404 pod_ready.go:92] pod "kube-apiserver-multinode-894400" in "kube-system" namespace has status "Ready":"True"
	I0318 13:10:55.501505    2404 pod_ready.go:81] duration metric: took 8.3024ms for pod "kube-apiserver-multinode-894400" in "kube-system" namespace to be "Ready" ...
	I0318 13:10:55.501505    2404 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-894400" in "kube-system" namespace to be "Ready" ...
	I0318 13:10:55.501505    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-894400
	I0318 13:10:55.501505    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:55.501505    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:55.501505    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:55.506127    2404 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:10:55.506173    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:55.506173    2404 round_trippers.go:580]     Audit-Id: 54dff443-4397-4b3b-acaa-a35b854ff957
	I0318 13:10:55.506173    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:55.506173    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:55.506217    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:55.506217    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:55.506217    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:55 GMT
	I0318 13:10:55.506446    2404 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-894400","namespace":"kube-system","uid":"4ad5fc15-53ba-4ebb-9a63-b8572cd9c834","resourceVersion":"1813","creationTimestamp":"2024-03-18T12:47:26Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"d340aced56ba169ecac1e3ac58ad57fe","kubernetes.io/config.mirror":"d340aced56ba169ecac1e3ac58ad57fe","kubernetes.io/config.seen":"2024-03-18T12:47:20.228444892Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:47:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7179 chars]
	I0318 13:10:55.507248    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:55.507307    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:55.507307    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:55.507349    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:55.510797    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:10:55.510797    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:55.510797    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:55.510797    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:55 GMT
	I0318 13:10:55.510797    2404 round_trippers.go:580]     Audit-Id: 7ed0993a-1b7d-48dd-a0a8-d499a98730dc
	I0318 13:10:55.510797    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:55.510797    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:55.510797    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:55.511478    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1877","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 13:10:55.512114    2404 pod_ready.go:92] pod "kube-controller-manager-multinode-894400" in "kube-system" namespace has status "Ready":"True"
	I0318 13:10:55.512177    2404 pod_ready.go:81] duration metric: took 10.6717ms for pod "kube-controller-manager-multinode-894400" in "kube-system" namespace to be "Ready" ...
	I0318 13:10:55.512210    2404 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-745w9" in "kube-system" namespace to be "Ready" ...
	I0318 13:10:55.512238    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/namespaces/kube-system/pods/kube-proxy-745w9
	I0318 13:10:55.512238    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:55.512238    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:55.512238    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:55.515601    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:10:55.515601    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:55.515601    2404 round_trippers.go:580]     Audit-Id: 33982c52-830a-48f8-b259-ec4be231cafb
	I0318 13:10:55.515601    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:55.515601    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:55.515601    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:55.515601    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:55.515601    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:55 GMT
	I0318 13:10:55.515601    2404 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-745w9","generateName":"kube-proxy-","namespace":"kube-system","uid":"d385fe06-f516-440d-b9ed-37c2d4a81050","resourceVersion":"1698","creationTimestamp":"2024-03-18T12:55:05Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"da714af0-88aa-4fc4-b73d-4de837f06114","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:55:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"da714af0-88aa-4fc4-b73d-4de837f06114\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5771 chars]
	I0318 13:10:55.516501    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400-m03
	I0318 13:10:55.516501    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:55.516501    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:55.516501    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:55.519262    2404 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 13:10:55.519876    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:55.519876    2404 round_trippers.go:580]     Audit-Id: 8eb906a0-0bfd-48c7-b6fd-39f5fb7c7362
	I0318 13:10:55.519876    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:55.519876    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:55.519876    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:55.519876    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:55.519876    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:55 GMT
	I0318 13:10:55.520009    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m03","uid":"1f8e594e-d4cc-4247-8064-01ac67ea2b15","resourceVersion":"1855","creationTimestamp":"2024-03-18T13:05:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T13_05_26_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T13:05:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4400 chars]
	I0318 13:10:55.520009    2404 pod_ready.go:97] node "multinode-894400-m03" hosting pod "kube-proxy-745w9" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-894400-m03" has status "Ready":"Unknown"
	I0318 13:10:55.520009    2404 pod_ready.go:81] duration metric: took 7.7716ms for pod "kube-proxy-745w9" in "kube-system" namespace to be "Ready" ...
	E0318 13:10:55.520009    2404 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-894400-m03" hosting pod "kube-proxy-745w9" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-894400-m03" has status "Ready":"Unknown"
	I0318 13:10:55.520009    2404 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-8bdmn" in "kube-system" namespace to be "Ready" ...
	I0318 13:10:55.677453    2404 request.go:629] Waited for 157.2836ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.130.156:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8bdmn
	I0318 13:10:55.677705    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8bdmn
	I0318 13:10:55.677737    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:55.677737    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:55.677737    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:55.680460    2404 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 13:10:55.680460    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:55.680460    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:55.680460    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:55.680460    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:55 GMT
	I0318 13:10:55.680460    2404 round_trippers.go:580]     Audit-Id: 9814cc38-74d3-4255-8324-fe159f9842aa
	I0318 13:10:55.680460    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:55.680460    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:55.681408    2404 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-8bdmn","generateName":"kube-proxy-","namespace":"kube-system","uid":"5c266b8a-9665-4365-93c6-2b5f1699d3ef","resourceVersion":"1899","creationTimestamp":"2024-03-18T12:50:34Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"da714af0-88aa-4fc4-b73d-4de837f06114","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:50:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"da714af0-88aa-4fc4-b73d-4de837f06114\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5767 chars]
	I0318 13:10:55.881230    2404 request.go:629] Waited for 198.9733ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.130.156:8443/api/v1/nodes/multinode-894400-m02
	I0318 13:10:55.881442    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400-m02
	I0318 13:10:55.881442    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:55.881442    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:55.881442    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:55.884293    2404 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 13:10:55.884293    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:55.885170    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:55.885170    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:55 GMT
	I0318 13:10:55.885170    2404 round_trippers.go:580]     Audit-Id: ea12ee62-e05f-4e03-a0ce-94fe1273c42f
	I0318 13:10:55.885170    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:55.885170    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:55.885170    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:55.885367    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"714c6de4-85c9-4721-aea6-ad8f2be4e14c","resourceVersion":"1905","creationTimestamp":"2024-03-18T12:50:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T12_50_35_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:50:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4582 chars]
	I0318 13:10:55.886102    2404 pod_ready.go:97] node "multinode-894400-m02" hosting pod "kube-proxy-8bdmn" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-894400-m02" has status "Ready":"Unknown"
	I0318 13:10:55.886186    2404 pod_ready.go:81] duration metric: took 366.1743ms for pod "kube-proxy-8bdmn" in "kube-system" namespace to be "Ready" ...
	E0318 13:10:55.886186    2404 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-894400-m02" hosting pod "kube-proxy-8bdmn" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-894400-m02" has status "Ready":"Unknown"
	I0318 13:10:55.886186    2404 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-mc5tv" in "kube-system" namespace to be "Ready" ...
	I0318 13:10:56.083612    2404 request.go:629] Waited for 197.3426ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.130.156:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mc5tv
	I0318 13:10:56.083973    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mc5tv
	I0318 13:10:56.083973    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:56.083973    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:56.083973    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:56.087825    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:10:56.087825    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:56.087825    2404 round_trippers.go:580]     Audit-Id: 36f65f48-eb19-4188-88d0-c583c4085517
	I0318 13:10:56.087825    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:56.087825    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:56.088213    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:56.088213    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:56.088213    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:56 GMT
	I0318 13:10:56.088387    2404 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-mc5tv","generateName":"kube-proxy-","namespace":"kube-system","uid":"0afe25f8-cbd6-412b-8698-7b547d1d49ca","resourceVersion":"1799","creationTimestamp":"2024-03-18T12:47:41Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"da714af0-88aa-4fc4-b73d-4de837f06114","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:47:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"da714af0-88aa-4fc4-b73d-4de837f06114\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5743 chars]
	I0318 13:10:56.276477    2404 request.go:629] Waited for 187.7155ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:56.276732    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:56.276732    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:56.276732    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:56.276732    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:56.280198    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:10:56.280198    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:56.280198    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:56.280198    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:56.280198    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:56.280198    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:56 GMT
	I0318 13:10:56.280198    2404 round_trippers.go:580]     Audit-Id: 8f1f28af-ac20-43b5-9adc-8dacaa1b5a8e
	I0318 13:10:56.280198    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:56.280812    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1877","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 13:10:56.281520    2404 pod_ready.go:92] pod "kube-proxy-mc5tv" in "kube-system" namespace has status "Ready":"True"
	I0318 13:10:56.281520    2404 pod_ready.go:81] duration metric: took 395.2489ms for pod "kube-proxy-mc5tv" in "kube-system" namespace to be "Ready" ...
	I0318 13:10:56.281520    2404 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-894400" in "kube-system" namespace to be "Ready" ...
	I0318 13:10:56.479429    2404 request.go:629] Waited for 197.5811ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.130.156:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-894400
	I0318 13:10:56.479761    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-894400
	I0318 13:10:56.479761    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:56.479761    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:56.479761    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:56.484093    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:10:56.484093    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:56.484093    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:56 GMT
	I0318 13:10:56.484093    2404 round_trippers.go:580]     Audit-Id: e3ec9787-0852-4fcd-b441-fb6669f87e1a
	I0318 13:10:56.484093    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:56.484231    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:56.484231    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:56.484231    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:56.484271    2404 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-894400","namespace":"kube-system","uid":"f47703ce-5a82-466e-ac8e-ef6b8cc07e6c","resourceVersion":"1822","creationTimestamp":"2024-03-18T12:47:28Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"1c745e9b917877b1ff3c90ed02e9a79a","kubernetes.io/config.mirror":"1c745e9b917877b1ff3c90ed02e9a79a","kubernetes.io/config.seen":"2024-03-18T12:47:28.428225123Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:47:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 4909 chars]
	I0318 13:10:56.680621    2404 request.go:629] Waited for 195.4882ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:56.680621    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:10:56.680621    2404 round_trippers.go:469] Request Headers:
	I0318 13:10:56.680621    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:10:56.680621    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:10:56.684043    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:10:56.684043    2404 round_trippers.go:577] Response Headers:
	I0318 13:10:56.684043    2404 round_trippers.go:580]     Audit-Id: 1f703bdc-e807-4c16-8593-c2616586cddc
	I0318 13:10:56.684043    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:10:56.684043    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:10:56.684043    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:10:56.684747    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:10:56.684747    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:10:56 GMT
	I0318 13:10:56.684963    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1877","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 13:10:56.685557    2404 pod_ready.go:92] pod "kube-scheduler-multinode-894400" in "kube-system" namespace has status "Ready":"True"
	I0318 13:10:56.685557    2404 pod_ready.go:81] duration metric: took 404.0343ms for pod "kube-scheduler-multinode-894400" in "kube-system" namespace to be "Ready" ...
	I0318 13:10:56.685557    2404 pod_ready.go:38] duration metric: took 32.7242792s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 13:10:56.685692    2404 api_server.go:52] waiting for apiserver process to appear ...
	I0318 13:10:56.694984    2404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 13:10:56.719135    2404 command_runner.go:130] > fc4430c7fa20
	I0318 13:10:56.720281    2404 logs.go:276] 1 containers: [fc4430c7fa20]
	I0318 13:10:56.730238    2404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 13:10:56.757930    2404 command_runner.go:130] > 5f0887d1e691
	I0318 13:10:56.758866    2404 logs.go:276] 1 containers: [5f0887d1e691]
	I0318 13:10:56.766876    2404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 13:10:56.791210    2404 command_runner.go:130] > 3c3bc988c74c
	I0318 13:10:56.791210    2404 command_runner.go:130] > 693a64f7472f
	I0318 13:10:56.791284    2404 logs.go:276] 2 containers: [3c3bc988c74c 693a64f7472f]
	I0318 13:10:56.800400    2404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 13:10:56.824934    2404 command_runner.go:130] > 66ee8be9fada
	I0318 13:10:56.824934    2404 command_runner.go:130] > e4d42739ce0e
	I0318 13:10:56.824934    2404 logs.go:276] 2 containers: [66ee8be9fada e4d42739ce0e]
	I0318 13:10:56.837855    2404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 13:10:56.858407    2404 command_runner.go:130] > 163ccabc3882
	I0318 13:10:56.858407    2404 command_runner.go:130] > 9335855aab63
	I0318 13:10:56.858407    2404 logs.go:276] 2 containers: [163ccabc3882 9335855aab63]
	I0318 13:10:56.866409    2404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 13:10:56.888433    2404 command_runner.go:130] > 4ad6784a187d
	I0318 13:10:56.888433    2404 command_runner.go:130] > 7aa5cf4ec378
	I0318 13:10:56.888433    2404 logs.go:276] 2 containers: [4ad6784a187d 7aa5cf4ec378]
	I0318 13:10:56.896413    2404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 13:10:56.919614    2404 command_runner.go:130] > c8e5ec25e910
	I0318 13:10:56.919614    2404 command_runner.go:130] > c4d7018ad23a
	I0318 13:10:56.919784    2404 logs.go:276] 2 containers: [c8e5ec25e910 c4d7018ad23a]
	I0318 13:10:56.919784    2404 logs.go:123] Gathering logs for etcd [5f0887d1e691] ...
	I0318 13:10:56.919848    2404 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f0887d1e691"
	I0318 13:10:56.946988    2404 command_runner.go:130] ! {"level":"warn","ts":"2024-03-18T13:09:44.778754Z","caller":"embed/config.go:673","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I0318 13:10:56.947270    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:44.779618Z","caller":"etcdmain/etcd.go:73","msg":"Running: ","args":["etcd","--advertise-client-urls=https://172.30.130.156:2379","--cert-file=/var/lib/minikube/certs/etcd/server.crt","--client-cert-auth=true","--data-dir=/var/lib/minikube/etcd","--experimental-initial-corrupt-check=true","--experimental-watch-progress-notify-interval=5s","--initial-advertise-peer-urls=https://172.30.130.156:2380","--initial-cluster=multinode-894400=https://172.30.130.156:2380","--key-file=/var/lib/minikube/certs/etcd/server.key","--listen-client-urls=https://127.0.0.1:2379,https://172.30.130.156:2379","--listen-metrics-urls=http://127.0.0.1:2381","--listen-peer-urls=https://172.30.130.156:2380","--name=multinode-894400","--peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt","--peer-client-cert-auth=true","--peer-key-file=/var/lib/minikube/certs/etcd/peer.key","--peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","-
-proxy-refresh-interval=70000","--snapshot-count=10000","--trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt"]}
	I0318 13:10:56.947270    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:44.780287Z","caller":"etcdmain/etcd.go:116","msg":"server has been already initialized","data-dir":"/var/lib/minikube/etcd","dir-type":"member"}
	I0318 13:10:56.947378    2404 command_runner.go:130] ! {"level":"warn","ts":"2024-03-18T13:09:44.780316Z","caller":"embed/config.go:673","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I0318 13:10:56.947378    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:44.780326Z","caller":"embed/etcd.go:127","msg":"configuring peer listeners","listen-peer-urls":["https://172.30.130.156:2380"]}
	I0318 13:10:56.947460    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:44.780518Z","caller":"embed/etcd.go:495","msg":"starting with peer TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I0318 13:10:56.947460    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:44.782775Z","caller":"embed/etcd.go:135","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://172.30.130.156:2379"]}
	I0318 13:10:56.947563    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:44.785511Z","caller":"embed/etcd.go:309","msg":"starting an etcd server","etcd-version":"3.5.9","git-sha":"bdbbde998","go-version":"go1.19.9","go-os":"linux","go-arch":"amd64","max-cpu-set":2,"max-cpu-available":2,"member-initialized":true,"name":"multinode-894400","data-dir":"/var/lib/minikube/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/minikube/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://172.30.130.156:2380"],"listen-peer-urls":["https://172.30.130.156:2380"],"advertise-client-urls":["https://172.30.130.156:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.30.130.156:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"cors":["*"],"host-whitelist":["*"],"init
ial-cluster":"","initial-cluster-state":"new","initial-cluster-token":"","quota-backend-bytes":2147483648,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"initial-corrupt-check":true,"corrupt-check-time-interval":"0s","compact-check-time-enabled":false,"compact-check-time-interval":"1m0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","downgrade-check-interval":"5s"}
	I0318 13:10:56.947563    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:44.809621Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"22.951578ms"}
	I0318 13:10:56.947563    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:44.849189Z","caller":"etcdserver/server.go:530","msg":"No snapshot found. Recovering WAL from scratch!"}
	I0318 13:10:56.947563    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:44.872854Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"2db881e830cc2153","local-member-id":"c2557cd98fa8d31a","commit-index":1981}
	I0318 13:10:56.947563    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:44.87358Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c2557cd98fa8d31a switched to configuration voters=()"}
	I0318 13:10:56.947563    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:44.873736Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c2557cd98fa8d31a became follower at term 2"}
	I0318 13:10:56.947563    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:44.873929Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft c2557cd98fa8d31a [peers: [], term: 2, commit: 1981, applied: 0, lastindex: 1981, lastterm: 2]"}
	I0318 13:10:56.947563    2404 command_runner.go:130] ! {"level":"warn","ts":"2024-03-18T13:09:44.887865Z","caller":"auth/store.go:1238","msg":"simple token is not cryptographically signed"}
	I0318 13:10:56.947563    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:44.892732Z","caller":"mvcc/kvstore.go:323","msg":"restored last compact revision","meta-bucket-name":"meta","meta-bucket-name-key":"finishedCompactRev","restored-compact-revision":1376}
	I0318 13:10:56.947563    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:44.89955Z","caller":"mvcc/kvstore.go:393","msg":"kvstore restored","current-rev":1715}
	I0318 13:10:56.947563    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:44.914592Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	I0318 13:10:56.947563    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:44.926835Z","caller":"etcdserver/corrupt.go:95","msg":"starting initial corruption check","local-member-id":"c2557cd98fa8d31a","timeout":"7s"}
	I0318 13:10:56.947563    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:44.928545Z","caller":"etcdserver/corrupt.go:165","msg":"initial corruption checking passed; no corruption","local-member-id":"c2557cd98fa8d31a"}
	I0318 13:10:56.947563    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:44.930225Z","caller":"etcdserver/server.go:854","msg":"starting etcd server","local-member-id":"c2557cd98fa8d31a","local-server-version":"3.5.9","cluster-version":"to_be_decided"}
	I0318 13:10:56.947563    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:44.930859Z","caller":"etcdserver/server.go:754","msg":"starting initial election tick advance","election-ticks":10}
	I0318 13:10:56.947563    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:44.931762Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c2557cd98fa8d31a switched to configuration voters=(14003235890238378778)"}
	I0318 13:10:56.947563    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:44.932128Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"2db881e830cc2153","local-member-id":"c2557cd98fa8d31a","added-peer-id":"c2557cd98fa8d31a","added-peer-peer-urls":["https://172.30.129.141:2380"]}
	I0318 13:10:56.947563    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:44.933388Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"2db881e830cc2153","local-member-id":"c2557cd98fa8d31a","cluster-version":"3.5"}
	I0318 13:10:56.947563    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:44.933717Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	I0318 13:10:56.948095    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:44.946226Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	I0318 13:10:56.948149    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:44.947818Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	I0318 13:10:56.948149    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:44.948803Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	I0318 13:10:56.948287    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:44.954567Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I0318 13:10:56.948419    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:44.954988Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"c2557cd98fa8d31a","initial-advertise-peer-urls":["https://172.30.130.156:2380"],"listen-peer-urls":["https://172.30.130.156:2380"],"advertise-client-urls":["https://172.30.130.156:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.30.130.156:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	I0318 13:10:56.948515    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:44.955173Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	I0318 13:10:56.948515    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:44.954599Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"172.30.130.156:2380"}
	I0318 13:10:56.948577    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:44.956126Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"172.30.130.156:2380"}
	I0318 13:10:56.948577    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:46.775466Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c2557cd98fa8d31a is starting a new election at term 2"}
	I0318 13:10:56.948632    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:46.775581Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c2557cd98fa8d31a became pre-candidate at term 2"}
	I0318 13:10:56.948658    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:46.775704Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c2557cd98fa8d31a received MsgPreVoteResp from c2557cd98fa8d31a at term 2"}
	I0318 13:10:56.948688    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:46.775731Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c2557cd98fa8d31a became candidate at term 3"}
	I0318 13:10:56.948688    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:46.77574Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c2557cd98fa8d31a received MsgVoteResp from c2557cd98fa8d31a at term 3"}
	I0318 13:10:56.948688    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:46.775752Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c2557cd98fa8d31a became leader at term 3"}
	I0318 13:10:56.948688    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:46.775764Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: c2557cd98fa8d31a elected leader c2557cd98fa8d31a at term 3"}
	I0318 13:10:56.948688    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:46.782683Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"c2557cd98fa8d31a","local-member-attributes":"{Name:multinode-894400 ClientURLs:[https://172.30.130.156:2379]}","request-path":"/0/members/c2557cd98fa8d31a/attributes","cluster-id":"2db881e830cc2153","publish-timeout":"7s"}
	I0318 13:10:56.948688    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:46.78269Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	I0318 13:10:56.948688    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:46.782706Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	I0318 13:10:56.948688    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:46.783976Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.30.130.156:2379"}
	I0318 13:10:56.948688    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:46.783993Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	I0318 13:10:56.948688    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:46.788664Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	I0318 13:10:56.948688    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:46.788817Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
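The etcd lines above show a single-member raft cluster re-electing itself after restart: the member answers its own pre-vote at term 2, bumps to term 3, becomes candidate, wins its own vote, and becomes leader. A minimal sketch of that state progression (illustrative only; this models the transitions, not etcd's actual raft implementation):

```python
# Illustrative model of the single-member raft election in the etcd logs:
# pre-vote at the current term, candidacy at term+1, self-vote, leadership.
# No networking; a one-member cluster is its own quorum at every step.

class SingleMemberRaft:
    def __init__(self, member_id, term):
        self.member_id = member_id
        self.term = term
        self.state = "follower"
        self.log = []

    def trace(self, msg):
        self.log.append(msg)

    def elect(self):
        # Pre-vote phase: probe the (one-member) quorum without bumping the term.
        self.trace(f"{self.member_id} received MsgPreVoteResp at term {self.term}")
        # Pre-vote quorum reached, so bump the term and stand as candidate.
        self.term += 1
        self.state = "candidate"
        self.trace(f"{self.member_id} became candidate at term {self.term}")
        # The self-vote is immediately a majority, so leadership follows.
        self.trace(f"{self.member_id} received MsgVoteResp at term {self.term}")
        self.state = "leader"
        self.trace(f"{self.member_id} became leader at term {self.term}")

node = SingleMemberRaft("c2557cd98fa8d31a", term=2)
node.elect()
```

The term-3 election rather than a fresh term-1 start is what marks this as a node restart: the persisted term from the previous boot survives and the pre-vote prevents a disruptive term explosion.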
	I0318 13:10:56.955141    2404 logs.go:123] Gathering logs for kube-proxy [163ccabc3882] ...
	I0318 13:10:56.955141    2404 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 163ccabc3882"
	I0318 13:10:56.977436    2404 command_runner.go:130] ! I0318 13:09:50.786718       1 server_others.go:69] "Using iptables proxy"
	I0318 13:10:56.977584    2404 command_runner.go:130] ! I0318 13:09:50.833991       1 node.go:141] Successfully retrieved node IP: 172.30.130.156
	I0318 13:10:56.977584    2404 command_runner.go:130] ! I0318 13:09:50.913665       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0318 13:10:56.977584    2404 command_runner.go:130] ! I0318 13:09:50.913704       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0318 13:10:56.977584    2404 command_runner.go:130] ! I0318 13:09:50.924640       1 server_others.go:152] "Using iptables Proxier"
	I0318 13:10:56.977650    2404 command_runner.go:130] ! I0318 13:09:50.925588       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0318 13:10:56.977650    2404 command_runner.go:130] ! I0318 13:09:50.926722       1 server.go:846] "Version info" version="v1.28.4"
	I0318 13:10:56.977650    2404 command_runner.go:130] ! I0318 13:09:50.926981       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 13:10:56.977650    2404 command_runner.go:130] ! I0318 13:09:50.938764       1 config.go:188] "Starting service config controller"
	I0318 13:10:56.977650    2404 command_runner.go:130] ! I0318 13:09:50.949206       1 config.go:97] "Starting endpoint slice config controller"
	I0318 13:10:56.977650    2404 command_runner.go:130] ! I0318 13:09:50.949220       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0318 13:10:56.977650    2404 command_runner.go:130] ! I0318 13:09:50.953299       1 config.go:315] "Starting node config controller"
	I0318 13:10:56.977650    2404 command_runner.go:130] ! I0318 13:09:50.979020       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0318 13:10:56.977650    2404 command_runner.go:130] ! I0318 13:09:50.990249       1 shared_informer.go:318] Caches are synced for node config
	I0318 13:10:56.977650    2404 command_runner.go:130] ! I0318 13:09:50.958488       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0318 13:10:56.977650    2404 command_runner.go:130] ! I0318 13:09:50.996356       1 shared_informer.go:318] Caches are synced for service config
	I0318 13:10:56.977650    2404 command_runner.go:130] ! I0318 13:09:51.051947       1 shared_informer.go:318] Caches are synced for endpoint slice config
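The kube-proxy lines above show its three config controllers (service, endpoint slice, node) each starting, waiting for its informer cache to sync, then reporting "Caches are synced". A hedged sketch of that wait-until-synced startup pattern, using illustrative names rather than client-go's actual API:

```python
# Sketch of the "waiting for caches to sync" startup gate seen in the
# kube-proxy logs: each controller blocks until its watch cache reports an
# initial full sync before it starts programming rules. Class and method
# names here are assumptions for illustration, not client-go's API.
import threading

class InformerCache:
    def __init__(self, name):
        self.name = name
        self._synced = threading.Event()

    def mark_synced(self):
        # In the real proxy, completing the initial LIST from the apiserver
        # flips this; here we flip it directly.
        self._synced.set()

    def wait_for_cache_sync(self, timeout=5.0):
        # Returns True once synced, False if the timeout elapses first.
        return self._synced.wait(timeout)

caches = [InformerCache(n) for n in ("service config",
                                     "endpoint slice config",
                                     "node config")]
for c in caches:
    c.mark_synced()

all_synced = all(c.wait_for_cache_sync() for c in caches)
```

Note how the log's sync completions arrive out of registration order (node config syncs before service config): each controller gates only on its own cache, not on the others.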
	I0318 13:10:56.980209    2404 logs.go:123] Gathering logs for kindnet [c4d7018ad23a] ...
	I0318 13:10:56.980209    2404 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4d7018ad23a"
	I0318 13:10:57.018151    2404 command_runner.go:130] ! I0318 12:56:20.031595       1 main.go:227] handling current node
	I0318 13:10:57.018151    2404 command_runner.go:130] ! I0318 12:56:20.031610       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:10:57.018151    2404 command_runner.go:130] ! I0318 12:56:20.031618       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:10:57.018151    2404 command_runner.go:130] ! I0318 12:56:20.031800       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:10:57.018151    2404 command_runner.go:130] ! I0318 12:56:20.031837       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:10:57.018151    2404 command_runner.go:130] ! I0318 12:56:30.038705       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:10:57.018151    2404 command_runner.go:130] ! I0318 12:56:30.038812       1 main.go:227] handling current node
	I0318 13:10:57.018151    2404 command_runner.go:130] ! I0318 12:56:30.038826       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:10:57.018151    2404 command_runner.go:130] ! I0318 12:56:30.038833       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:10:57.018151    2404 command_runner.go:130] ! I0318 12:56:30.039027       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:10:57.018151    2404 command_runner.go:130] ! I0318 12:56:30.039347       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:10:57.018151    2404 command_runner.go:130] ! I0318 12:56:40.051950       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:10:57.018151    2404 command_runner.go:130] ! I0318 12:56:40.052053       1 main.go:227] handling current node
	I0318 13:10:57.018151    2404 command_runner.go:130] ! I0318 12:56:40.052086       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:10:57.018151    2404 command_runner.go:130] ! I0318 12:56:40.052204       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:10:57.018151    2404 command_runner.go:130] ! I0318 12:56:40.052568       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:10:57.018151    2404 command_runner.go:130] ! I0318 12:56:40.052681       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:10:57.018151    2404 command_runner.go:130] ! I0318 12:56:50.074059       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:10:57.018151    2404 command_runner.go:130] ! I0318 12:56:50.074164       1 main.go:227] handling current node
	I0318 13:10:57.018151    2404 command_runner.go:130] ! I0318 12:56:50.074183       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:10:57.018151    2404 command_runner.go:130] ! I0318 12:56:50.074192       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:10:57.018151    2404 command_runner.go:130] ! I0318 12:56:50.075009       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:10:57.018151    2404 command_runner.go:130] ! I0318 12:56:50.075306       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:10:57.018151    2404 command_runner.go:130] ! I0318 12:57:00.089286       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:10:57.018151    2404 command_runner.go:130] ! I0318 12:57:00.089382       1 main.go:227] handling current node
	I0318 13:10:57.018151    2404 command_runner.go:130] ! I0318 12:57:00.089397       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:10:57.018151    2404 command_runner.go:130] ! I0318 12:57:00.089405       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:10:57.018151    2404 command_runner.go:130] ! I0318 12:57:00.089918       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:10:57.018151    2404 command_runner.go:130] ! I0318 12:57:00.089934       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:10:57.018151    2404 command_runner.go:130] ! I0318 12:57:10.103457       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:10:57.018151    2404 command_runner.go:130] ! I0318 12:57:10.103575       1 main.go:227] handling current node
	I0318 13:10:57.018151    2404 command_runner.go:130] ! I0318 12:57:10.103607       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:10:57.018151    2404 command_runner.go:130] ! I0318 12:57:10.103704       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:10:57.018151    2404 command_runner.go:130] ! I0318 12:57:10.104106       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:10:57.018151    2404 command_runner.go:130] ! I0318 12:57:10.104144       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:10:57.018151    2404 command_runner.go:130] ! I0318 12:57:20.111225       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:10:57.018151    2404 command_runner.go:130] ! I0318 12:57:20.111346       1 main.go:227] handling current node
	I0318 13:10:57.018151    2404 command_runner.go:130] ! I0318 12:57:20.111360       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:10:57.018151    2404 command_runner.go:130] ! I0318 12:57:20.111367       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:10:57.018151    2404 command_runner.go:130] ! I0318 12:57:20.111695       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:10:57.018151    2404 command_runner.go:130] ! I0318 12:57:20.111775       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:10:57.018151    2404 command_runner.go:130] ! I0318 12:57:30.124283       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:10:57.018151    2404 command_runner.go:130] ! I0318 12:57:30.124477       1 main.go:227] handling current node
	I0318 13:10:57.018151    2404 command_runner.go:130] ! I0318 12:57:30.124495       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:10:57.018151    2404 command_runner.go:130] ! I0318 12:57:30.124505       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:10:57.018151    2404 command_runner.go:130] ! I0318 12:57:30.125279       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:10:57.019230    2404 command_runner.go:130] ! I0318 12:57:30.125393       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:10:57.019230    2404 command_runner.go:130] ! I0318 12:57:40.137523       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:10:57.019230    2404 command_runner.go:130] ! I0318 12:57:40.137766       1 main.go:227] handling current node
	I0318 13:10:57.019230    2404 command_runner.go:130] ! I0318 12:57:40.137807       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:10:57.019230    2404 command_runner.go:130] ! I0318 12:57:40.137833       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:10:57.019230    2404 command_runner.go:130] ! I0318 12:57:40.137998       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:10:57.019230    2404 command_runner.go:130] ! I0318 12:57:40.138087       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:10:57.019230    2404 command_runner.go:130] ! I0318 12:57:50.149548       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:10:57.019230    2404 command_runner.go:130] ! I0318 12:57:50.149697       1 main.go:227] handling current node
	I0318 13:10:57.019230    2404 command_runner.go:130] ! I0318 12:57:50.149712       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:10:57.019230    2404 command_runner.go:130] ! I0318 12:57:50.149720       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:10:57.019230    2404 command_runner.go:130] ! I0318 12:57:50.150251       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:10:57.019230    2404 command_runner.go:130] ! I0318 12:57:50.150344       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:10:57.019230    2404 command_runner.go:130] ! I0318 12:58:00.159094       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:10:57.019230    2404 command_runner.go:130] ! I0318 12:58:00.159284       1 main.go:227] handling current node
	I0318 13:10:57.019230    2404 command_runner.go:130] ! I0318 12:58:00.159340       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:10:57.019230    2404 command_runner.go:130] ! I0318 12:58:00.159700       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:10:57.019230    2404 command_runner.go:130] ! I0318 12:58:00.160303       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:10:57.019230    2404 command_runner.go:130] ! I0318 12:58:00.160346       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:10:57.019230    2404 command_runner.go:130] ! I0318 12:58:10.177603       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:10:57.019230    2404 command_runner.go:130] ! I0318 12:58:10.177780       1 main.go:227] handling current node
	I0318 13:10:57.019230    2404 command_runner.go:130] ! I0318 12:58:10.178122       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:10:57.019230    2404 command_runner.go:130] ! I0318 12:58:10.178166       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:10:57.019230    2404 command_runner.go:130] ! I0318 12:58:10.178455       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:10:57.019230    2404 command_runner.go:130] ! I0318 12:58:10.178497       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:10:57.019230    2404 command_runner.go:130] ! I0318 12:58:20.196110       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:10:57.019230    2404 command_runner.go:130] ! I0318 12:58:20.196144       1 main.go:227] handling current node
	I0318 13:10:57.019230    2404 command_runner.go:130] ! I0318 12:58:20.196236       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:10:57.019230    2404 command_runner.go:130] ! I0318 12:58:20.196542       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:10:57.019230    2404 command_runner.go:130] ! I0318 12:58:20.196774       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:10:57.019230    2404 command_runner.go:130] ! I0318 12:58:20.196867       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:10:57.019230    2404 command_runner.go:130] ! I0318 12:58:30.204485       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:10:57.019230    2404 command_runner.go:130] ! I0318 12:58:30.204515       1 main.go:227] handling current node
	I0318 13:10:57.019230    2404 command_runner.go:130] ! I0318 12:58:30.204528       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:10:57.019230    2404 command_runner.go:130] ! I0318 12:58:30.204556       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:10:57.019230    2404 command_runner.go:130] ! I0318 12:58:30.204856       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:10:57.019230    2404 command_runner.go:130] ! I0318 12:58:30.205022       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:10:57.019230    2404 command_runner.go:130] ! I0318 12:58:40.221076       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:10:57.019230    2404 command_runner.go:130] ! I0318 12:58:40.221184       1 main.go:227] handling current node
	I0318 13:10:57.019230    2404 command_runner.go:130] ! I0318 12:58:40.221201       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:10:57.019230    2404 command_runner.go:130] ! I0318 12:58:40.221210       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:10:57.019230    2404 command_runner.go:130] ! I0318 12:58:40.221741       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:10:57.019230    2404 command_runner.go:130] ! I0318 12:58:40.221769       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:10:57.019230    2404 command_runner.go:130] ! I0318 12:58:50.229210       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:10:57.019230    2404 command_runner.go:130] ! I0318 12:58:50.229302       1 main.go:227] handling current node
	I0318 13:10:57.019230    2404 command_runner.go:130] ! I0318 12:58:50.229317       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:10:57.019230    2404 command_runner.go:130] ! I0318 12:58:50.229324       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:10:57.019230    2404 command_runner.go:130] ! I0318 12:58:50.229703       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:10:57.019230    2404 command_runner.go:130] ! I0318 12:58:50.229807       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:10:57.019230    2404 command_runner.go:130] ! I0318 12:59:00.244905       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:10:57.019230    2404 command_runner.go:130] ! I0318 12:59:00.244992       1 main.go:227] handling current node
	I0318 13:10:57.019230    2404 command_runner.go:130] ! I0318 12:59:00.245007       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:10:57.019230    2404 command_runner.go:130] ! I0318 12:59:00.245033       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:10:57.019230    2404 command_runner.go:130] ! I0318 12:59:00.245480       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:10:57.019230    2404 command_runner.go:130] ! I0318 12:59:00.245600       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:10:57.019230    2404 command_runner.go:130] ! I0318 12:59:10.253460       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:10:57.019230    2404 command_runner.go:130] ! I0318 12:59:10.253563       1 main.go:227] handling current node
	I0318 13:10:57.019230    2404 command_runner.go:130] ! I0318 12:59:10.253579       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:10:57.019230    2404 command_runner.go:130] ! I0318 12:59:10.253605       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:10:57.019230    2404 command_runner.go:130] ! I0318 12:59:10.254199       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:10:57.020163    2404 command_runner.go:130] ! I0318 12:59:10.254310       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:10:57.020163    2404 command_runner.go:130] ! I0318 12:59:20.270774       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:10:57.020163    2404 command_runner.go:130] ! I0318 12:59:20.270870       1 main.go:227] handling current node
	I0318 13:10:57.020163    2404 command_runner.go:130] ! I0318 12:59:20.270886       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:10:57.020163    2404 command_runner.go:130] ! I0318 12:59:20.270894       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:10:57.020163    2404 command_runner.go:130] ! I0318 12:59:20.271275       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:10:57.020163    2404 command_runner.go:130] ! I0318 12:59:20.271367       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:10:57.020163    2404 command_runner.go:130] ! I0318 12:59:30.281784       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:10:57.020163    2404 command_runner.go:130] ! I0318 12:59:30.281809       1 main.go:227] handling current node
	I0318 13:10:57.020163    2404 command_runner.go:130] ! I0318 12:59:30.281819       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:10:57.020163    2404 command_runner.go:130] ! I0318 12:59:30.281824       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:10:57.020163    2404 command_runner.go:130] ! I0318 12:59:30.282361       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:10:57.020163    2404 command_runner.go:130] ! I0318 12:59:30.282392       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:10:57.020163    2404 command_runner.go:130] ! I0318 12:59:40.291176       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:10:57.020163    2404 command_runner.go:130] ! I0318 12:59:40.291304       1 main.go:227] handling current node
	I0318 13:10:57.020163    2404 command_runner.go:130] ! I0318 12:59:40.291321       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:10:57.020163    2404 command_runner.go:130] ! I0318 12:59:40.291328       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:10:57.020163    2404 command_runner.go:130] ! I0318 12:59:40.291827       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:10:57.020163    2404 command_runner.go:130] ! I0318 12:59:40.291857       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:10:57.020163    2404 command_runner.go:130] ! I0318 12:59:50.303374       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:10:57.020163    2404 command_runner.go:130] ! I0318 12:59:50.303454       1 main.go:227] handling current node
	I0318 13:10:57.020163    2404 command_runner.go:130] ! I0318 12:59:50.303468       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:10:57.020163    2404 command_runner.go:130] ! I0318 12:59:50.303476       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:10:57.020163    2404 command_runner.go:130] ! I0318 12:59:50.303974       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:10:57.020163    2404 command_runner.go:130] ! I0318 12:59:50.304002       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:10:57.020163    2404 command_runner.go:130] ! I0318 13:00:00.311317       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:10:57.020163    2404 command_runner.go:130] ! I0318 13:00:00.311423       1 main.go:227] handling current node
	I0318 13:10:57.020163    2404 command_runner.go:130] ! I0318 13:00:00.311441       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:10:57.020163    2404 command_runner.go:130] ! I0318 13:00:00.311449       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:10:57.020163    2404 command_runner.go:130] ! I0318 13:00:00.312039       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:10:57.020163    2404 command_runner.go:130] ! I0318 13:00:00.312135       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:10:57.020163    2404 command_runner.go:130] ! I0318 13:00:10.324823       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:10:57.020163    2404 command_runner.go:130] ! I0318 13:00:10.324902       1 main.go:227] handling current node
	I0318 13:10:57.020163    2404 command_runner.go:130] ! I0318 13:00:10.324915       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:10:57.020163    2404 command_runner.go:130] ! I0318 13:00:10.324926       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:10:57.020163    2404 command_runner.go:130] ! I0318 13:00:10.325084       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:10:57.020163    2404 command_runner.go:130] ! I0318 13:00:10.325108       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:10:57.020163    2404 command_runner.go:130] ! I0318 13:00:20.338195       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:10:57.020163    2404 command_runner.go:130] ! I0318 13:00:20.338297       1 main.go:227] handling current node
	I0318 13:10:57.020163    2404 command_runner.go:130] ! I0318 13:00:20.338312       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:10:57.020163    2404 command_runner.go:130] ! I0318 13:00:20.338320       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:10:57.020163    2404 command_runner.go:130] ! I0318 13:00:20.338525       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:10:57.020163    2404 command_runner.go:130] ! I0318 13:00:20.338601       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:10:57.020163    2404 command_runner.go:130] ! I0318 13:00:30.345095       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:10:57.020163    2404 command_runner.go:130] ! I0318 13:00:30.345184       1 main.go:227] handling current node
	I0318 13:10:57.020163    2404 command_runner.go:130] ! I0318 13:00:30.345198       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:10:57.020163    2404 command_runner.go:130] ! I0318 13:00:30.345205       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:10:57.020163    2404 command_runner.go:130] ! I0318 13:00:30.346074       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:10:57.020163    2404 command_runner.go:130] ! I0318 13:00:30.346194       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:10:57.020163    2404 command_runner.go:130] ! I0318 13:00:40.357007       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:10:57.020163    2404 command_runner.go:130] ! I0318 13:00:40.357386       1 main.go:227] handling current node
	I0318 13:10:57.020163    2404 command_runner.go:130] ! I0318 13:00:40.357485       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:10:57.020163    2404 command_runner.go:130] ! I0318 13:00:40.357513       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:10:57.020163    2404 command_runner.go:130] ! I0318 13:00:40.357737       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:10:57.020163    2404 command_runner.go:130] ! I0318 13:00:40.357766       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:10:57.020163    2404 command_runner.go:130] ! I0318 13:00:50.372182       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:10:57.020163    2404 command_runner.go:130] ! I0318 13:00:50.372221       1 main.go:227] handling current node
	I0318 13:10:57.020163    2404 command_runner.go:130] ! I0318 13:00:50.372235       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:10:57.020163    2404 command_runner.go:130] ! I0318 13:00:50.372242       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:10:57.020163    2404 command_runner.go:130] ! I0318 13:00:50.372608       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:10:57.020163    2404 command_runner.go:130] ! I0318 13:00:50.372772       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:10:57.020163    2404 command_runner.go:130] ! I0318 13:01:00.386990       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:10:57.020163    2404 command_runner.go:130] ! I0318 13:01:00.387036       1 main.go:227] handling current node
	I0318 13:10:57.020163    2404 command_runner.go:130] ! I0318 13:01:00.387050       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:10:57.020163    2404 command_runner.go:130] ! I0318 13:01:00.387058       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:10:57.020163    2404 command_runner.go:130] ! I0318 13:01:00.387182       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:10:57.020163    2404 command_runner.go:130] ! I0318 13:01:00.387191       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:10:57.020163    2404 command_runner.go:130] ! I0318 13:01:10.396889       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:10:57.021159    2404 command_runner.go:130] ! I0318 13:01:10.396930       1 main.go:227] handling current node
	I0318 13:10:57.021159    2404 command_runner.go:130] ! I0318 13:01:10.396942       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:10:57.021159    2404 command_runner.go:130] ! I0318 13:01:10.396948       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:10:57.021159    2404 command_runner.go:130] ! I0318 13:01:10.397250       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:10:57.021159    2404 command_runner.go:130] ! I0318 13:01:10.397343       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:10:57.021159    2404 command_runner.go:130] ! I0318 13:01:20.413272       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:10:57.021159    2404 command_runner.go:130] ! I0318 13:01:20.413371       1 main.go:227] handling current node
	I0318 13:10:57.021159    2404 command_runner.go:130] ! I0318 13:01:20.413386       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:10:57.021159    2404 command_runner.go:130] ! I0318 13:01:20.413395       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:10:57.021159    2404 command_runner.go:130] ! I0318 13:01:20.413968       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:10:57.021159    2404 command_runner.go:130] ! I0318 13:01:20.413999       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:10:57.021159    2404 command_runner.go:130] ! I0318 13:01:30.429160       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:10:57.021159    2404 command_runner.go:130] ! I0318 13:01:30.429478       1 main.go:227] handling current node
	I0318 13:10:57.021159    2404 command_runner.go:130] ! I0318 13:01:30.429549       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:10:57.021159    2404 command_runner.go:130] ! I0318 13:01:30.429678       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:10:57.021159    2404 command_runner.go:130] ! I0318 13:01:30.429960       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:10:57.021159    2404 command_runner.go:130] ! I0318 13:01:30.430034       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:10:57.021159    2404 command_runner.go:130] ! I0318 13:01:40.436733       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:10:57.021159    2404 command_runner.go:130] ! I0318 13:01:40.436839       1 main.go:227] handling current node
	I0318 13:10:57.021159    2404 command_runner.go:130] ! I0318 13:01:40.436901       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:10:57.021159    2404 command_runner.go:130] ! I0318 13:01:40.436930       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:10:57.021159    2404 command_runner.go:130] ! I0318 13:01:40.437399       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:10:57.021159    2404 command_runner.go:130] ! I0318 13:01:40.437431       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:10:57.021159    2404 command_runner.go:130] ! I0318 13:01:50.451622       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:10:57.021159    2404 command_runner.go:130] ! I0318 13:01:50.451802       1 main.go:227] handling current node
	I0318 13:10:57.021159    2404 command_runner.go:130] ! I0318 13:01:50.451849       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:10:57.021159    2404 command_runner.go:130] ! I0318 13:01:50.451860       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:10:57.021159    2404 command_runner.go:130] ! I0318 13:01:50.452021       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:10:57.021159    2404 command_runner.go:130] ! I0318 13:01:50.452171       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:10:57.021159    2404 command_runner.go:130] ! I0318 13:02:00.460452       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:10:57.021159    2404 command_runner.go:130] ! I0318 13:02:00.460548       1 main.go:227] handling current node
	I0318 13:10:57.021159    2404 command_runner.go:130] ! I0318 13:02:00.460563       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:10:57.021159    2404 command_runner.go:130] ! I0318 13:02:00.460571       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:10:57.021159    2404 command_runner.go:130] ! I0318 13:02:00.461181       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:10:57.021159    2404 command_runner.go:130] ! I0318 13:02:00.461333       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:10:57.021159    2404 command_runner.go:130] ! I0318 13:02:10.474274       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:10:57.021159    2404 command_runner.go:130] ! I0318 13:02:10.474396       1 main.go:227] handling current node
	I0318 13:10:57.021159    2404 command_runner.go:130] ! I0318 13:02:10.474427       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:10:57.021159    2404 command_runner.go:130] ! I0318 13:02:10.474436       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:10:57.021159    2404 command_runner.go:130] ! I0318 13:02:10.475019       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:10:57.021159    2404 command_runner.go:130] ! I0318 13:02:10.475159       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:10:57.021159    2404 command_runner.go:130] ! I0318 13:02:20.489442       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:10:57.021159    2404 command_runner.go:130] ! I0318 13:02:20.489616       1 main.go:227] handling current node
	I0318 13:10:57.021159    2404 command_runner.go:130] ! I0318 13:02:20.489699       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:10:57.021159    2404 command_runner.go:130] ! I0318 13:02:20.489752       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:10:57.021159    2404 command_runner.go:130] ! I0318 13:02:20.490046       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:10:57.021159    2404 command_runner.go:130] ! I0318 13:02:20.490082       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:10:57.021159    2404 command_runner.go:130] ! I0318 13:02:30.497474       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:10:57.021159    2404 command_runner.go:130] ! I0318 13:02:30.497574       1 main.go:227] handling current node
	I0318 13:10:57.021159    2404 command_runner.go:130] ! I0318 13:02:30.497589       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:10:57.021159    2404 command_runner.go:130] ! I0318 13:02:30.497597       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:10:57.021159    2404 command_runner.go:130] ! I0318 13:02:30.498279       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:10:57.021159    2404 command_runner.go:130] ! I0318 13:02:30.498361       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:10:57.021159    2404 command_runner.go:130] ! I0318 13:02:40.512026       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:10:57.021159    2404 command_runner.go:130] ! I0318 13:02:40.512345       1 main.go:227] handling current node
	I0318 13:10:57.021159    2404 command_runner.go:130] ! I0318 13:02:40.512385       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:10:57.021159    2404 command_runner.go:130] ! I0318 13:02:40.512477       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:10:57.021159    2404 command_runner.go:130] ! I0318 13:02:40.512786       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:10:57.021159    2404 command_runner.go:130] ! I0318 13:02:40.512873       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:10:57.021159    2404 command_runner.go:130] ! I0318 13:02:50.520110       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:10:57.021159    2404 command_runner.go:130] ! I0318 13:02:50.520239       1 main.go:227] handling current node
	I0318 13:10:57.021159    2404 command_runner.go:130] ! I0318 13:02:50.520254       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:10:57.021159    2404 command_runner.go:130] ! I0318 13:02:50.520263       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:10:57.021159    2404 command_runner.go:130] ! I0318 13:02:50.520784       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:10:57.021159    2404 command_runner.go:130] ! I0318 13:02:50.520861       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:10:57.021159    2404 command_runner.go:130] ! I0318 13:03:00.531866       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:10:57.021159    2404 command_runner.go:130] ! I0318 13:03:00.531958       1 main.go:227] handling current node
	I0318 13:10:57.021159    2404 command_runner.go:130] ! I0318 13:03:00.531972       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:10:57.022148    2404 command_runner.go:130] ! I0318 13:03:00.531979       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:10:57.022148    2404 command_runner.go:130] ! I0318 13:03:00.532211       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:10:57.022148    2404 command_runner.go:130] ! I0318 13:03:00.532293       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:10:57.022148    2404 command_runner.go:130] ! I0318 13:03:10.543869       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:10:57.022148    2404 command_runner.go:130] ! I0318 13:03:10.543913       1 main.go:227] handling current node
	I0318 13:10:57.022148    2404 command_runner.go:130] ! I0318 13:03:10.543926       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:10:57.022148    2404 command_runner.go:130] ! I0318 13:03:10.543933       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:10:57.022148    2404 command_runner.go:130] ! I0318 13:03:10.544294       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:10:57.022148    2404 command_runner.go:130] ! I0318 13:03:10.544430       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:10:57.022148    2404 command_runner.go:130] ! I0318 13:03:20.558742       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:10:57.022148    2404 command_runner.go:130] ! I0318 13:03:20.558782       1 main.go:227] handling current node
	I0318 13:10:57.022148    2404 command_runner.go:130] ! I0318 13:03:20.558795       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:10:57.022148    2404 command_runner.go:130] ! I0318 13:03:20.558802       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:10:57.022148    2404 command_runner.go:130] ! I0318 13:03:20.558992       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:10:57.022148    2404 command_runner.go:130] ! I0318 13:03:20.559009       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:10:57.022148    2404 command_runner.go:130] ! I0318 13:03:30.568771       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:10:57.022148    2404 command_runner.go:130] ! I0318 13:03:30.568872       1 main.go:227] handling current node
	I0318 13:10:57.022148    2404 command_runner.go:130] ! I0318 13:03:30.568905       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:10:57.022148    2404 command_runner.go:130] ! I0318 13:03:30.568996       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:10:57.022148    2404 command_runner.go:130] ! I0318 13:03:30.569367       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:10:57.022148    2404 command_runner.go:130] ! I0318 13:03:30.569450       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:10:57.022148    2404 command_runner.go:130] ! I0318 13:03:40.587554       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:10:57.022148    2404 command_runner.go:130] ! I0318 13:03:40.587674       1 main.go:227] handling current node
	I0318 13:10:57.022148    2404 command_runner.go:130] ! I0318 13:03:40.588337       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:10:57.022148    2404 command_runner.go:130] ! I0318 13:03:40.588356       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:10:57.022148    2404 command_runner.go:130] ! I0318 13:03:40.588758       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:10:57.022148    2404 command_runner.go:130] ! I0318 13:03:40.588836       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:10:57.022148    2404 command_runner.go:130] ! I0318 13:03:50.596331       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:10:57.022148    2404 command_runner.go:130] ! I0318 13:03:50.596438       1 main.go:227] handling current node
	I0318 13:10:57.022148    2404 command_runner.go:130] ! I0318 13:03:50.596453       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:10:57.022148    2404 command_runner.go:130] ! I0318 13:03:50.596462       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:10:57.022148    2404 command_runner.go:130] ! I0318 13:03:50.596942       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:10:57.022148    2404 command_runner.go:130] ! I0318 13:03:50.597079       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:10:57.022148    2404 command_runner.go:130] ! I0318 13:04:00.611242       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:10:57.022148    2404 command_runner.go:130] ! I0318 13:04:00.611383       1 main.go:227] handling current node
	I0318 13:10:57.022148    2404 command_runner.go:130] ! I0318 13:04:00.611397       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:10:57.022148    2404 command_runner.go:130] ! I0318 13:04:00.611405       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:10:57.022148    2404 command_runner.go:130] ! I0318 13:04:00.611541       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:10:57.022148    2404 command_runner.go:130] ! I0318 13:04:00.611572       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:10:57.022148    2404 command_runner.go:130] ! I0318 13:04:10.624814       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:10:57.022148    2404 command_runner.go:130] ! I0318 13:04:10.624904       1 main.go:227] handling current node
	I0318 13:10:57.022148    2404 command_runner.go:130] ! I0318 13:04:10.624920       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:10:57.022148    2404 command_runner.go:130] ! I0318 13:04:10.624927       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:10:57.022148    2404 command_runner.go:130] ! I0318 13:04:10.625504       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:10:57.022148    2404 command_runner.go:130] ! I0318 13:04:10.625547       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:10:57.022148    2404 command_runner.go:130] ! I0318 13:04:20.640319       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:10:57.022148    2404 command_runner.go:130] ! I0318 13:04:20.640364       1 main.go:227] handling current node
	I0318 13:10:57.022148    2404 command_runner.go:130] ! I0318 13:04:20.640379       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:10:57.022148    2404 command_runner.go:130] ! I0318 13:04:20.640386       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:10:57.022148    2404 command_runner.go:130] ! I0318 13:04:20.640865       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:10:57.022148    2404 command_runner.go:130] ! I0318 13:04:20.640901       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:10:57.022148    2404 command_runner.go:130] ! I0318 13:04:30.648021       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:10:57.022148    2404 command_runner.go:130] ! I0318 13:04:30.648134       1 main.go:227] handling current node
	I0318 13:10:57.022148    2404 command_runner.go:130] ! I0318 13:04:30.648148       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:10:57.022148    2404 command_runner.go:130] ! I0318 13:04:30.648156       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:10:57.022148    2404 command_runner.go:130] ! I0318 13:04:30.648313       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:10:57.022148    2404 command_runner.go:130] ! I0318 13:04:30.648344       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:10:57.022148    2404 command_runner.go:130] ! I0318 13:04:40.663577       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:10:57.022148    2404 command_runner.go:130] ! I0318 13:04:40.663749       1 main.go:227] handling current node
	I0318 13:10:57.022148    2404 command_runner.go:130] ! I0318 13:04:40.663765       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:10:57.022148    2404 command_runner.go:130] ! I0318 13:04:40.663774       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:10:57.022148    2404 command_runner.go:130] ! I0318 13:04:40.663896       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:10:57.022148    2404 command_runner.go:130] ! I0318 13:04:40.663929       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:10:57.022148    2404 command_runner.go:130] ! I0318 13:04:50.669717       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:10:57.023146    2404 command_runner.go:130] ! I0318 13:04:50.669791       1 main.go:227] handling current node
	I0318 13:10:57.023146    2404 command_runner.go:130] ! I0318 13:04:50.669805       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:10:57.023146    2404 command_runner.go:130] ! I0318 13:04:50.669812       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:10:57.023146    2404 command_runner.go:130] ! I0318 13:04:50.670128       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:10:57.023146    2404 command_runner.go:130] ! I0318 13:04:50.670230       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:10:57.023146    2404 command_runner.go:130] ! I0318 13:05:00.686596       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:10:57.023146    2404 command_runner.go:130] ! I0318 13:05:00.686809       1 main.go:227] handling current node
	I0318 13:10:57.023146    2404 command_runner.go:130] ! I0318 13:05:00.686942       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:10:57.023146    2404 command_runner.go:130] ! I0318 13:05:00.687116       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:10:57.023146    2404 command_runner.go:130] ! I0318 13:05:00.687370       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:10:57.023146    2404 command_runner.go:130] ! I0318 13:05:00.687441       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:10:57.023146    2404 command_runner.go:130] ! I0318 13:05:10.704297       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:10:57.023146    2404 command_runner.go:130] ! I0318 13:05:10.704404       1 main.go:227] handling current node
	I0318 13:10:57.023146    2404 command_runner.go:130] ! I0318 13:05:10.704426       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:10:57.023146    2404 command_runner.go:130] ! I0318 13:05:10.704555       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:10:57.023146    2404 command_runner.go:130] ! I0318 13:05:10.704810       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:10:57.023146    2404 command_runner.go:130] ! I0318 13:05:10.704878       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:10:57.023146    2404 command_runner.go:130] ! I0318 13:05:20.722958       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:10:57.023146    2404 command_runner.go:130] ! I0318 13:05:20.723127       1 main.go:227] handling current node
	I0318 13:10:57.023146    2404 command_runner.go:130] ! I0318 13:05:20.723145       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:10:57.023146    2404 command_runner.go:130] ! I0318 13:05:20.723159       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:10:57.023146    2404 command_runner.go:130] ! I0318 13:05:30.731764       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:10:57.023146    2404 command_runner.go:130] ! I0318 13:05:30.731841       1 main.go:227] handling current node
	I0318 13:10:57.023146    2404 command_runner.go:130] ! I0318 13:05:30.731854       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:10:57.023146    2404 command_runner.go:130] ! I0318 13:05:30.731861       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:10:57.023146    2404 command_runner.go:130] ! I0318 13:05:30.732029       1 main.go:223] Handling node with IPs: map[172.30.137.140:{}]
	I0318 13:10:57.023146    2404 command_runner.go:130] ! I0318 13:05:30.732163       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.3.0/24] 
	I0318 13:10:57.023146    2404 command_runner.go:130] ! I0318 13:05:30.732544       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 172.30.137.140 Flags: [] Table: 0} 
	I0318 13:10:57.023146    2404 command_runner.go:130] ! I0318 13:05:40.739849       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:10:57.023146    2404 command_runner.go:130] ! I0318 13:05:40.739939       1 main.go:227] handling current node
	I0318 13:10:57.023146    2404 command_runner.go:130] ! I0318 13:05:40.739953       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:10:57.023146    2404 command_runner.go:130] ! I0318 13:05:40.739960       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:10:57.023146    2404 command_runner.go:130] ! I0318 13:05:40.740081       1 main.go:223] Handling node with IPs: map[172.30.137.140:{}]
	I0318 13:10:57.023146    2404 command_runner.go:130] ! I0318 13:05:40.740151       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.3.0/24] 
	I0318 13:10:57.023146    2404 command_runner.go:130] ! I0318 13:05:50.748036       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:10:57.023146    2404 command_runner.go:130] ! I0318 13:05:50.748465       1 main.go:227] handling current node
	I0318 13:10:57.023146    2404 command_runner.go:130] ! I0318 13:05:50.748942       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:10:57.023146    2404 command_runner.go:130] ! I0318 13:05:50.749055       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:10:57.023146    2404 command_runner.go:130] ! I0318 13:05:50.749287       1 main.go:223] Handling node with IPs: map[172.30.137.140:{}]
	I0318 13:10:57.023146    2404 command_runner.go:130] ! I0318 13:05:50.749413       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.3.0/24] 
	I0318 13:10:57.023146    2404 command_runner.go:130] ! I0318 13:06:00.757350       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:10:57.023146    2404 command_runner.go:130] ! I0318 13:06:00.757434       1 main.go:227] handling current node
	I0318 13:10:57.023146    2404 command_runner.go:130] ! I0318 13:06:00.757452       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:10:57.023146    2404 command_runner.go:130] ! I0318 13:06:00.757460       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:10:57.023146    2404 command_runner.go:130] ! I0318 13:06:00.757853       1 main.go:223] Handling node with IPs: map[172.30.137.140:{}]
	I0318 13:10:57.023146    2404 command_runner.go:130] ! I0318 13:06:00.758194       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.3.0/24] 
	I0318 13:10:57.023146    2404 command_runner.go:130] ! I0318 13:06:10.766768       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:10:57.023146    2404 command_runner.go:130] ! I0318 13:06:10.766886       1 main.go:227] handling current node
	I0318 13:10:57.023146    2404 command_runner.go:130] ! I0318 13:06:10.766901       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:10:57.023146    2404 command_runner.go:130] ! I0318 13:06:10.766910       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:10:57.023146    2404 command_runner.go:130] ! I0318 13:06:10.767143       1 main.go:223] Handling node with IPs: map[172.30.137.140:{}]
	I0318 13:10:57.023146    2404 command_runner.go:130] ! I0318 13:06:10.767175       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.3.0/24] 
	I0318 13:10:57.023146    2404 command_runner.go:130] ! I0318 13:06:20.773530       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:10:57.023146    2404 command_runner.go:130] ! I0318 13:06:20.773656       1 main.go:227] handling current node
	I0318 13:10:57.023146    2404 command_runner.go:130] ! I0318 13:06:20.773729       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:10:57.023146    2404 command_runner.go:130] ! I0318 13:06:20.773741       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:10:57.023146    2404 command_runner.go:130] ! I0318 13:06:20.774155       1 main.go:223] Handling node with IPs: map[172.30.137.140:{}]
	I0318 13:10:57.023146    2404 command_runner.go:130] ! I0318 13:06:20.774478       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.3.0/24] 
	I0318 13:10:57.023146    2404 command_runner.go:130] ! I0318 13:06:30.792219       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:10:57.023146    2404 command_runner.go:130] ! I0318 13:06:30.792349       1 main.go:227] handling current node
	I0318 13:10:57.023146    2404 command_runner.go:130] ! I0318 13:06:30.792364       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:10:57.023146    2404 command_runner.go:130] ! I0318 13:06:30.792373       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:10:57.023146    2404 command_runner.go:130] ! I0318 13:06:30.792864       1 main.go:223] Handling node with IPs: map[172.30.137.140:{}]
	I0318 13:10:57.023146    2404 command_runner.go:130] ! I0318 13:06:30.792901       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.3.0/24] 
	I0318 13:10:57.023146    2404 command_runner.go:130] ! I0318 13:06:40.809219       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:10:57.023146    2404 command_runner.go:130] ! I0318 13:06:40.809451       1 main.go:227] handling current node
	I0318 13:10:57.023146    2404 command_runner.go:130] ! I0318 13:06:40.809484       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:10:57.023146    2404 command_runner.go:130] ! I0318 13:06:40.809508       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:10:57.024143    2404 command_runner.go:130] ! I0318 13:06:40.809841       1 main.go:223] Handling node with IPs: map[172.30.137.140:{}]
	I0318 13:10:57.024143    2404 command_runner.go:130] ! I0318 13:06:40.810075       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.3.0/24] 
	I0318 13:10:57.024143    2404 command_runner.go:130] ! I0318 13:06:50.822556       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:10:57.024143    2404 command_runner.go:130] ! I0318 13:06:50.822612       1 main.go:227] handling current node
	I0318 13:10:57.024143    2404 command_runner.go:130] ! I0318 13:06:50.822667       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:10:57.024143    2404 command_runner.go:130] ! I0318 13:06:50.822680       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:10:57.024143    2404 command_runner.go:130] ! I0318 13:06:50.822925       1 main.go:223] Handling node with IPs: map[172.30.137.140:{}]
	I0318 13:10:57.024143    2404 command_runner.go:130] ! I0318 13:06:50.823171       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.3.0/24] 
	I0318 13:10:57.024143    2404 command_runner.go:130] ! I0318 13:07:00.837923       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:10:57.024143    2404 command_runner.go:130] ! I0318 13:07:00.838008       1 main.go:227] handling current node
	I0318 13:10:57.024143    2404 command_runner.go:130] ! I0318 13:07:00.838022       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:10:57.024143    2404 command_runner.go:130] ! I0318 13:07:00.838030       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:10:57.024143    2404 command_runner.go:130] ! I0318 13:07:00.838429       1 main.go:223] Handling node with IPs: map[172.30.137.140:{}]
	I0318 13:10:57.024143    2404 command_runner.go:130] ! I0318 13:07:00.838666       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.3.0/24] 
	I0318 13:10:57.024143    2404 command_runner.go:130] ! I0318 13:07:10.854207       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:10:57.024143    2404 command_runner.go:130] ! I0318 13:07:10.854411       1 main.go:227] handling current node
	I0318 13:10:57.024143    2404 command_runner.go:130] ! I0318 13:07:10.854444       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:10:57.024143    2404 command_runner.go:130] ! I0318 13:07:10.854469       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:10:57.024143    2404 command_runner.go:130] ! I0318 13:07:10.854879       1 main.go:223] Handling node with IPs: map[172.30.137.140:{}]
	I0318 13:10:57.024143    2404 command_runner.go:130] ! I0318 13:07:10.855094       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.3.0/24] 
	I0318 13:10:57.024143    2404 command_runner.go:130] ! I0318 13:07:20.861534       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:10:57.024143    2404 command_runner.go:130] ! I0318 13:07:20.861671       1 main.go:227] handling current node
	I0318 13:10:57.024143    2404 command_runner.go:130] ! I0318 13:07:20.861685       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:10:57.024143    2404 command_runner.go:130] ! I0318 13:07:20.861692       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:10:57.024143    2404 command_runner.go:130] ! I0318 13:07:20.861818       1 main.go:223] Handling node with IPs: map[172.30.137.140:{}]
	I0318 13:10:57.024143    2404 command_runner.go:130] ! I0318 13:07:20.861845       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.3.0/24] 
	I0318 13:10:57.043569    2404 logs.go:123] Gathering logs for Docker ...
	I0318 13:10:57.043569    2404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 13:10:57.073951    2404 command_runner.go:130] > Mar 18 13:08:19 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0318 13:10:57.074760    2404 command_runner.go:130] > Mar 18 13:08:19 minikube cri-dockerd[222]: time="2024-03-18T13:08:19Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0318 13:10:57.074875    2404 command_runner.go:130] > Mar 18 13:08:19 minikube cri-dockerd[222]: time="2024-03-18T13:08:19Z" level=info msg="Start docker client with request timeout 0s"
	I0318 13:10:57.074875    2404 command_runner.go:130] > Mar 18 13:08:19 minikube cri-dockerd[222]: time="2024-03-18T13:08:19Z" level=fatal msg="failed to get docker version: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0318 13:10:57.074978    2404 command_runner.go:130] > Mar 18 13:08:19 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0318 13:10:57.074978    2404 command_runner.go:130] > Mar 18 13:08:19 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0318 13:10:57.075019    2404 command_runner.go:130] > Mar 18 13:08:19 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0318 13:10:57.075019    2404 command_runner.go:130] > Mar 18 13:08:22 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 1.
	I0318 13:10:57.075105    2404 command_runner.go:130] > Mar 18 13:08:22 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0318 13:10:57.075105    2404 command_runner.go:130] > Mar 18 13:08:22 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0318 13:10:57.075105    2404 command_runner.go:130] > Mar 18 13:08:22 minikube cri-dockerd[412]: time="2024-03-18T13:08:22Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0318 13:10:57.075184    2404 command_runner.go:130] > Mar 18 13:08:22 minikube cri-dockerd[412]: time="2024-03-18T13:08:22Z" level=info msg="Start docker client with request timeout 0s"
	I0318 13:10:57.075229    2404 command_runner.go:130] > Mar 18 13:08:22 minikube cri-dockerd[412]: time="2024-03-18T13:08:22Z" level=fatal msg="failed to get docker version: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0318 13:10:57.075229    2404 command_runner.go:130] > Mar 18 13:08:22 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0318 13:10:57.075285    2404 command_runner.go:130] > Mar 18 13:08:22 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0318 13:10:57.075331    2404 command_runner.go:130] > Mar 18 13:08:22 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0318 13:10:57.075350    2404 command_runner.go:130] > Mar 18 13:08:24 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 2.
	I0318 13:10:57.075428    2404 command_runner.go:130] > Mar 18 13:08:24 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0318 13:10:57.075428    2404 command_runner.go:130] > Mar 18 13:08:24 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0318 13:10:57.075457    2404 command_runner.go:130] > Mar 18 13:08:24 minikube cri-dockerd[434]: time="2024-03-18T13:08:24Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0318 13:10:57.075492    2404 command_runner.go:130] > Mar 18 13:08:24 minikube cri-dockerd[434]: time="2024-03-18T13:08:24Z" level=info msg="Start docker client with request timeout 0s"
	I0318 13:10:57.075525    2404 command_runner.go:130] > Mar 18 13:08:24 minikube cri-dockerd[434]: time="2024-03-18T13:08:24Z" level=fatal msg="failed to get docker version: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0318 13:10:57.075542    2404 command_runner.go:130] > Mar 18 13:08:24 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0318 13:10:57.075542    2404 command_runner.go:130] > Mar 18 13:08:24 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0318 13:10:57.075542    2404 command_runner.go:130] > Mar 18 13:08:24 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0318 13:10:57.075542    2404 command_runner.go:130] > Mar 18 13:08:26 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 3.
	I0318 13:10:57.075601    2404 command_runner.go:130] > Mar 18 13:08:26 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0318 13:10:57.075624    2404 command_runner.go:130] > Mar 18 13:08:26 minikube systemd[1]: cri-docker.service: Start request repeated too quickly.
	I0318 13:10:57.075624    2404 command_runner.go:130] > Mar 18 13:08:26 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0318 13:10:57.075624    2404 command_runner.go:130] > Mar 18 13:08:26 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0318 13:10:57.075624    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 systemd[1]: Starting Docker Application Container Engine...
	I0318 13:10:57.075684    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[662]: time="2024-03-18T13:09:08.926008208Z" level=info msg="Starting up"
	I0318 13:10:57.075717    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[662]: time="2024-03-18T13:09:08.927042019Z" level=info msg="containerd not running, starting managed containerd"
	I0318 13:10:57.075742    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[662]: time="2024-03-18T13:09:08.928263831Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=668
	I0318 13:10:57.075816    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.958180831Z" level=info msg="starting containerd" revision=dcf2847247e18caba8dce86522029642f60fe96b version=v1.7.14
	I0318 13:10:57.075816    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.981644866Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0318 13:10:57.075874    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.981729667Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0318 13:10:57.075907    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.981890169Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0318 13:10:57.075907    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.982007470Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0318 13:10:57.075947    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.982683977Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0318 13:10:57.075981    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.982866878Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0318 13:10:57.075981    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.983040880Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0318 13:10:57.076052    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.983180882Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0318 13:10:57.076108    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.983201082Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0318 13:10:57.076108    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.983210682Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0318 13:10:57.076141    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.983772288Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0318 13:10:57.076182    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.984603896Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0318 13:10:57.076217    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.987157222Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0318 13:10:57.076272    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.987245222Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0318 13:10:57.076272    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.987380024Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0318 13:10:57.076327    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.987459025Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0318 13:10:57.076389    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.988076231Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0318 13:10:57.076446    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.988215332Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0318 13:10:57.076446    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.988231932Z" level=info msg="metadata content store policy set" policy=shared
	I0318 13:10:57.076505    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.994386894Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0318 13:10:57.076505    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.994536096Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0318 13:10:57.076505    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.994574296Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0318 13:10:57.076560    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.994587696Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0318 13:10:57.076560    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.994605296Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0318 13:10:57.076560    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.994669597Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0318 13:10:57.076618    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.995239203Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0318 13:10:57.076691    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.995378304Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0318 13:10:57.076723    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.995441205Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0318 13:10:57.076723    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.995564406Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0318 13:10:57.076755    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.995751508Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0318 13:10:57.076755    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.995819808Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0318 13:10:57.076755    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.995841009Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0318 13:10:57.076755    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.995857509Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0318 13:10:57.076755    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.995870509Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0318 13:10:57.076755    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.995903509Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0318 13:10:57.076755    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.995925809Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0318 13:10:57.076755    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.995942710Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0318 13:10:57.076755    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.995963610Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0318 13:10:57.076755    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.995980410Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0318 13:10:57.076755    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.996091811Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0318 13:10:57.076755    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.996121511Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0318 13:10:57.076755    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.996134612Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0318 13:10:57.076755    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.996151212Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0318 13:10:57.076755    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.996165012Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0318 13:10:57.076755    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.996179412Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0318 13:10:57.076755    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.996194912Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0318 13:10:57.076755    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.996291913Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0318 13:10:57.076755    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.996404914Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0318 13:10:57.076755    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.996427114Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0318 13:10:57.076755    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.996445915Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0318 13:10:57.076755    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.996468515Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0318 13:10:57.076755    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.996497915Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0318 13:10:57.076755    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.996538416Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0318 13:10:57.076755    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.996560016Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0318 13:10:57.076755    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.997036721Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0318 13:10:57.076755    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.997287923Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	I0318 13:10:57.077280    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.997398924Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0318 13:10:57.077280    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.997518125Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	I0318 13:10:57.077318    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.998045931Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0318 13:10:57.077351    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.998612736Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0318 13:10:57.077351    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.998643637Z" level=info msg="NRI interface is disabled by configuration."
	I0318 13:10:57.077351    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.999395544Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0318 13:10:57.077351    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.999606346Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0318 13:10:57.077351    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.999683147Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0318 13:10:57.077351    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.999765648Z" level=info msg="containerd successfully booted in 0.044672s"
	I0318 13:10:57.077351    2404 command_runner.go:130] > Mar 18 13:09:09 multinode-894400 dockerd[662]: time="2024-03-18T13:09:09.982989696Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0318 13:10:57.077351    2404 command_runner.go:130] > Mar 18 13:09:10 multinode-894400 dockerd[662]: time="2024-03-18T13:09:10.138351976Z" level=info msg="Loading containers: start."
	I0318 13:10:57.077351    2404 command_runner.go:130] > Mar 18 13:09:10 multinode-894400 dockerd[662]: time="2024-03-18T13:09:10.545129368Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0318 13:10:57.077351    2404 command_runner.go:130] > Mar 18 13:09:10 multinode-894400 dockerd[662]: time="2024-03-18T13:09:10.626119356Z" level=info msg="Loading containers: done."
	I0318 13:10:57.077351    2404 command_runner.go:130] > Mar 18 13:09:10 multinode-894400 dockerd[662]: time="2024-03-18T13:09:10.653541890Z" level=info msg="Docker daemon" commit=061aa95 containerd-snapshotter=false storage-driver=overlay2 version=25.0.4
	I0318 13:10:57.077351    2404 command_runner.go:130] > Mar 18 13:09:10 multinode-894400 dockerd[662]: time="2024-03-18T13:09:10.654242899Z" level=info msg="Daemon has completed initialization"
	I0318 13:10:57.077351    2404 command_runner.go:130] > Mar 18 13:09:10 multinode-894400 dockerd[662]: time="2024-03-18T13:09:10.702026381Z" level=info msg="API listen on /var/run/docker.sock"
	I0318 13:10:57.077351    2404 command_runner.go:130] > Mar 18 13:09:10 multinode-894400 systemd[1]: Started Docker Application Container Engine.
	I0318 13:10:57.077351    2404 command_runner.go:130] > Mar 18 13:09:10 multinode-894400 dockerd[662]: time="2024-03-18T13:09:10.704980317Z" level=info msg="API listen on [::]:2376"
	I0318 13:10:57.077351    2404 command_runner.go:130] > Mar 18 13:09:35 multinode-894400 systemd[1]: Stopping Docker Application Container Engine...
	I0318 13:10:57.077351    2404 command_runner.go:130] > Mar 18 13:09:35 multinode-894400 dockerd[662]: time="2024-03-18T13:09:35.118112316Z" level=info msg="Processing signal 'terminated'"
	I0318 13:10:57.077351    2404 command_runner.go:130] > Mar 18 13:09:35 multinode-894400 dockerd[662]: time="2024-03-18T13:09:35.120561724Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0318 13:10:57.077351    2404 command_runner.go:130] > Mar 18 13:09:35 multinode-894400 dockerd[662]: time="2024-03-18T13:09:35.120708425Z" level=info msg="Daemon shutdown complete"
	I0318 13:10:57.077351    2404 command_runner.go:130] > Mar 18 13:09:35 multinode-894400 dockerd[662]: time="2024-03-18T13:09:35.120817525Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0318 13:10:57.077351    2404 command_runner.go:130] > Mar 18 13:09:35 multinode-894400 dockerd[662]: time="2024-03-18T13:09:35.120965826Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0318 13:10:57.077351    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 systemd[1]: docker.service: Deactivated successfully.
	I0318 13:10:57.077351    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 systemd[1]: Stopped Docker Application Container Engine.
	I0318 13:10:57.077351    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 systemd[1]: Starting Docker Application Container Engine...
	I0318 13:10:57.077351    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1052]: time="2024-03-18T13:09:36.188961030Z" level=info msg="Starting up"
	I0318 13:10:57.077351    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1052]: time="2024-03-18T13:09:36.190214934Z" level=info msg="containerd not running, starting managed containerd"
	I0318 13:10:57.077351    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1052]: time="2024-03-18T13:09:36.191301438Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1058
	I0318 13:10:57.077351    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.220111635Z" level=info msg="starting containerd" revision=dcf2847247e18caba8dce86522029642f60fe96b version=v1.7.14
	I0318 13:10:57.077351    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.244480717Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0318 13:10:57.077351    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.244510717Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0318 13:10:57.077868    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.244539917Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0318 13:10:57.077868    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.244552117Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0318 13:10:57.077939    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.244588817Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0318 13:10:57.077939    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.244601217Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0318 13:10:57.077939    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.244707818Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0318 13:10:57.077939    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.244791318Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0318 13:10:57.078055    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.244809418Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0318 13:10:57.078055    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.244818018Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0318 13:10:57.078100    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.244838218Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0318 13:10:57.078117    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.244975219Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0318 13:10:57.078117    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.248195830Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0318 13:10:57.078172    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.248302930Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0318 13:10:57.078216    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.248446530Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0318 13:10:57.078216    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.248548631Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0318 13:10:57.078216    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.248576331Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0318 13:10:57.078216    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.248593831Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0318 13:10:57.078216    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.248604331Z" level=info msg="metadata content store policy set" policy=shared
	I0318 13:10:57.078216    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.249888435Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0318 13:10:57.078216    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.249971436Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0318 13:10:57.078216    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.250624738Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0318 13:10:57.078216    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.250745538Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0318 13:10:57.078216    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.250859739Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0318 13:10:57.078216    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.251093339Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0318 13:10:57.078216    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.252590644Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0318 13:10:57.078216    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.252685145Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0318 13:10:57.078216    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.252703545Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0318 13:10:57.078216    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.252722945Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0318 13:10:57.078216    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.252736845Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0318 13:10:57.078216    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.252749745Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0318 13:10:57.078216    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.252793045Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0318 13:10:57.078216    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.252998846Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0318 13:10:57.078216    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.253020946Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0318 13:10:57.078216    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.253065546Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0318 13:10:57.078216    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.253080846Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0318 13:10:57.078216    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.253090746Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0318 13:10:57.078216    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.253177146Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0318 13:10:57.078738    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.253201547Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0318 13:10:57.078773    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.253215147Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0318 13:10:57.078773    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.253229847Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0318 13:10:57.078773    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.253243047Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0318 13:10:57.078864    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.253257847Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0318 13:10:57.078864    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.253270347Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0318 13:10:57.078864    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.253284147Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0318 13:10:57.078864    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.253297547Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0318 13:10:57.078864    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.253313047Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0318 13:10:57.078864    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.253331047Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0318 13:10:57.078864    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.253344647Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0318 13:10:57.078864    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.253357947Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0318 13:10:57.078864    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.253374747Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0318 13:10:57.078864    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.253395147Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0318 13:10:57.078864    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.253407847Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0318 13:10:57.078864    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.253420947Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0318 13:10:57.078864    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.253503448Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0318 13:10:57.078864    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.253519848Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	I0318 13:10:57.078864    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.253532848Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0318 13:10:57.078864    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.253542748Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	I0318 13:10:57.078864    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.253613548Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0318 13:10:57.078864    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.253652648Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0318 13:10:57.078864    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.253668048Z" level=info msg="NRI interface is disabled by configuration."
	I0318 13:10:57.078864    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.254026949Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0318 13:10:57.078864    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.254474051Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0318 13:10:57.078864    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.254684152Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0318 13:10:57.078864    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.254775452Z" level=info msg="containerd successfully booted in 0.035926s"
	I0318 13:10:57.078864    2404 command_runner.go:130] > Mar 18 13:09:37 multinode-894400 dockerd[1052]: time="2024-03-18T13:09:37.234846559Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0318 13:10:57.078864    2404 command_runner.go:130] > Mar 18 13:09:37 multinode-894400 dockerd[1052]: time="2024-03-18T13:09:37.265734263Z" level=info msg="Loading containers: start."
	I0318 13:10:57.079389    2404 command_runner.go:130] > Mar 18 13:09:37 multinode-894400 dockerd[1052]: time="2024-03-18T13:09:37.543045299Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0318 13:10:57.079389    2404 command_runner.go:130] > Mar 18 13:09:37 multinode-894400 dockerd[1052]: time="2024-03-18T13:09:37.620368360Z" level=info msg="Loading containers: done."
	I0318 13:10:57.079424    2404 command_runner.go:130] > Mar 18 13:09:37 multinode-894400 dockerd[1052]: time="2024-03-18T13:09:37.642056833Z" level=info msg="Docker daemon" commit=061aa95 containerd-snapshotter=false storage-driver=overlay2 version=25.0.4
	I0318 13:10:57.079424    2404 command_runner.go:130] > Mar 18 13:09:37 multinode-894400 dockerd[1052]: time="2024-03-18T13:09:37.642227734Z" level=info msg="Daemon has completed initialization"
	I0318 13:10:57.079424    2404 command_runner.go:130] > Mar 18 13:09:37 multinode-894400 dockerd[1052]: time="2024-03-18T13:09:37.686175082Z" level=info msg="API listen on /var/run/docker.sock"
	I0318 13:10:57.079424    2404 command_runner.go:130] > Mar 18 13:09:37 multinode-894400 systemd[1]: Started Docker Application Container Engine.
	I0318 13:10:57.079424    2404 command_runner.go:130] > Mar 18 13:09:37 multinode-894400 dockerd[1052]: time="2024-03-18T13:09:37.687135485Z" level=info msg="API listen on [::]:2376"
	I0318 13:10:57.079542    2404 command_runner.go:130] > Mar 18 13:09:38 multinode-894400 systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0318 13:10:57.079542    2404 command_runner.go:130] > Mar 18 13:09:38 multinode-894400 cri-dockerd[1278]: time="2024-03-18T13:09:38Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0318 13:10:57.079542    2404 command_runner.go:130] > Mar 18 13:09:38 multinode-894400 cri-dockerd[1278]: time="2024-03-18T13:09:38Z" level=info msg="Start docker client with request timeout 0s"
	I0318 13:10:57.079600    2404 command_runner.go:130] > Mar 18 13:09:38 multinode-894400 cri-dockerd[1278]: time="2024-03-18T13:09:38Z" level=info msg="Hairpin mode is set to hairpin-veth"
	I0318 13:10:57.079622    2404 command_runner.go:130] > Mar 18 13:09:38 multinode-894400 cri-dockerd[1278]: time="2024-03-18T13:09:38Z" level=info msg="Loaded network plugin cni"
	I0318 13:10:57.079647    2404 command_runner.go:130] > Mar 18 13:09:38 multinode-894400 cri-dockerd[1278]: time="2024-03-18T13:09:38Z" level=info msg="Docker cri networking managed by network plugin cni"
	I0318 13:10:57.079647    2404 command_runner.go:130] > Mar 18 13:09:38 multinode-894400 cri-dockerd[1278]: time="2024-03-18T13:09:38Z" level=info msg="Docker Info: &{ID:5695bce5-a75b-48a7-87b1-d9b6b787473a Containers:18 ContainersRunning:0 ContainersPaused:0 ContainersStopped:18 Images:10 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:[] Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:[] Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6tables:true Debug:false NFd:26 OomKillDisable:false NGoroutines:52 SystemTime:2024-03-18T13:09:38.671342607Z LoggingDriver:json-file CgroupDriver:cgroupfs CgroupVersion:2 NEventsListener:0 KernelVersion:5.10.207 OperatingSystem:Buildroot 2023.02.9 OSVersion:2023.02.9 OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:0xc00034fe30 NCPU:2 MemTotal:2216210432 GenericResources:[] DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:multinode-894400 Labels:[provider=hyperv] ExperimentalBuild:false ServerVersion:25.0.4 ClusterStore: ClusterAdvertise: Runtimes:map[io.containerd.runc.v2:{Path:runc Args:[] Shim:<nil>} runc:{Path:runc Args:[] Shim:<nil>}] DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:[] Nodes:0 Managers:0 Cluster:<nil> Warnings:[]} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dcf2847247e18caba8dce86522029642f60fe96b Expected:dcf2847247e18caba8dce86522029642f60fe96b} RuncCommit:{ID:51d5e94601ceffbbd85688df1c928ecccbfa4685 Expected:51d5e94601ceffbbd85688df1c928ecccbfa4685} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=builtin name=cgroupns] ProductLicense:Community Engine DefaultAddressPools:[] Warnings:[]}"
	I0318 13:10:57.079647    2404 command_runner.go:130] > Mar 18 13:09:38 multinode-894400 cri-dockerd[1278]: time="2024-03-18T13:09:38Z" level=info msg="Setting cgroupDriver cgroupfs"
	I0318 13:10:57.079647    2404 command_runner.go:130] > Mar 18 13:09:38 multinode-894400 cri-dockerd[1278]: time="2024-03-18T13:09:38Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	I0318 13:10:57.079647    2404 command_runner.go:130] > Mar 18 13:09:38 multinode-894400 cri-dockerd[1278]: time="2024-03-18T13:09:38Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	I0318 13:10:57.079647    2404 command_runner.go:130] > Mar 18 13:09:38 multinode-894400 cri-dockerd[1278]: time="2024-03-18T13:09:38Z" level=info msg="Start cri-dockerd grpc backend"
	I0318 13:10:57.079647    2404 command_runner.go:130] > Mar 18 13:09:38 multinode-894400 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	I0318 13:10:57.079647    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 cri-dockerd[1278]: time="2024-03-18T13:09:43Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-5dd5756b68-456tm_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"d001e299e996b74f090c73f4e98605ef0d323a96826d23424e724ca8e7fe466a\""
	I0318 13:10:57.079647    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 cri-dockerd[1278]: time="2024-03-18T13:09:43Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"busybox-5b5d89c9d6-c2997_default\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"a23c1189be7c39f22c8a59ba41b198519894c9783ecad8dcda79fb8a76f05254\""
	I0318 13:10:57.079647    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:43.791205184Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 13:10:57.079647    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:43.791356085Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 13:10:57.079647    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:43.791396985Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:10:57.079647    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:43.791577685Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:10:57.079647    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:43.838312843Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 13:10:57.080188    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:43.838494344Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 13:10:57.080243    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:43.838510044Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:10:57.080243    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:43.838727044Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:10:57.080243    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:43.951016023Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 13:10:57.080359    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:43.951141424Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 13:10:57.080359    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:43.951152624Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:10:57.080412    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:43.951369125Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:10:57.080412    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 cri-dockerd[1278]: time="2024-03-18T13:09:43Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/066206d4c52cb784fe7c2001b5e196c6e3521560c412808e8d9ddf742aa008e4/resolv.conf as [nameserver 172.30.128.1]"
	I0318 13:10:57.080412    2404 command_runner.go:130] > Mar 18 13:09:44 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:44.020194457Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 13:10:57.080412    2404 command_runner.go:130] > Mar 18 13:09:44 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:44.020684858Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 13:10:57.080412    2404 command_runner.go:130] > Mar 18 13:09:44 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:44.023241167Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:10:57.080412    2404 command_runner.go:130] > Mar 18 13:09:44 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:44.023675469Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:10:57.080412    2404 command_runner.go:130] > Mar 18 13:09:44 multinode-894400 cri-dockerd[1278]: time="2024-03-18T13:09:44Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/bc7236a19957e321c1961c944824f2b4624bd7a289ab4ecefe33a08d4af88e2b/resolv.conf as [nameserver 172.30.128.1]"
	I0318 13:10:57.080412    2404 command_runner.go:130] > Mar 18 13:09:44 multinode-894400 cri-dockerd[1278]: time="2024-03-18T13:09:44Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/6fb3325d3c1005ffbbbfe7b136924ed5ff0c71db51f79a50f7179c108c238d47/resolv.conf as [nameserver 172.30.128.1]"
	I0318 13:10:57.080412    2404 command_runner.go:130] > Mar 18 13:09:44 multinode-894400 cri-dockerd[1278]: time="2024-03-18T13:09:44Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/354f3c44a34fcbd9ca7826066f2b4c40b3a6551aa632389815c5d24e85e0472b/resolv.conf as [nameserver 172.30.128.1]"
	I0318 13:10:57.080412    2404 command_runner.go:130] > Mar 18 13:09:44 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:44.396374926Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 13:10:57.080412    2404 command_runner.go:130] > Mar 18 13:09:44 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:44.396436126Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 13:10:57.080412    2404 command_runner.go:130] > Mar 18 13:09:44 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:44.396447326Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:10:57.080412    2404 command_runner.go:130] > Mar 18 13:09:44 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:44.396626927Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:10:57.080412    2404 command_runner.go:130] > Mar 18 13:09:44 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:44.467642467Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 13:10:57.080412    2404 command_runner.go:130] > Mar 18 13:09:44 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:44.467879868Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 13:10:57.080412    2404 command_runner.go:130] > Mar 18 13:09:44 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:44.468180469Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:10:57.080412    2404 command_runner.go:130] > Mar 18 13:09:44 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:44.468559970Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:10:57.080412    2404 command_runner.go:130] > Mar 18 13:09:44 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:44.476573097Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 13:10:57.080412    2404 command_runner.go:130] > Mar 18 13:09:44 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:44.476618697Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 13:10:57.080412    2404 command_runner.go:130] > Mar 18 13:09:44 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:44.476631197Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:10:57.080412    2404 command_runner.go:130] > Mar 18 13:09:44 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:44.476702797Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:10:57.080412    2404 command_runner.go:130] > Mar 18 13:09:44 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:44.482324416Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 13:10:57.080944    2404 command_runner.go:130] > Mar 18 13:09:44 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:44.482501517Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 13:10:57.080944    2404 command_runner.go:130] > Mar 18 13:09:44 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:44.482648417Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:10:57.081019    2404 command_runner.go:130] > Mar 18 13:09:44 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:44.482918618Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:10:57.081019    2404 command_runner.go:130] > Mar 18 13:09:48 multinode-894400 cri-dockerd[1278]: time="2024-03-18T13:09:48Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	I0318 13:10:57.081118    2404 command_runner.go:130] > Mar 18 13:09:49 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:49.545677603Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 13:10:57.081118    2404 command_runner.go:130] > Mar 18 13:09:49 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:49.548609313Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 13:10:57.081158    2404 command_runner.go:130] > Mar 18 13:09:49 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:49.548646013Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:10:57.081193    2404 command_runner.go:130] > Mar 18 13:09:49 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:49.549168715Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:10:57.081259    2404 command_runner.go:130] > Mar 18 13:09:49 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:49.592129660Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 13:10:57.081259    2404 command_runner.go:130] > Mar 18 13:09:49 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:49.592185160Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 13:10:57.081259    2404 command_runner.go:130] > Mar 18 13:09:49 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:49.592195760Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:10:57.081386    2404 command_runner.go:130] > Mar 18 13:09:49 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:49.592280460Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:10:57.081421    2404 command_runner.go:130] > Mar 18 13:09:49 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:49.615117337Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 13:10:57.081448    2404 command_runner.go:130] > Mar 18 13:09:49 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:49.615393238Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 13:10:57.081471    2404 command_runner.go:130] > Mar 18 13:09:49 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:49.615610139Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:10:57.081471    2404 command_runner.go:130] > Mar 18 13:09:49 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:49.621669759Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:10:57.081537    2404 command_runner.go:130] > Mar 18 13:09:49 multinode-894400 cri-dockerd[1278]: time="2024-03-18T13:09:49Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a9f21749669fe37a2d5bee9e7f45bf84eca6ae55870de8ac18bc774db7f81643/resolv.conf as [nameserver 172.30.128.1]"
	I0318 13:10:57.081563    2404 command_runner.go:130] > Mar 18 13:09:49 multinode-894400 cri-dockerd[1278]: time="2024-03-18T13:09:49Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/41035eff3b7db97e56ba25be141bf644acea4ddcb9d92ba3b421e6fa855a09af/resolv.conf as [nameserver 172.30.128.1]"
	I0318 13:10:57.081563    2404 command_runner.go:130] > Mar 18 13:09:49 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:49.995795822Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 13:10:57.081658    2404 command_runner.go:130] > Mar 18 13:09:49 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:49.995895422Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 13:10:57.081701    2404 command_runner.go:130] > Mar 18 13:09:49 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:49.995916522Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:10:57.081701    2404 command_runner.go:130] > Mar 18 13:09:49 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:49.996021523Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:10:57.081787    2404 command_runner.go:130] > Mar 18 13:09:50 multinode-894400 cri-dockerd[1278]: time="2024-03-18T13:09:50Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/86d74dec812cfd4842de7473d45ff320bfb2283d09d2d05eee29e45cbde9f5e9/resolv.conf as [nameserver 172.30.128.1]"
	I0318 13:10:57.081813    2404 command_runner.go:130] > Mar 18 13:09:50 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:50.171141514Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 13:10:57.081813    2404 command_runner.go:130] > Mar 18 13:09:50 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:50.171335814Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 13:10:57.081813    2404 command_runner.go:130] > Mar 18 13:09:50 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:50.171461415Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:10:57.081916    2404 command_runner.go:130] > Mar 18 13:09:50 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:50.171764216Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:10:57.081916    2404 command_runner.go:130] > Mar 18 13:09:50 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:50.391481057Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 13:10:57.081916    2404 command_runner.go:130] > Mar 18 13:09:50 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:50.391826158Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 13:10:57.082048    2404 command_runner.go:130] > Mar 18 13:09:50 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:50.391990059Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:10:57.082048    2404 command_runner.go:130] > Mar 18 13:09:50 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:50.393600364Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:10:57.082048    2404 command_runner.go:130] > Mar 18 13:10:20 multinode-894400 dockerd[1052]: time="2024-03-18T13:10:20.550892922Z" level=info msg="ignoring event" container=46c0cf90d385f0a29de6e795bc03f2d1837ea1819ef2389992a26aaf77e63264 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0318 13:10:57.082120    2404 command_runner.go:130] > Mar 18 13:10:20 multinode-894400 dockerd[1058]: time="2024-03-18T13:10:20.551487227Z" level=info msg="shim disconnected" id=46c0cf90d385f0a29de6e795bc03f2d1837ea1819ef2389992a26aaf77e63264 namespace=moby
	I0318 13:10:57.082197    2404 command_runner.go:130] > Mar 18 13:10:20 multinode-894400 dockerd[1058]: time="2024-03-18T13:10:20.551627628Z" level=warning msg="cleaning up after shim disconnected" id=46c0cf90d385f0a29de6e795bc03f2d1837ea1819ef2389992a26aaf77e63264 namespace=moby
	I0318 13:10:57.082252    2404 command_runner.go:130] > Mar 18 13:10:20 multinode-894400 dockerd[1058]: time="2024-03-18T13:10:20.551639828Z" level=info msg="cleaning up dead shim" namespace=moby
	I0318 13:10:57.082252    2404 command_runner.go:130] > Mar 18 13:10:35 multinode-894400 dockerd[1058]: time="2024-03-18T13:10:35.200900512Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 13:10:57.082252    2404 command_runner.go:130] > Mar 18 13:10:35 multinode-894400 dockerd[1058]: time="2024-03-18T13:10:35.202882722Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 13:10:57.082315    2404 command_runner.go:130] > Mar 18 13:10:35 multinode-894400 dockerd[1058]: time="2024-03-18T13:10:35.203198024Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:10:57.082377    2404 command_runner.go:130] > Mar 18 13:10:35 multinode-894400 dockerd[1058]: time="2024-03-18T13:10:35.203763327Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:10:57.082377    2404 command_runner.go:130] > Mar 18 13:10:53 multinode-894400 dockerd[1058]: time="2024-03-18T13:10:53.250783392Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 13:10:57.082377    2404 command_runner.go:130] > Mar 18 13:10:53 multinode-894400 dockerd[1058]: time="2024-03-18T13:10:53.252016097Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 13:10:57.082377    2404 command_runner.go:130] > Mar 18 13:10:53 multinode-894400 dockerd[1058]: time="2024-03-18T13:10:53.252234698Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:10:57.082377    2404 command_runner.go:130] > Mar 18 13:10:53 multinode-894400 dockerd[1058]: time="2024-03-18T13:10:53.252566299Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:10:57.082377    2404 command_runner.go:130] > Mar 18 13:10:53 multinode-894400 dockerd[1058]: time="2024-03-18T13:10:53.259013124Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 13:10:57.082377    2404 command_runner.go:130] > Mar 18 13:10:53 multinode-894400 dockerd[1058]: time="2024-03-18T13:10:53.259187125Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 13:10:57.082377    2404 command_runner.go:130] > Mar 18 13:10:53 multinode-894400 dockerd[1058]: time="2024-03-18T13:10:53.259204725Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:10:57.082377    2404 command_runner.go:130] > Mar 18 13:10:53 multinode-894400 dockerd[1058]: time="2024-03-18T13:10:53.259319625Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:10:57.082377    2404 command_runner.go:130] > Mar 18 13:10:53 multinode-894400 cri-dockerd[1278]: time="2024-03-18T13:10:53Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/97583cc14f115cf8a4e90889b5f2beda90a81f97fd592e5e5acff8d35e305a59/resolv.conf as [nameserver 172.30.128.1]"
	I0318 13:10:57.082377    2404 command_runner.go:130] > Mar 18 13:10:53 multinode-894400 cri-dockerd[1278]: time="2024-03-18T13:10:53Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/e20878b8092c291820adeb66f1b491dcef85c0699c57800cced7d3530d2a07fb/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	I0318 13:10:57.082377    2404 command_runner.go:130] > Mar 18 13:10:53 multinode-894400 dockerd[1058]: time="2024-03-18T13:10:53.818847676Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 13:10:57.082377    2404 command_runner.go:130] > Mar 18 13:10:53 multinode-894400 dockerd[1058]: time="2024-03-18T13:10:53.818997976Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 13:10:57.082377    2404 command_runner.go:130] > Mar 18 13:10:53 multinode-894400 dockerd[1058]: time="2024-03-18T13:10:53.819021476Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:10:57.082377    2404 command_runner.go:130] > Mar 18 13:10:53 multinode-894400 dockerd[1058]: time="2024-03-18T13:10:53.819463578Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:10:57.082377    2404 command_runner.go:130] > Mar 18 13:10:53 multinode-894400 dockerd[1058]: time="2024-03-18T13:10:53.825706506Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 13:10:57.082377    2404 command_runner.go:130] > Mar 18 13:10:53 multinode-894400 dockerd[1058]: time="2024-03-18T13:10:53.825766006Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 13:10:57.082377    2404 command_runner.go:130] > Mar 18 13:10:53 multinode-894400 dockerd[1058]: time="2024-03-18T13:10:53.825780706Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:10:57.082377    2404 command_runner.go:130] > Mar 18 13:10:53 multinode-894400 dockerd[1058]: time="2024-03-18T13:10:53.825864707Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:10:57.082377    2404 command_runner.go:130] > Mar 18 13:10:56 multinode-894400 dockerd[1052]: 2024/03/18 13:10:56 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0318 13:10:57.082377    2404 command_runner.go:130] > Mar 18 13:10:56 multinode-894400 dockerd[1052]: 2024/03/18 13:10:56 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0318 13:10:57.082943    2404 command_runner.go:130] > Mar 18 13:10:57 multinode-894400 dockerd[1052]: 2024/03/18 13:10:57 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0318 13:10:57.112261    2404 logs.go:123] Gathering logs for kube-apiserver [fc4430c7fa20] ...
	I0318 13:10:57.112261    2404 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc4430c7fa20"
	I0318 13:10:57.138280    2404 command_runner.go:130] ! I0318 13:09:45.117348       1 options.go:220] external host was not specified, using 172.30.130.156
	I0318 13:10:57.138280    2404 command_runner.go:130] ! I0318 13:09:45.120803       1 server.go:148] Version: v1.28.4
	I0318 13:10:57.139198    2404 command_runner.go:130] ! I0318 13:09:45.120988       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 13:10:57.139198    2404 command_runner.go:130] ! I0318 13:09:45.770080       1 shared_informer.go:311] Waiting for caches to sync for node_authorizer
	I0318 13:10:57.139198    2404 command_runner.go:130] ! I0318 13:09:45.795010       1 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0318 13:10:57.139334    2404 command_runner.go:130] ! I0318 13:09:45.795318       1 plugins.go:161] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0318 13:10:57.139334    2404 command_runner.go:130] ! I0318 13:09:45.795878       1 instance.go:298] Using reconciler: lease
	I0318 13:10:57.139334    2404 command_runner.go:130] ! I0318 13:09:46.836486       1 handler.go:232] Adding GroupVersion apiextensions.k8s.io v1 to ResourceManager
	I0318 13:10:57.139395    2404 command_runner.go:130] ! W0318 13:09:46.836605       1 genericapiserver.go:744] Skipping API apiextensions.k8s.io/v1beta1 because it has no resources.
	I0318 13:10:57.139395    2404 command_runner.go:130] ! I0318 13:09:47.074638       1 handler.go:232] Adding GroupVersion  v1 to ResourceManager
	I0318 13:10:57.139395    2404 command_runner.go:130] ! I0318 13:09:47.074978       1 instance.go:709] API group "internal.apiserver.k8s.io" is not enabled, skipping.
	I0318 13:10:57.139480    2404 command_runner.go:130] ! I0318 13:09:47.452713       1 instance.go:709] API group "resource.k8s.io" is not enabled, skipping.
	I0318 13:10:57.139480    2404 command_runner.go:130] ! I0318 13:09:47.465860       1 handler.go:232] Adding GroupVersion authentication.k8s.io v1 to ResourceManager
	I0318 13:10:57.139480    2404 command_runner.go:130] ! W0318 13:09:47.465973       1 genericapiserver.go:744] Skipping API authentication.k8s.io/v1beta1 because it has no resources.
	I0318 13:10:57.139480    2404 command_runner.go:130] ! W0318 13:09:47.465981       1 genericapiserver.go:744] Skipping API authentication.k8s.io/v1alpha1 because it has no resources.
	I0318 13:10:57.139555    2404 command_runner.go:130] ! I0318 13:09:47.466706       1 handler.go:232] Adding GroupVersion authorization.k8s.io v1 to ResourceManager
	I0318 13:10:57.139580    2404 command_runner.go:130] ! W0318 13:09:47.466787       1 genericapiserver.go:744] Skipping API authorization.k8s.io/v1beta1 because it has no resources.
	I0318 13:10:57.139580    2404 command_runner.go:130] ! I0318 13:09:47.467862       1 handler.go:232] Adding GroupVersion autoscaling v2 to ResourceManager
	I0318 13:10:57.139580    2404 command_runner.go:130] ! I0318 13:09:47.468840       1 handler.go:232] Adding GroupVersion autoscaling v1 to ResourceManager
	I0318 13:10:57.139580    2404 command_runner.go:130] ! W0318 13:09:47.468926       1 genericapiserver.go:744] Skipping API autoscaling/v2beta1 because it has no resources.
	I0318 13:10:57.139580    2404 command_runner.go:130] ! W0318 13:09:47.468934       1 genericapiserver.go:744] Skipping API autoscaling/v2beta2 because it has no resources.
	I0318 13:10:57.139580    2404 command_runner.go:130] ! I0318 13:09:47.470928       1 handler.go:232] Adding GroupVersion batch v1 to ResourceManager
	I0318 13:10:57.139580    2404 command_runner.go:130] ! W0318 13:09:47.471074       1 genericapiserver.go:744] Skipping API batch/v1beta1 because it has no resources.
	I0318 13:10:57.139580    2404 command_runner.go:130] ! I0318 13:09:47.472121       1 handler.go:232] Adding GroupVersion certificates.k8s.io v1 to ResourceManager
	I0318 13:10:57.139580    2404 command_runner.go:130] ! W0318 13:09:47.472195       1 genericapiserver.go:744] Skipping API certificates.k8s.io/v1beta1 because it has no resources.
	I0318 13:10:57.139580    2404 command_runner.go:130] ! W0318 13:09:47.472202       1 genericapiserver.go:744] Skipping API certificates.k8s.io/v1alpha1 because it has no resources.
	I0318 13:10:57.139580    2404 command_runner.go:130] ! I0318 13:09:47.472773       1 handler.go:232] Adding GroupVersion coordination.k8s.io v1 to ResourceManager
	I0318 13:10:57.139580    2404 command_runner.go:130] ! W0318 13:09:47.472852       1 genericapiserver.go:744] Skipping API coordination.k8s.io/v1beta1 because it has no resources.
	I0318 13:10:57.139580    2404 command_runner.go:130] ! W0318 13:09:47.472898       1 genericapiserver.go:744] Skipping API discovery.k8s.io/v1beta1 because it has no resources.
	I0318 13:10:57.139580    2404 command_runner.go:130] ! I0318 13:09:47.473727       1 handler.go:232] Adding GroupVersion discovery.k8s.io v1 to ResourceManager
	I0318 13:10:57.139580    2404 command_runner.go:130] ! I0318 13:09:47.476475       1 handler.go:232] Adding GroupVersion networking.k8s.io v1 to ResourceManager
	I0318 13:10:57.139580    2404 command_runner.go:130] ! W0318 13:09:47.476612       1 genericapiserver.go:744] Skipping API networking.k8s.io/v1beta1 because it has no resources.
	I0318 13:10:57.139580    2404 command_runner.go:130] ! W0318 13:09:47.476620       1 genericapiserver.go:744] Skipping API networking.k8s.io/v1alpha1 because it has no resources.
	I0318 13:10:57.139580    2404 command_runner.go:130] ! I0318 13:09:47.477234       1 handler.go:232] Adding GroupVersion node.k8s.io v1 to ResourceManager
	I0318 13:10:57.139580    2404 command_runner.go:130] ! W0318 13:09:47.477314       1 genericapiserver.go:744] Skipping API node.k8s.io/v1beta1 because it has no resources.
	I0318 13:10:57.139580    2404 command_runner.go:130] ! W0318 13:09:47.477321       1 genericapiserver.go:744] Skipping API node.k8s.io/v1alpha1 because it has no resources.
	I0318 13:10:57.139580    2404 command_runner.go:130] ! I0318 13:09:47.478143       1 handler.go:232] Adding GroupVersion policy v1 to ResourceManager
	I0318 13:10:57.139580    2404 command_runner.go:130] ! W0318 13:09:47.478217       1 genericapiserver.go:744] Skipping API policy/v1beta1 because it has no resources.
	I0318 13:10:57.140111    2404 command_runner.go:130] ! I0318 13:09:47.480195       1 handler.go:232] Adding GroupVersion rbac.authorization.k8s.io v1 to ResourceManager
	I0318 13:10:57.140165    2404 command_runner.go:130] ! W0318 13:09:47.480271       1 genericapiserver.go:744] Skipping API rbac.authorization.k8s.io/v1beta1 because it has no resources.
	I0318 13:10:57.140165    2404 command_runner.go:130] ! W0318 13:09:47.480279       1 genericapiserver.go:744] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
	I0318 13:10:57.140165    2404 command_runner.go:130] ! I0318 13:09:47.480731       1 handler.go:232] Adding GroupVersion scheduling.k8s.io v1 to ResourceManager
	I0318 13:10:57.140165    2404 command_runner.go:130] ! W0318 13:09:47.480812       1 genericapiserver.go:744] Skipping API scheduling.k8s.io/v1beta1 because it has no resources.
	I0318 13:10:57.140165    2404 command_runner.go:130] ! W0318 13:09:47.480819       1 genericapiserver.go:744] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
	I0318 13:10:57.140165    2404 command_runner.go:130] ! I0318 13:09:47.493837       1 handler.go:232] Adding GroupVersion storage.k8s.io v1 to ResourceManager
	I0318 13:10:57.140284    2404 command_runner.go:130] ! W0318 13:09:47.494098       1 genericapiserver.go:744] Skipping API storage.k8s.io/v1beta1 because it has no resources.
	I0318 13:10:57.140284    2404 command_runner.go:130] ! W0318 13:09:47.494198       1 genericapiserver.go:744] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
	I0318 13:10:57.140284    2404 command_runner.go:130] ! I0318 13:09:47.499689       1 handler.go:232] Adding GroupVersion flowcontrol.apiserver.k8s.io v1beta3 to ResourceManager
	I0318 13:10:57.140341    2404 command_runner.go:130] ! I0318 13:09:47.506631       1 handler.go:232] Adding GroupVersion flowcontrol.apiserver.k8s.io v1beta2 to ResourceManager
	I0318 13:10:57.140359    2404 command_runner.go:130] ! W0318 13:09:47.506664       1 genericapiserver.go:744] Skipping API flowcontrol.apiserver.k8s.io/v1beta1 because it has no resources.
	I0318 13:10:57.140359    2404 command_runner.go:130] ! W0318 13:09:47.506671       1 genericapiserver.go:744] Skipping API flowcontrol.apiserver.k8s.io/v1alpha1 because it has no resources.
	I0318 13:10:57.140359    2404 command_runner.go:130] ! I0318 13:09:47.512288       1 handler.go:232] Adding GroupVersion apps v1 to ResourceManager
	I0318 13:10:57.140419    2404 command_runner.go:130] ! W0318 13:09:47.512371       1 genericapiserver.go:744] Skipping API apps/v1beta2 because it has no resources.
	I0318 13:10:57.140442    2404 command_runner.go:130] ! W0318 13:09:47.512378       1 genericapiserver.go:744] Skipping API apps/v1beta1 because it has no resources.
	I0318 13:10:57.140470    2404 command_runner.go:130] ! I0318 13:09:47.513443       1 handler.go:232] Adding GroupVersion admissionregistration.k8s.io v1 to ResourceManager
	I0318 13:10:57.140470    2404 command_runner.go:130] ! W0318 13:09:47.513547       1 genericapiserver.go:744] Skipping API admissionregistration.k8s.io/v1beta1 because it has no resources.
	I0318 13:10:57.140470    2404 command_runner.go:130] ! W0318 13:09:47.513557       1 genericapiserver.go:744] Skipping API admissionregistration.k8s.io/v1alpha1 because it has no resources.
	I0318 13:10:57.140470    2404 command_runner.go:130] ! I0318 13:09:47.514339       1 handler.go:232] Adding GroupVersion events.k8s.io v1 to ResourceManager
	I0318 13:10:57.140470    2404 command_runner.go:130] ! W0318 13:09:47.514435       1 genericapiserver.go:744] Skipping API events.k8s.io/v1beta1 because it has no resources.
	I0318 13:10:57.140470    2404 command_runner.go:130] ! I0318 13:09:47.536002       1 handler.go:232] Adding GroupVersion apiregistration.k8s.io v1 to ResourceManager
	I0318 13:10:57.140470    2404 command_runner.go:130] ! W0318 13:09:47.536061       1 genericapiserver.go:744] Skipping API apiregistration.k8s.io/v1beta1 because it has no resources.
	I0318 13:10:57.140470    2404 command_runner.go:130] ! I0318 13:09:48.221475       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0318 13:10:57.140470    2404 command_runner.go:130] ! I0318 13:09:48.221960       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0318 13:10:57.140470    2404 command_runner.go:130] ! I0318 13:09:48.222438       1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	I0318 13:10:57.140470    2404 command_runner.go:130] ! I0318 13:09:48.222942       1 secure_serving.go:213] Serving securely on [::]:8443
	I0318 13:10:57.140470    2404 command_runner.go:130] ! I0318 13:09:48.223022       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0318 13:10:57.140470    2404 command_runner.go:130] ! I0318 13:09:48.223440       1 controller.go:78] Starting OpenAPI AggregationController
	I0318 13:10:57.140470    2404 command_runner.go:130] ! I0318 13:09:48.224862       1 gc_controller.go:78] Starting apiserver lease garbage collector
	I0318 13:10:57.140470    2404 command_runner.go:130] ! I0318 13:09:48.225271       1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
	I0318 13:10:57.140470    2404 command_runner.go:130] ! I0318 13:09:48.225417       1 shared_informer.go:311] Waiting for caches to sync for cluster_authentication_trust_controller
	I0318 13:10:57.140470    2404 command_runner.go:130] ! I0318 13:09:48.225564       1 apf_controller.go:372] Starting API Priority and Fairness config controller
	I0318 13:10:57.140470    2404 command_runner.go:130] ! I0318 13:09:48.228940       1 gc_controller.go:78] Starting apiserver lease garbage collector
	I0318 13:10:57.140470    2404 command_runner.go:130] ! I0318 13:09:48.229462       1 controller.go:116] Starting legacy_token_tracking_controller
	I0318 13:10:57.140470    2404 command_runner.go:130] ! I0318 13:09:48.229644       1 shared_informer.go:311] Waiting for caches to sync for configmaps
	I0318 13:10:57.140470    2404 command_runner.go:130] ! I0318 13:09:48.230522       1 system_namespaces_controller.go:67] Starting system namespaces controller
	I0318 13:10:57.140470    2404 command_runner.go:130] ! I0318 13:09:48.230832       1 controller.go:80] Starting OpenAPI V3 AggregationController
	I0318 13:10:57.140470    2404 command_runner.go:130] ! I0318 13:09:48.231097       1 aggregator.go:164] waiting for initial CRD sync...
	I0318 13:10:57.141001    2404 command_runner.go:130] ! I0318 13:09:48.231395       1 customresource_discovery_controller.go:289] Starting DiscoveryController
	I0318 13:10:57.141001    2404 command_runner.go:130] ! I0318 13:09:48.231642       1 available_controller.go:423] Starting AvailableConditionController
	I0318 13:10:57.141057    2404 command_runner.go:130] ! I0318 13:09:48.231846       1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
	I0318 13:10:57.141057    2404 command_runner.go:130] ! I0318 13:09:48.232024       1 dynamic_serving_content.go:132] "Starting controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	I0318 13:10:57.141057    2404 command_runner.go:130] ! I0318 13:09:48.232223       1 apiservice_controller.go:97] Starting APIServiceRegistrationController
	I0318 13:10:57.141168    2404 command_runner.go:130] ! I0318 13:09:48.232638       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
	I0318 13:10:57.141168    2404 command_runner.go:130] ! I0318 13:09:48.233228       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0318 13:10:57.141168    2404 command_runner.go:130] ! I0318 13:09:48.233501       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0318 13:10:57.141254    2404 command_runner.go:130] ! I0318 13:09:48.242598       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0318 13:10:57.141288    2404 command_runner.go:130] ! I0318 13:09:48.242850       1 shared_informer.go:311] Waiting for caches to sync for crd-autoregister
	I0318 13:10:57.141288    2404 command_runner.go:130] ! I0318 13:09:48.243085       1 controller.go:134] Starting OpenAPI controller
	I0318 13:10:57.141288    2404 command_runner.go:130] ! I0318 13:09:48.243289       1 controller.go:85] Starting OpenAPI V3 controller
	I0318 13:10:57.141288    2404 command_runner.go:130] ! I0318 13:09:48.243558       1 naming_controller.go:291] Starting NamingConditionController
	I0318 13:10:57.141346    2404 command_runner.go:130] ! I0318 13:09:48.243852       1 establishing_controller.go:76] Starting EstablishingController
	I0318 13:10:57.141369    2404 command_runner.go:130] ! I0318 13:09:48.244899       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0318 13:10:57.141396    2404 command_runner.go:130] ! I0318 13:09:48.245178       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0318 13:10:57.141396    2404 command_runner.go:130] ! I0318 13:09:48.245796       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0318 13:10:57.141396    2404 command_runner.go:130] ! I0318 13:09:48.231958       1 handler_discovery.go:412] Starting ResourceDiscoveryManager
	I0318 13:10:57.141396    2404 command_runner.go:130] ! I0318 13:09:48.403749       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0318 13:10:57.141396    2404 command_runner.go:130] ! I0318 13:09:48.426183       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I0318 13:10:57.141396    2404 command_runner.go:130] ! I0318 13:09:48.426213       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I0318 13:10:57.141396    2404 command_runner.go:130] ! I0318 13:09:48.426382       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0318 13:10:57.141396    2404 command_runner.go:130] ! I0318 13:09:48.432175       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0318 13:10:57.141396    2404 command_runner.go:130] ! I0318 13:09:48.433073       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0318 13:10:57.141396    2404 command_runner.go:130] ! I0318 13:09:48.433297       1 shared_informer.go:318] Caches are synced for configmaps
	I0318 13:10:57.141396    2404 command_runner.go:130] ! I0318 13:09:48.444484       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0318 13:10:57.141396    2404 command_runner.go:130] ! I0318 13:09:48.444708       1 aggregator.go:166] initial CRD sync complete...
	I0318 13:10:57.141396    2404 command_runner.go:130] ! I0318 13:09:48.444961       1 autoregister_controller.go:141] Starting autoregister controller
	I0318 13:10:57.141396    2404 command_runner.go:130] ! I0318 13:09:48.445263       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0318 13:10:57.141396    2404 command_runner.go:130] ! I0318 13:09:48.446443       1 cache.go:39] Caches are synced for autoregister controller
	I0318 13:10:57.141396    2404 command_runner.go:130] ! I0318 13:09:48.471536       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0318 13:10:57.141396    2404 command_runner.go:130] ! I0318 13:09:49.257477       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0318 13:10:57.141396    2404 command_runner.go:130] ! W0318 13:09:49.806994       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [172.30.130.156]
	I0318 13:10:57.141396    2404 command_runner.go:130] ! I0318 13:09:49.809655       1 controller.go:624] quota admission added evaluator for: endpoints
	I0318 13:10:57.141396    2404 command_runner.go:130] ! I0318 13:09:49.821460       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0318 13:10:57.141396    2404 command_runner.go:130] ! I0318 13:09:51.622752       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0318 13:10:57.141396    2404 command_runner.go:130] ! I0318 13:09:51.799195       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0318 13:10:57.141396    2404 command_runner.go:130] ! I0318 13:09:51.812022       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0318 13:10:57.141396    2404 command_runner.go:130] ! I0318 13:09:51.930541       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0318 13:10:57.141396    2404 command_runner.go:130] ! I0318 13:09:51.942099       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0318 13:10:57.147976    2404 logs.go:123] Gathering logs for kube-scheduler [66ee8be9fada] ...
	I0318 13:10:57.148047    2404 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66ee8be9fada"
	I0318 13:10:57.171495    2404 command_runner.go:130] ! I0318 13:09:45.699415       1 serving.go:348] Generated self-signed cert in-memory
	I0318 13:10:57.171495    2404 command_runner.go:130] ! W0318 13:09:48.342100       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	I0318 13:10:57.171495    2404 command_runner.go:130] ! W0318 13:09:48.342243       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	I0318 13:10:57.171495    2404 command_runner.go:130] ! W0318 13:09:48.342324       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	I0318 13:10:57.171495    2404 command_runner.go:130] ! W0318 13:09:48.342374       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0318 13:10:57.171495    2404 command_runner.go:130] ! I0318 13:09:48.402495       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.4"
	I0318 13:10:57.171495    2404 command_runner.go:130] ! I0318 13:09:48.402540       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 13:10:57.171495    2404 command_runner.go:130] ! I0318 13:09:48.407228       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0318 13:10:57.171495    2404 command_runner.go:130] ! I0318 13:09:48.409117       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0318 13:10:57.171495    2404 command_runner.go:130] ! I0318 13:09:48.410197       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0318 13:10:57.171495    2404 command_runner.go:130] ! I0318 13:09:48.410738       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0318 13:10:57.171495    2404 command_runner.go:130] ! I0318 13:09:48.510577       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0318 13:10:57.173502    2404 logs.go:123] Gathering logs for kindnet [c8e5ec25e910] ...
	I0318 13:10:57.173502    2404 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8e5ec25e910"
	I0318 13:10:57.204310    2404 command_runner.go:130] ! I0318 13:09:50.858529       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0318 13:10:57.204671    2404 command_runner.go:130] ! I0318 13:09:50.859271       1 main.go:107] hostIP = 172.30.130.156
	I0318 13:10:57.204671    2404 command_runner.go:130] ! podIP = 172.30.130.156
	I0318 13:10:57.204671    2404 command_runner.go:130] ! I0318 13:09:50.860380       1 main.go:116] setting mtu 1500 for CNI 
	I0318 13:10:57.204671    2404 command_runner.go:130] ! I0318 13:09:50.930132       1 main.go:146] kindnetd IP family: "ipv4"
	I0318 13:10:57.204671    2404 command_runner.go:130] ! I0318 13:09:50.933463       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0318 13:10:57.204671    2404 command_runner.go:130] ! I0318 13:10:21.283853       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: i/o timeout
	I0318 13:10:57.204767    2404 command_runner.go:130] ! I0318 13:10:21.335833       1 main.go:223] Handling node with IPs: map[172.30.130.156:{}]
	I0318 13:10:57.204767    2404 command_runner.go:130] ! I0318 13:10:21.335942       1 main.go:227] handling current node
	I0318 13:10:57.204818    2404 command_runner.go:130] ! I0318 13:10:21.336264       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:10:57.204818    2404 command_runner.go:130] ! I0318 13:10:21.336361       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:10:57.204857    2404 command_runner.go:130] ! I0318 13:10:21.336527       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 172.30.140.66 Flags: [] Table: 0} 
	I0318 13:10:57.204857    2404 command_runner.go:130] ! I0318 13:10:21.336670       1 main.go:223] Handling node with IPs: map[172.30.137.140:{}]
	I0318 13:10:57.204857    2404 command_runner.go:130] ! I0318 13:10:21.336680       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.3.0/24] 
	I0318 13:10:57.204930    2404 command_runner.go:130] ! I0318 13:10:21.336727       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 172.30.137.140 Flags: [] Table: 0} 
	I0318 13:10:57.205005    2404 command_runner.go:130] ! I0318 13:10:31.343996       1 main.go:223] Handling node with IPs: map[172.30.130.156:{}]
	I0318 13:10:57.205005    2404 command_runner.go:130] ! I0318 13:10:31.344324       1 main.go:227] handling current node
	I0318 13:10:57.205005    2404 command_runner.go:130] ! I0318 13:10:31.344341       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:10:57.205050    2404 command_runner.go:130] ! I0318 13:10:31.344682       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:10:57.205050    2404 command_runner.go:130] ! I0318 13:10:31.345062       1 main.go:223] Handling node with IPs: map[172.30.137.140:{}]
	I0318 13:10:57.205050    2404 command_runner.go:130] ! I0318 13:10:31.345087       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.3.0/24] 
	I0318 13:10:57.205050    2404 command_runner.go:130] ! I0318 13:10:41.357494       1 main.go:223] Handling node with IPs: map[172.30.130.156:{}]
	I0318 13:10:57.205107    2404 command_runner.go:130] ! I0318 13:10:41.357586       1 main.go:227] handling current node
	I0318 13:10:57.205107    2404 command_runner.go:130] ! I0318 13:10:41.357599       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:10:57.205107    2404 command_runner.go:130] ! I0318 13:10:41.357606       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:10:57.205107    2404 command_runner.go:130] ! I0318 13:10:41.357708       1 main.go:223] Handling node with IPs: map[172.30.137.140:{}]
	I0318 13:10:57.205187    2404 command_runner.go:130] ! I0318 13:10:41.357932       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.3.0/24] 
	I0318 13:10:57.205187    2404 command_runner.go:130] ! I0318 13:10:51.367560       1 main.go:223] Handling node with IPs: map[172.30.130.156:{}]
	I0318 13:10:57.205187    2404 command_runner.go:130] ! I0318 13:10:51.367661       1 main.go:227] handling current node
	I0318 13:10:57.205187    2404 command_runner.go:130] ! I0318 13:10:51.367675       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:10:57.205243    2404 command_runner.go:130] ! I0318 13:10:51.367684       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:10:57.205243    2404 command_runner.go:130] ! I0318 13:10:51.367956       1 main.go:223] Handling node with IPs: map[172.30.137.140:{}]
	I0318 13:10:57.205243    2404 command_runner.go:130] ! I0318 13:10:51.368281       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.3.0/24] 
	I0318 13:10:57.207559    2404 logs.go:123] Gathering logs for container status ...
	I0318 13:10:57.207559    2404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:10:57.315032    2404 command_runner.go:130] > CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	I0318 13:10:57.315032    2404 command_runner.go:130] > c5d2074be239f       8c811b4aec35f                                                                                         4 seconds ago        Running             busybox                   1                   e20878b8092c2       busybox-5b5d89c9d6-c2997
	I0318 13:10:57.315032    2404 command_runner.go:130] > 3c3bc988c74cd       ead0a4a53df89                                                                                         4 seconds ago        Running             coredns                   1                   97583cc14f115       coredns-5dd5756b68-456tm
	I0318 13:10:57.315032    2404 command_runner.go:130] > eadcf41dad509       6e38f40d628db                                                                                         22 seconds ago       Running             storage-provisioner       2                   41035eff3b7db       storage-provisioner
	I0318 13:10:57.315032    2404 command_runner.go:130] > c8e5ec25e910e       4950bb10b3f87                                                                                         About a minute ago   Running             kindnet-cni               1                   86d74dec812cf       kindnet-hhsxh
	I0318 13:10:57.315032    2404 command_runner.go:130] > 46c0cf90d385f       6e38f40d628db                                                                                         About a minute ago   Exited              storage-provisioner       1                   41035eff3b7db       storage-provisioner
	I0318 13:10:57.315032    2404 command_runner.go:130] > 163ccabc3882a       83f6cc407eed8                                                                                         About a minute ago   Running             kube-proxy                1                   a9f21749669fe       kube-proxy-mc5tv
	I0318 13:10:57.315348    2404 command_runner.go:130] > 5f0887d1e6913       73deb9a3f7025                                                                                         About a minute ago   Running             etcd                      0                   354f3c44a34fc       etcd-multinode-894400
	I0318 13:10:57.315348    2404 command_runner.go:130] > 66ee8be9fada7       e3db313c6dbc0                                                                                         About a minute ago   Running             kube-scheduler            1                   6fb3325d3c100       kube-scheduler-multinode-894400
	I0318 13:10:57.315413    2404 command_runner.go:130] > fc4430c7fa204       7fe0e6f37db33                                                                                         About a minute ago   Running             kube-apiserver            0                   bc7236a19957e       kube-apiserver-multinode-894400
	I0318 13:10:57.315472    2404 command_runner.go:130] > 4ad6784a187d6       d058aa5ab969c                                                                                         About a minute ago   Running             kube-controller-manager   1                   066206d4c52cb       kube-controller-manager-multinode-894400
	I0318 13:10:57.315515    2404 command_runner.go:130] > dd031b5cb1e85       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   19 minutes ago       Exited              busybox                   0                   a23c1189be7c3       busybox-5b5d89c9d6-c2997
	I0318 13:10:57.315548    2404 command_runner.go:130] > 693a64f7472fd       ead0a4a53df89                                                                                         23 minutes ago       Exited              coredns                   0                   d001e299e996b       coredns-5dd5756b68-456tm
	I0318 13:10:57.315548    2404 command_runner.go:130] > c4d7018ad23a7       kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988              23 minutes ago       Exited              kindnet-cni               0                   a47b1fb60692c       kindnet-hhsxh
	I0318 13:10:57.315602    2404 command_runner.go:130] > 9335855aab63d       83f6cc407eed8                                                                                         23 minutes ago       Exited              kube-proxy                0                   60e9cd749c8f6       kube-proxy-mc5tv
	I0318 13:10:57.315602    2404 command_runner.go:130] > e4d42739ce0e9       e3db313c6dbc0                                                                                         23 minutes ago       Exited              kube-scheduler            0                   82710777e700c       kube-scheduler-multinode-894400
	I0318 13:10:57.315710    2404 command_runner.go:130] > 7aa5cf4ec378e       d058aa5ab969c                                                                                         23 minutes ago       Exited              kube-controller-manager   0                   5485f509825d9       kube-controller-manager-multinode-894400
	I0318 13:10:57.317870    2404 logs.go:123] Gathering logs for coredns [693a64f7472f] ...
	I0318 13:10:57.317915    2404 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 693a64f7472f"
	I0318 13:10:57.351410    2404 command_runner.go:130] > .:53
	I0318 13:10:57.351951    2404 command_runner.go:130] > [INFO] plugin/reload: Running configuration SHA512 = fa2b120b3007aba3ef70c07d73473d14986404bfc5683cceeb89e8950d5ace894afca7d43365324975f99d1b2da3d6473c722fe82795d39a4ee8b4b969b7aa71
	I0318 13:10:57.351951    2404 command_runner.go:130] > CoreDNS-1.10.1
	I0318 13:10:57.351951    2404 command_runner.go:130] > linux/amd64, go1.20, 055b2c3
	I0318 13:10:57.352039    2404 command_runner.go:130] > [INFO] 127.0.0.1:33426 - 38858 "HINFO IN 7345450223813584863.4065419873971828575. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.030234917s
	I0318 13:10:57.352097    2404 command_runner.go:130] > [INFO] 10.244.1.2:56777 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000311303s
	I0318 13:10:57.352097    2404 command_runner.go:130] > [INFO] 10.244.1.2:58024 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.098073876s
	I0318 13:10:57.352097    2404 command_runner.go:130] > [INFO] 10.244.1.2:57941 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd 60 0.154978742s
	I0318 13:10:57.352163    2404 command_runner.go:130] > [INFO] 10.244.1.2:42576 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 1.156414777s
	I0318 13:10:57.352163    2404 command_runner.go:130] > [INFO] 10.244.0.3:43391 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000152802s
	I0318 13:10:57.352199    2404 command_runner.go:130] > [INFO] 10.244.0.3:52523 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 140 0.000121101s
	I0318 13:10:57.352199    2404 command_runner.go:130] > [INFO] 10.244.0.3:36187 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd 60 0.000058401s
	I0318 13:10:57.352243    2404 command_runner.go:130] > [INFO] 10.244.0.3:33451 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,aa,rd,ra 140 0.000055s
	I0318 13:10:57.352243    2404 command_runner.go:130] > [INFO] 10.244.1.2:42180 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000097901s
	I0318 13:10:57.352337    2404 command_runner.go:130] > [INFO] 10.244.1.2:60616 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.142731308s
	I0318 13:10:57.352337    2404 command_runner.go:130] > [INFO] 10.244.1.2:45190 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000152502s
	I0318 13:10:57.352337    2404 command_runner.go:130] > [INFO] 10.244.1.2:55984 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000150102s
	I0318 13:10:57.352337    2404 command_runner.go:130] > [INFO] 10.244.1.2:47725 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.037970075s
	I0318 13:10:57.352411    2404 command_runner.go:130] > [INFO] 10.244.1.2:55620 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000104901s
	I0318 13:10:57.352411    2404 command_runner.go:130] > [INFO] 10.244.1.2:60349 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000189802s
	I0318 13:10:57.352480    2404 command_runner.go:130] > [INFO] 10.244.1.2:44081 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000089501s
	I0318 13:10:57.352480    2404 command_runner.go:130] > [INFO] 10.244.0.3:52580 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000182502s
	I0318 13:10:57.352480    2404 command_runner.go:130] > [INFO] 10.244.0.3:60982 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.0000727s
	I0318 13:10:57.352480    2404 command_runner.go:130] > [INFO] 10.244.0.3:53685 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000081s
	I0318 13:10:57.352480    2404 command_runner.go:130] > [INFO] 10.244.0.3:38117 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000127701s
	I0318 13:10:57.352559    2404 command_runner.go:130] > [INFO] 10.244.0.3:38455 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000117101s
	I0318 13:10:57.352559    2404 command_runner.go:130] > [INFO] 10.244.0.3:50629 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000121702s
	I0318 13:10:57.352559    2404 command_runner.go:130] > [INFO] 10.244.0.3:33301 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0000487s
	I0318 13:10:57.352559    2404 command_runner.go:130] > [INFO] 10.244.0.3:38091 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000138402s
	I0318 13:10:57.352559    2404 command_runner.go:130] > [INFO] 10.244.1.2:43364 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000192902s
	I0318 13:10:57.352559    2404 command_runner.go:130] > [INFO] 10.244.1.2:42609 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000060701s
	I0318 13:10:57.352559    2404 command_runner.go:130] > [INFO] 10.244.1.2:36443 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000051301s
	I0318 13:10:57.352660    2404 command_runner.go:130] > [INFO] 10.244.1.2:56414 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0000526s
	I0318 13:10:57.352660    2404 command_runner.go:130] > [INFO] 10.244.0.3:50774 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000137201s
	I0318 13:10:57.352701    2404 command_runner.go:130] > [INFO] 10.244.0.3:43237 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000196902s
	I0318 13:10:57.352728    2404 command_runner.go:130] > [INFO] 10.244.0.3:38831 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000059901s
	I0318 13:10:57.352770    2404 command_runner.go:130] > [INFO] 10.244.0.3:56163 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000122801s
	I0318 13:10:57.352770    2404 command_runner.go:130] > [INFO] 10.244.1.2:58305 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000209602s
	I0318 13:10:57.352817    2404 command_runner.go:130] > [INFO] 10.244.1.2:58291 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000151202s
	I0318 13:10:57.352839    2404 command_runner.go:130] > [INFO] 10.244.1.2:33227 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000184302s
	I0318 13:10:57.352871    2404 command_runner.go:130] > [INFO] 10.244.1.2:58179 - 5 "PTR IN 1.128.30.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000152102s
	I0318 13:10:57.352871    2404 command_runner.go:130] > [INFO] 10.244.0.3:46943 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000104101s
	I0318 13:10:57.352903    2404 command_runner.go:130] > [INFO] 10.244.0.3:58018 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000107001s
	I0318 13:10:57.352948    2404 command_runner.go:130] > [INFO] 10.244.0.3:35353 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000119601s
	I0318 13:10:57.352948    2404 command_runner.go:130] > [INFO] 10.244.0.3:58763 - 5 "PTR IN 1.128.30.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000075701s
	I0318 13:10:57.353040    2404 command_runner.go:130] > [INFO] SIGTERM: Shutting down servers then terminating
	I0318 13:10:57.353040    2404 command_runner.go:130] > [INFO] plugin/health: Going into lameduck mode for 5s
	I0318 13:10:57.355372    2404 logs.go:123] Gathering logs for coredns [3c3bc988c74c] ...
	I0318 13:10:57.355372    2404 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c3bc988c74c"
	I0318 13:10:57.384552    2404 command_runner.go:130] > .:53
	I0318 13:10:57.384552    2404 command_runner.go:130] > [INFO] plugin/reload: Running configuration SHA512 = fa2b120b3007aba3ef70c07d73473d14986404bfc5683cceeb89e8950d5ace894afca7d43365324975f99d1b2da3d6473c722fe82795d39a4ee8b4b969b7aa71
	I0318 13:10:57.384552    2404 command_runner.go:130] > CoreDNS-1.10.1
	I0318 13:10:57.384552    2404 command_runner.go:130] > linux/amd64, go1.20, 055b2c3
	I0318 13:10:57.384552    2404 command_runner.go:130] > [INFO] 127.0.0.1:47251 - 801 "HINFO IN 2968659138506762197.6766024496084331989. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.051583557s
	I0318 13:10:57.385564    2404 logs.go:123] Gathering logs for kube-scheduler [e4d42739ce0e] ...
	I0318 13:10:57.385564    2404 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4d42739ce0e"
	I0318 13:10:57.410550    2404 command_runner.go:130] ! I0318 12:47:23.427784       1 serving.go:348] Generated self-signed cert in-memory
	I0318 13:10:57.411296    2404 command_runner.go:130] ! W0318 12:47:24.381993       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	I0318 13:10:57.411425    2404 command_runner.go:130] ! W0318 12:47:24.382186       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	I0318 13:10:57.411461    2404 command_runner.go:130] ! W0318 12:47:24.382237       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	I0318 13:10:57.411521    2404 command_runner.go:130] ! W0318 12:47:24.382251       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0318 13:10:57.411773    2404 command_runner.go:130] ! I0318 12:47:24.461225       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.4"
	I0318 13:10:57.411773    2404 command_runner.go:130] ! I0318 12:47:24.461511       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 13:10:57.411773    2404 command_runner.go:130] ! I0318 12:47:24.465946       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0318 13:10:57.411773    2404 command_runner.go:130] ! I0318 12:47:24.466246       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0318 13:10:57.411773    2404 command_runner.go:130] ! I0318 12:47:24.466280       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0318 13:10:57.411773    2404 command_runner.go:130] ! I0318 12:47:24.473793       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0318 13:10:57.411773    2404 command_runner.go:130] ! W0318 12:47:24.487135       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0318 13:10:57.411773    2404 command_runner.go:130] ! E0318 12:47:24.487240       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0318 13:10:57.411773    2404 command_runner.go:130] ! W0318 12:47:24.519325       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0318 13:10:57.411773    2404 command_runner.go:130] ! E0318 12:47:24.519853       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0318 13:10:57.411773    2404 command_runner.go:130] ! W0318 12:47:24.520361       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0318 13:10:57.411773    2404 command_runner.go:130] ! E0318 12:47:24.520484       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0318 13:10:57.411773    2404 command_runner.go:130] ! W0318 12:47:24.520711       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0318 13:10:57.411773    2404 command_runner.go:130] ! E0318 12:47:24.522735       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0318 13:10:57.412311    2404 command_runner.go:130] ! W0318 12:47:24.523312       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0318 13:10:57.412311    2404 command_runner.go:130] ! E0318 12:47:24.523462       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0318 13:10:57.412311    2404 command_runner.go:130] ! W0318 12:47:24.523710       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0318 13:10:57.412570    2404 command_runner.go:130] ! E0318 12:47:24.523900       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0318 13:10:57.412570    2404 command_runner.go:130] ! W0318 12:47:24.524226       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0318 13:10:57.412652    2404 command_runner.go:130] ! E0318 12:47:24.524422       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0318 13:10:57.412685    2404 command_runner.go:130] ! W0318 12:47:24.524710       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0318 13:10:57.412685    2404 command_runner.go:130] ! E0318 12:47:24.525125       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0318 13:10:57.412731    2404 command_runner.go:130] ! W0318 12:47:24.525523       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0318 13:10:57.412769    2404 command_runner.go:130] ! E0318 12:47:24.525746       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0318 13:10:57.412833    2404 command_runner.go:130] ! W0318 12:47:24.526240       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0318 13:10:57.412866    2404 command_runner.go:130] ! E0318 12:47:24.526443       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0318 13:10:57.412866    2404 command_runner.go:130] ! W0318 12:47:24.526703       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0318 13:10:57.412926    2404 command_runner.go:130] ! E0318 12:47:24.526852       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0318 13:10:57.412926    2404 command_runner.go:130] ! W0318 12:47:24.527382       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0318 13:10:57.412926    2404 command_runner.go:130] ! E0318 12:47:24.527873       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0318 13:10:57.412926    2404 command_runner.go:130] ! W0318 12:47:24.528117       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0318 13:10:57.412926    2404 command_runner.go:130] ! E0318 12:47:24.528748       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0318 13:10:57.412926    2404 command_runner.go:130] ! W0318 12:47:24.529179       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0318 13:10:57.412926    2404 command_runner.go:130] ! E0318 12:47:24.529832       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0318 13:10:57.412926    2404 command_runner.go:130] ! W0318 12:47:24.530406       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0318 13:10:57.412926    2404 command_runner.go:130] ! E0318 12:47:24.532696       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0318 13:10:57.412926    2404 command_runner.go:130] ! W0318 12:47:25.371082       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0318 13:10:57.412926    2404 command_runner.go:130] ! E0318 12:47:25.371130       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0318 13:10:57.412926    2404 command_runner.go:130] ! W0318 12:47:25.374605       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0318 13:10:57.412926    2404 command_runner.go:130] ! E0318 12:47:25.374678       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0318 13:10:57.412926    2404 command_runner.go:130] ! W0318 12:47:25.400777       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0318 13:10:57.412926    2404 command_runner.go:130] ! E0318 12:47:25.400820       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0318 13:10:57.412926    2404 command_runner.go:130] ! W0318 12:47:25.434442       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0318 13:10:57.413458    2404 command_runner.go:130] ! E0318 12:47:25.434526       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0318 13:10:57.413458    2404 command_runner.go:130] ! W0318 12:47:25.456878       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0318 13:10:57.413458    2404 command_runner.go:130] ! E0318 12:47:25.457121       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0318 13:10:57.413678    2404 command_runner.go:130] ! W0318 12:47:25.744652       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0318 13:10:57.413678    2404 command_runner.go:130] ! E0318 12:47:25.744733       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0318 13:10:57.413753    2404 command_runner.go:130] ! W0318 12:47:25.777073       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0318 13:10:57.413836    2404 command_runner.go:130] ! E0318 12:47:25.777145       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0318 13:10:57.413892    2404 command_runner.go:130] ! W0318 12:47:25.850949       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0318 13:10:57.413952    2404 command_runner.go:130] ! E0318 12:47:25.850985       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0318 13:10:57.413952    2404 command_runner.go:130] ! W0318 12:47:25.876908       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0318 13:10:57.413952    2404 command_runner.go:130] ! E0318 12:47:25.877170       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0318 13:10:57.413952    2404 command_runner.go:130] ! W0318 12:47:25.892072       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0318 13:10:57.413952    2404 command_runner.go:130] ! E0318 12:47:25.892099       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0318 13:10:57.413952    2404 command_runner.go:130] ! W0318 12:47:25.988864       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0318 13:10:57.413952    2404 command_runner.go:130] ! E0318 12:47:25.988912       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0318 13:10:57.413952    2404 command_runner.go:130] ! W0318 12:47:26.044749       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0318 13:10:57.413952    2404 command_runner.go:130] ! E0318 12:47:26.044834       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0318 13:10:57.413952    2404 command_runner.go:130] ! W0318 12:47:26.067659       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0318 13:10:57.413952    2404 command_runner.go:130] ! E0318 12:47:26.068250       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0318 13:10:57.413952    2404 command_runner.go:130] ! I0318 12:47:28.178584       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0318 13:10:57.413952    2404 command_runner.go:130] ! I0318 13:07:24.107367       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0318 13:10:57.414493    2404 command_runner.go:130] ! I0318 13:07:24.107975       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0318 13:10:57.414493    2404 command_runner.go:130] ! E0318 13:07:24.108193       1 run.go:74] "command failed" err="finished without leader elect"
	I0318 13:10:57.424509    2404 logs.go:123] Gathering logs for kube-proxy [9335855aab63] ...
	I0318 13:10:57.424509    2404 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9335855aab63"
	I0318 13:10:57.451123    2404 command_runner.go:130] ! I0318 12:47:42.888603       1 server_others.go:69] "Using iptables proxy"
	I0318 13:10:57.451987    2404 command_runner.go:130] ! I0318 12:47:42.909658       1 node.go:141] Successfully retrieved node IP: 172.30.129.141
	I0318 13:10:57.451987    2404 command_runner.go:130] ! I0318 12:47:42.965774       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0318 13:10:57.451987    2404 command_runner.go:130] ! I0318 12:47:42.965824       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0318 13:10:57.452077    2404 command_runner.go:130] ! I0318 12:47:42.983172       1 server_others.go:152] "Using iptables Proxier"
	I0318 13:10:57.452149    2404 command_runner.go:130] ! I0318 12:47:42.983221       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0318 13:10:57.452149    2404 command_runner.go:130] ! I0318 12:47:42.983471       1 server.go:846] "Version info" version="v1.28.4"
	I0318 13:10:57.452149    2404 command_runner.go:130] ! I0318 12:47:42.983484       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 13:10:57.452149    2404 command_runner.go:130] ! I0318 12:47:42.987719       1 config.go:188] "Starting service config controller"
	I0318 13:10:57.452149    2404 command_runner.go:130] ! I0318 12:47:42.987733       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0318 13:10:57.452245    2404 command_runner.go:130] ! I0318 12:47:42.987775       1 config.go:97] "Starting endpoint slice config controller"
	I0318 13:10:57.452245    2404 command_runner.go:130] ! I0318 12:47:42.987781       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0318 13:10:57.452519    2404 command_runner.go:130] ! I0318 12:47:42.988298       1 config.go:315] "Starting node config controller"
	I0318 13:10:57.452519    2404 command_runner.go:130] ! I0318 12:47:42.988306       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0318 13:10:57.452519    2404 command_runner.go:130] ! I0318 12:47:43.088485       1 shared_informer.go:318] Caches are synced for service config
	I0318 13:10:57.452519    2404 command_runner.go:130] ! I0318 12:47:43.088594       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0318 13:10:57.452519    2404 command_runner.go:130] ! I0318 12:47:43.088517       1 shared_informer.go:318] Caches are synced for node config
	I0318 13:10:57.454115    2404 logs.go:123] Gathering logs for kube-controller-manager [4ad6784a187d] ...
	I0318 13:10:57.454115    2404 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ad6784a187d"
	I0318 13:10:57.480580    2404 command_runner.go:130] ! I0318 13:09:46.053304       1 serving.go:348] Generated self-signed cert in-memory
	I0318 13:10:57.480580    2404 command_runner.go:130] ! I0318 13:09:46.598188       1 controllermanager.go:189] "Starting" version="v1.28.4"
	I0318 13:10:57.480642    2404 command_runner.go:130] ! I0318 13:09:46.598275       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 13:10:57.480642    2404 command_runner.go:130] ! I0318 13:09:46.600550       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0318 13:10:57.480642    2404 command_runner.go:130] ! I0318 13:09:46.600856       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0318 13:10:57.480642    2404 command_runner.go:130] ! I0318 13:09:46.601228       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0318 13:10:57.480642    2404 command_runner.go:130] ! I0318 13:09:46.601416       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0318 13:10:57.480642    2404 command_runner.go:130] ! I0318 13:09:50.365580       1 shared_informer.go:311] Waiting for caches to sync for tokens
	I0318 13:10:57.480642    2404 command_runner.go:130] ! I0318 13:09:50.380467       1 controllermanager.go:642] "Started controller" controller="certificatesigningrequest-approving-controller"
	I0318 13:10:57.480642    2404 command_runner.go:130] ! I0318 13:09:50.380609       1 certificate_controller.go:115] "Starting certificate controller" name="csrapproving"
	I0318 13:10:57.480642    2404 command_runner.go:130] ! I0318 13:09:50.380622       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrapproving
	I0318 13:10:57.480642    2404 command_runner.go:130] ! I0318 13:09:50.396606       1 controllermanager.go:642] "Started controller" controller="certificatesigningrequest-cleaner-controller"
	I0318 13:10:57.480642    2404 command_runner.go:130] ! I0318 13:09:50.396766       1 cleaner.go:83] "Starting CSR cleaner controller"
	I0318 13:10:57.480642    2404 command_runner.go:130] ! I0318 13:09:50.466364       1 shared_informer.go:318] Caches are synced for tokens
	I0318 13:10:57.480642    2404 command_runner.go:130] ! I0318 13:10:00.425018       1 range_allocator.go:111] "No Secondary Service CIDR provided. Skipping filtering out secondary service addresses"
	I0318 13:10:57.480642    2404 command_runner.go:130] ! I0318 13:10:00.425185       1 controllermanager.go:642] "Started controller" controller="node-ipam-controller"
	I0318 13:10:57.480642    2404 command_runner.go:130] ! I0318 13:10:00.425608       1 node_ipam_controller.go:162] "Starting ipam controller"
	I0318 13:10:57.480642    2404 command_runner.go:130] ! I0318 13:10:00.425649       1 shared_informer.go:311] Waiting for caches to sync for node
	I0318 13:10:57.480642    2404 command_runner.go:130] ! I0318 13:10:00.429368       1 controllermanager.go:642] "Started controller" controller="root-ca-certificate-publisher-controller"
	I0318 13:10:57.480642    2404 command_runner.go:130] ! I0318 13:10:00.429570       1 publisher.go:102] "Starting root CA cert publisher controller"
	I0318 13:10:57.480642    2404 command_runner.go:130] ! I0318 13:10:00.429653       1 shared_informer.go:311] Waiting for caches to sync for crt configmap
	I0318 13:10:57.480642    2404 command_runner.go:130] ! I0318 13:10:00.432615       1 controllermanager.go:642] "Started controller" controller="endpointslice-mirroring-controller"
	I0318 13:10:57.480642    2404 command_runner.go:130] ! I0318 13:10:00.435149       1 endpointslicemirroring_controller.go:223] "Starting EndpointSliceMirroring controller"
	I0318 13:10:57.480642    2404 command_runner.go:130] ! I0318 13:10:00.435476       1 shared_informer.go:311] Waiting for caches to sync for endpoint_slice_mirroring
	I0318 13:10:57.480642    2404 command_runner.go:130] ! I0318 13:10:00.435957       1 controllermanager.go:642] "Started controller" controller="ttl-controller"
	I0318 13:10:57.480642    2404 command_runner.go:130] ! I0318 13:10:00.436324       1 ttl_controller.go:124] "Starting TTL controller"
	I0318 13:10:57.480642    2404 command_runner.go:130] ! I0318 13:10:00.436534       1 shared_informer.go:311] Waiting for caches to sync for TTL
	I0318 13:10:57.480642    2404 command_runner.go:130] ! E0318 13:10:00.440226       1 core.go:92] "Failed to start service controller" err="WARNING: no cloud provider provided, services of type LoadBalancer will fail"
	I0318 13:10:57.480642    2404 command_runner.go:130] ! I0318 13:10:00.440586       1 controllermanager.go:620] "Warning: skipping controller" controller="service-lb-controller"
	I0318 13:10:57.480642    2404 command_runner.go:130] ! E0318 13:10:00.443615       1 core.go:213] "Failed to start cloud node lifecycle controller" err="no cloud provider provided"
	I0318 13:10:57.480642    2404 command_runner.go:130] ! I0318 13:10:00.443912       1 controllermanager.go:620] "Warning: skipping controller" controller="cloud-node-lifecycle-controller"
	I0318 13:10:57.481205    2404 command_runner.go:130] ! I0318 13:10:00.446716       1 controllermanager.go:642] "Started controller" controller="ttl-after-finished-controller"
	I0318 13:10:57.481205    2404 command_runner.go:130] ! I0318 13:10:00.446764       1 ttlafterfinished_controller.go:109] "Starting TTL after finished controller"
	I0318 13:10:57.481205    2404 command_runner.go:130] ! I0318 13:10:00.447388       1 shared_informer.go:311] Waiting for caches to sync for TTL after finished
	I0318 13:10:57.481282    2404 command_runner.go:130] ! I0318 13:10:00.450136       1 controllermanager.go:642] "Started controller" controller="replicationcontroller-controller"
	I0318 13:10:57.481282    2404 command_runner.go:130] ! I0318 13:10:00.450514       1 replica_set.go:214] "Starting controller" name="replicationcontroller"
	I0318 13:10:57.481282    2404 command_runner.go:130] ! I0318 13:10:00.450816       1 shared_informer.go:311] Waiting for caches to sync for ReplicationController
	I0318 13:10:57.481353    2404 command_runner.go:130] ! I0318 13:10:00.482128       1 controllermanager.go:642] "Started controller" controller="namespace-controller"
	I0318 13:10:57.481420    2404 command_runner.go:130] ! I0318 13:10:00.482431       1 namespace_controller.go:197] "Starting namespace controller"
	I0318 13:10:57.481420    2404 command_runner.go:130] ! I0318 13:10:00.482564       1 shared_informer.go:311] Waiting for caches to sync for namespace
	I0318 13:10:57.481420    2404 command_runner.go:130] ! I0318 13:10:00.485138       1 controllermanager.go:642] "Started controller" controller="token-cleaner-controller"
	I0318 13:10:57.481452    2404 command_runner.go:130] ! I0318 13:10:00.485477       1 tokencleaner.go:112] "Starting token cleaner controller"
	I0318 13:10:57.481452    2404 command_runner.go:130] ! I0318 13:10:00.485637       1 shared_informer.go:311] Waiting for caches to sync for token_cleaner
	I0318 13:10:57.481452    2404 command_runner.go:130] ! I0318 13:10:00.485765       1 shared_informer.go:318] Caches are synced for token_cleaner
	I0318 13:10:57.481493    2404 command_runner.go:130] ! I0318 13:10:00.487736       1 controllermanager.go:642] "Started controller" controller="pod-garbage-collector-controller"
	I0318 13:10:57.481493    2404 command_runner.go:130] ! I0318 13:10:00.488836       1 gc_controller.go:101] "Starting GC controller"
	I0318 13:10:57.481525    2404 command_runner.go:130] ! I0318 13:10:00.489018       1 shared_informer.go:311] Waiting for caches to sync for GC
	I0318 13:10:57.481525    2404 command_runner.go:130] ! I0318 13:10:00.490586       1 controllermanager.go:642] "Started controller" controller="cronjob-controller"
	I0318 13:10:57.481525    2404 command_runner.go:130] ! I0318 13:10:00.491164       1 cronjob_controllerv2.go:139] "Starting cronjob controller v2"
	I0318 13:10:57.481525    2404 command_runner.go:130] ! I0318 13:10:00.491311       1 shared_informer.go:311] Waiting for caches to sync for cronjob
	I0318 13:10:57.481576    2404 command_runner.go:130] ! I0318 13:10:00.494562       1 controllermanager.go:642] "Started controller" controller="persistentvolume-binder-controller"
	I0318 13:10:57.481576    2404 command_runner.go:130] ! I0318 13:10:00.495002       1 pv_controller_base.go:319] "Starting persistent volume controller"
	I0318 13:10:57.481576    2404 command_runner.go:130] ! I0318 13:10:00.495133       1 shared_informer.go:311] Waiting for caches to sync for persistent volume
	I0318 13:10:57.481627    2404 command_runner.go:130] ! I0318 13:10:00.497694       1 controllermanager.go:642] "Started controller" controller="persistentvolume-expander-controller"
	I0318 13:10:57.481627    2404 command_runner.go:130] ! I0318 13:10:00.497986       1 expand_controller.go:328] "Starting expand controller"
	I0318 13:10:57.481627    2404 command_runner.go:130] ! I0318 13:10:00.498025       1 shared_informer.go:311] Waiting for caches to sync for expand
	I0318 13:10:57.481627    2404 command_runner.go:130] ! I0318 13:10:00.500933       1 controllermanager.go:642] "Started controller" controller="clusterrole-aggregation-controller"
	I0318 13:10:57.481682    2404 command_runner.go:130] ! I0318 13:10:00.502880       1 clusterroleaggregation_controller.go:189] "Starting ClusterRoleAggregator controller"
	I0318 13:10:57.481682    2404 command_runner.go:130] ! I0318 13:10:00.503102       1 shared_informer.go:311] Waiting for caches to sync for ClusterRoleAggregator
	I0318 13:10:57.481682    2404 command_runner.go:130] ! I0318 13:10:00.506760       1 controllermanager.go:642] "Started controller" controller="disruption-controller"
	I0318 13:10:57.481682    2404 command_runner.go:130] ! I0318 13:10:00.507227       1 disruption.go:433] "Sending events to api server."
	I0318 13:10:57.481737    2404 command_runner.go:130] ! I0318 13:10:00.507302       1 disruption.go:444] "Starting disruption controller"
	I0318 13:10:57.481737    2404 command_runner.go:130] ! I0318 13:10:00.507366       1 shared_informer.go:311] Waiting for caches to sync for disruption
	I0318 13:10:57.481737    2404 command_runner.go:130] ! I0318 13:10:00.509815       1 controllermanager.go:642] "Started controller" controller="deployment-controller"
	I0318 13:10:57.481737    2404 command_runner.go:130] ! I0318 13:10:00.510402       1 deployment_controller.go:168] "Starting controller" controller="deployment"
	I0318 13:10:57.481793    2404 command_runner.go:130] ! I0318 13:10:00.510478       1 shared_informer.go:311] Waiting for caches to sync for deployment
	I0318 13:10:57.481793    2404 command_runner.go:130] ! I0318 13:10:00.514582       1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-kubelet-serving"
	I0318 13:10:57.481793    2404 command_runner.go:130] ! I0318 13:10:00.514842       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I0318 13:10:57.481843    2404 command_runner.go:130] ! I0318 13:10:00.514832       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0318 13:10:57.481843    2404 command_runner.go:130] ! I0318 13:10:00.517859       1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-kubelet-client"
	I0318 13:10:57.481843    2404 command_runner.go:130] ! I0318 13:10:00.518134       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I0318 13:10:57.481898    2404 command_runner.go:130] ! I0318 13:10:00.518434       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0318 13:10:57.481898    2404 command_runner.go:130] ! I0318 13:10:00.519400       1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-kube-apiserver-client"
	I0318 13:10:57.481898    2404 command_runner.go:130] ! I0318 13:10:00.519576       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I0318 13:10:57.481947    2404 command_runner.go:130] ! I0318 13:10:00.519729       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0318 13:10:57.481997    2404 command_runner.go:130] ! I0318 13:10:00.519883       1 controllermanager.go:642] "Started controller" controller="certificatesigningrequest-signing-controller"
	I0318 13:10:57.481997    2404 command_runner.go:130] ! I0318 13:10:00.519902       1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-legacy-unknown"
	I0318 13:10:57.482047    2404 command_runner.go:130] ! I0318 13:10:00.520909       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0318 13:10:57.482047    2404 command_runner.go:130] ! I0318 13:10:00.519914       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0318 13:10:57.482047    2404 command_runner.go:130] ! I0318 13:10:00.524690       1 controllermanager.go:642] "Started controller" controller="persistentvolume-attach-detach-controller"
	I0318 13:10:57.482102    2404 command_runner.go:130] ! I0318 13:10:00.524967       1 attach_detach_controller.go:337] "Starting attach detach controller"
	I0318 13:10:57.482102    2404 command_runner.go:130] ! I0318 13:10:00.525267       1 shared_informer.go:311] Waiting for caches to sync for attach detach
	I0318 13:10:57.482102    2404 command_runner.go:130] ! I0318 13:10:00.528248       1 controllermanager.go:642] "Started controller" controller="serviceaccount-controller"
	I0318 13:10:57.482152    2404 command_runner.go:130] ! I0318 13:10:00.528509       1 serviceaccounts_controller.go:111] "Starting service account controller"
	I0318 13:10:57.482152    2404 command_runner.go:130] ! I0318 13:10:00.528721       1 shared_informer.go:311] Waiting for caches to sync for service account
	I0318 13:10:57.482152    2404 command_runner.go:130] ! I0318 13:10:00.532254       1 controllermanager.go:642] "Started controller" controller="daemonset-controller"
	I0318 13:10:57.482152    2404 command_runner.go:130] ! I0318 13:10:00.532687       1 daemon_controller.go:291] "Starting daemon sets controller"
	I0318 13:10:57.482206    2404 command_runner.go:130] ! I0318 13:10:00.532717       1 shared_informer.go:311] Waiting for caches to sync for daemon sets
	I0318 13:10:57.482206    2404 command_runner.go:130] ! I0318 13:10:00.544900       1 controllermanager.go:642] "Started controller" controller="horizontal-pod-autoscaler-controller"
	I0318 13:10:57.482206    2404 command_runner.go:130] ! I0318 13:10:00.545135       1 horizontal.go:200] "Starting HPA controller"
	I0318 13:10:57.482206    2404 command_runner.go:130] ! I0318 13:10:00.545195       1 shared_informer.go:311] Waiting for caches to sync for HPA
	I0318 13:10:57.482206    2404 command_runner.go:130] ! I0318 13:10:00.547641       1 controllermanager.go:642] "Started controller" controller="bootstrap-signer-controller"
	I0318 13:10:57.482280    2404 command_runner.go:130] ! I0318 13:10:00.548078       1 shared_informer.go:311] Waiting for caches to sync for bootstrap_signer
	I0318 13:10:57.482280    2404 command_runner.go:130] ! I0318 13:10:00.550784       1 node_lifecycle_controller.go:431] "Controller will reconcile labels"
	I0318 13:10:57.482280    2404 command_runner.go:130] ! I0318 13:10:00.551368       1 controllermanager.go:642] "Started controller" controller="node-lifecycle-controller"
	I0318 13:10:57.482335    2404 command_runner.go:130] ! I0318 13:10:00.551557       1 core.go:228] "Warning: configure-cloud-routes is set, but no cloud provider specified. Will not configure cloud provider routes."
	I0318 13:10:57.482335    2404 command_runner.go:130] ! I0318 13:10:00.551931       1 controllermanager.go:620] "Warning: skipping controller" controller="node-route-controller"
	I0318 13:10:57.482335    2404 command_runner.go:130] ! I0318 13:10:00.551452       1 node_lifecycle_controller.go:465] "Sending events to api server"
	I0318 13:10:57.482410    2404 command_runner.go:130] ! I0318 13:10:00.553190       1 node_lifecycle_controller.go:476] "Starting node controller"
	I0318 13:10:57.482410    2404 command_runner.go:130] ! I0318 13:10:00.553856       1 shared_informer.go:311] Waiting for caches to sync for taint
	I0318 13:10:57.482410    2404 command_runner.go:130] ! I0318 13:10:00.554970       1 controllermanager.go:642] "Started controller" controller="persistentvolumeclaim-protection-controller"
	I0318 13:10:57.482410    2404 command_runner.go:130] ! I0318 13:10:00.555558       1 pvc_protection_controller.go:102] "Starting PVC protection controller"
	I0318 13:10:57.482468    2404 command_runner.go:130] ! I0318 13:10:00.555718       1 shared_informer.go:311] Waiting for caches to sync for PVC protection
	I0318 13:10:57.482468    2404 command_runner.go:130] ! I0318 13:10:00.558545       1 controllermanager.go:642] "Started controller" controller="endpointslice-controller"
	I0318 13:10:57.482468    2404 command_runner.go:130] ! I0318 13:10:00.558805       1 endpointslice_controller.go:264] "Starting endpoint slice controller"
	I0318 13:10:57.482523    2404 command_runner.go:130] ! I0318 13:10:00.558956       1 shared_informer.go:311] Waiting for caches to sync for endpoint_slice
	I0318 13:10:57.482523    2404 command_runner.go:130] ! W0318 13:10:00.765746       1 shared_informer.go:593] resyncPeriod 13h51m37.636447347s is smaller than resyncCheckPeriod 22h45m34.293132222s and the informer has already started. Changing it to 22h45m34.293132222s
	I0318 13:10:57.482561    2404 command_runner.go:130] ! I0318 13:10:00.765905       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="serviceaccounts"
	I0318 13:10:57.482606    2404 command_runner.go:130] ! I0318 13:10:00.766015       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="cronjobs.batch"
	I0318 13:10:57.482606    2404 command_runner.go:130] ! I0318 13:10:00.766141       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="daemonsets.apps"
	I0318 13:10:57.482645    2404 command_runner.go:130] ! I0318 13:10:00.766231       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="jobs.batch"
	I0318 13:10:57.482645    2404 command_runner.go:130] ! I0318 13:10:00.767946       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="networkpolicies.networking.k8s.io"
	I0318 13:10:57.482689    2404 command_runner.go:130] ! I0318 13:10:00.768138       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="poddisruptionbudgets.policy"
	I0318 13:10:57.482689    2404 command_runner.go:130] ! I0318 13:10:00.768175       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="csistoragecapacities.storage.k8s.io"
	I0318 13:10:57.482689    2404 command_runner.go:130] ! I0318 13:10:00.768271       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="limitranges"
	I0318 13:10:57.482753    2404 command_runner.go:130] ! I0318 13:10:00.768411       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="controllerrevisions.apps"
	I0318 13:10:57.482753    2404 command_runner.go:130] ! I0318 13:10:00.768529       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="horizontalpodautoscalers.autoscaling"
	I0318 13:10:57.482811    2404 command_runner.go:130] ! I0318 13:10:00.768565       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="ingresses.networking.k8s.io"
	I0318 13:10:57.482834    2404 command_runner.go:130] ! I0318 13:10:00.768633       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="endpointslices.discovery.k8s.io"
	I0318 13:10:57.482860    2404 command_runner.go:130] ! W0318 13:10:00.768841       1 shared_informer.go:593] resyncPeriod 17h39m7.901162259s is smaller than resyncCheckPeriod 22h45m34.293132222s and the informer has already started. Changing it to 22h45m34.293132222s
	I0318 13:10:57.482860    2404 command_runner.go:130] ! I0318 13:10:00.769020       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="podtemplates"
	I0318 13:10:57.482860    2404 command_runner.go:130] ! I0318 13:10:00.769077       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="replicasets.apps"
	I0318 13:10:57.482860    2404 command_runner.go:130] ! I0318 13:10:00.769115       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="deployments.apps"
	I0318 13:10:57.482860    2404 command_runner.go:130] ! I0318 13:10:00.769206       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="rolebindings.rbac.authorization.k8s.io"
	I0318 13:10:57.482860    2404 command_runner.go:130] ! I0318 13:10:00.769280       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="leases.coordination.k8s.io"
	I0318 13:10:57.482860    2404 command_runner.go:130] ! I0318 13:10:00.769427       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="endpoints"
	I0318 13:10:57.482860    2404 command_runner.go:130] ! I0318 13:10:00.769509       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="statefulsets.apps"
	I0318 13:10:57.482860    2404 command_runner.go:130] ! I0318 13:10:00.769668       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="roles.rbac.authorization.k8s.io"
	I0318 13:10:57.482860    2404 command_runner.go:130] ! I0318 13:10:00.769816       1 resource_quota_controller.go:294] "Starting resource quota controller"
	I0318 13:10:57.482860    2404 command_runner.go:130] ! I0318 13:10:00.769832       1 shared_informer.go:311] Waiting for caches to sync for resource quota
	I0318 13:10:57.482860    2404 command_runner.go:130] ! I0318 13:10:00.769855       1 resource_quota_monitor.go:305] "QuotaMonitor running"
	I0318 13:10:57.482860    2404 command_runner.go:130] ! I0318 13:10:00.769714       1 controllermanager.go:642] "Started controller" controller="resourcequota-controller"
	I0318 13:10:57.482860    2404 command_runner.go:130] ! I0318 13:10:00.906184       1 controllermanager.go:642] "Started controller" controller="garbage-collector-controller"
	I0318 13:10:57.482860    2404 command_runner.go:130] ! I0318 13:10:00.906404       1 garbagecollector.go:155] "Starting controller" controller="garbagecollector"
	I0318 13:10:57.482860    2404 command_runner.go:130] ! I0318 13:10:00.906702       1 shared_informer.go:311] Waiting for caches to sync for garbage collector
	I0318 13:10:57.482860    2404 command_runner.go:130] ! I0318 13:10:00.906740       1 graph_builder.go:294] "Running" component="GraphBuilder"
	I0318 13:10:57.482860    2404 command_runner.go:130] ! I0318 13:10:00.956245       1 controllermanager.go:642] "Started controller" controller="job-controller"
	I0318 13:10:57.482860    2404 command_runner.go:130] ! I0318 13:10:00.956457       1 job_controller.go:226] "Starting job controller"
	I0318 13:10:57.482860    2404 command_runner.go:130] ! I0318 13:10:00.956765       1 shared_informer.go:311] Waiting for caches to sync for job
	I0318 13:10:57.482860    2404 command_runner.go:130] ! I0318 13:10:01.056144       1 controllermanager.go:642] "Started controller" controller="persistentvolume-protection-controller"
	I0318 13:10:57.482860    2404 command_runner.go:130] ! I0318 13:10:01.056251       1 pv_protection_controller.go:78] "Starting PV protection controller"
	I0318 13:10:57.482860    2404 command_runner.go:130] ! I0318 13:10:01.056576       1 shared_informer.go:311] Waiting for caches to sync for PV protection
	I0318 13:10:57.482860    2404 command_runner.go:130] ! I0318 13:10:01.156303       1 controllermanager.go:642] "Started controller" controller="ephemeral-volume-controller"
	I0318 13:10:57.482860    2404 command_runner.go:130] ! I0318 13:10:01.156762       1 controller.go:169] "Starting ephemeral volume controller"
	I0318 13:10:57.482860    2404 command_runner.go:130] ! I0318 13:10:01.156852       1 shared_informer.go:311] Waiting for caches to sync for ephemeral
	I0318 13:10:57.482860    2404 command_runner.go:130] ! I0318 13:10:01.205282       1 controllermanager.go:642] "Started controller" controller="endpoints-controller"
	I0318 13:10:57.482860    2404 command_runner.go:130] ! I0318 13:10:01.205353       1 endpoints_controller.go:174] "Starting endpoint controller"
	I0318 13:10:57.483398    2404 command_runner.go:130] ! I0318 13:10:01.205368       1 shared_informer.go:311] Waiting for caches to sync for endpoint
	I0318 13:10:57.483398    2404 command_runner.go:130] ! I0318 13:10:01.256513       1 controllermanager.go:642] "Started controller" controller="statefulset-controller"
	I0318 13:10:57.483398    2404 command_runner.go:130] ! I0318 13:10:01.256828       1 stateful_set.go:161] "Starting stateful set controller"
	I0318 13:10:57.483466    2404 command_runner.go:130] ! I0318 13:10:01.256867       1 shared_informer.go:311] Waiting for caches to sync for stateful set
	I0318 13:10:57.483466    2404 command_runner.go:130] ! I0318 13:10:01.306581       1 controllermanager.go:642] "Started controller" controller="replicaset-controller"
	I0318 13:10:57.483466    2404 command_runner.go:130] ! I0318 13:10:01.306969       1 replica_set.go:214] "Starting controller" name="replicaset"
	I0318 13:10:57.483511    2404 command_runner.go:130] ! I0318 13:10:01.307156       1 shared_informer.go:311] Waiting for caches to sync for ReplicaSet
	I0318 13:10:57.483511    2404 command_runner.go:130] ! I0318 13:10:01.317298       1 shared_informer.go:311] Waiting for caches to sync for resource quota
	I0318 13:10:57.483511    2404 command_runner.go:130] ! I0318 13:10:01.349149       1 shared_informer.go:318] Caches are synced for TTL after finished
	I0318 13:10:57.483550    2404 command_runner.go:130] ! I0318 13:10:01.369957       1 shared_informer.go:311] Waiting for caches to sync for garbage collector
	I0318 13:10:57.483550    2404 command_runner.go:130] ! I0318 13:10:01.371629       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-894400\" does not exist"
	I0318 13:10:57.483550    2404 command_runner.go:130] ! I0318 13:10:01.371840       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-894400-m02\" does not exist"
	I0318 13:10:57.483648    2404 command_runner.go:130] ! I0318 13:10:01.372556       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-894400-m02"
	I0318 13:10:57.483648    2404 command_runner.go:130] ! I0318 13:10:01.372879       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-894400-m03\" does not exist"
	I0318 13:10:57.483648    2404 command_runner.go:130] ! I0318 13:10:01.373004       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-894400-m02"
	I0318 13:10:57.483727    2404 command_runner.go:130] ! I0318 13:10:01.380690       1 shared_informer.go:318] Caches are synced for certificate-csrapproving
	I0318 13:10:57.483727    2404 command_runner.go:130] ! I0318 13:10:01.383858       1 shared_informer.go:318] Caches are synced for namespace
	I0318 13:10:57.483768    2404 command_runner.go:130] ! I0318 13:10:01.390400       1 shared_informer.go:318] Caches are synced for GC
	I0318 13:10:57.483768    2404 command_runner.go:130] ! I0318 13:10:01.391669       1 shared_informer.go:318] Caches are synced for cronjob
	I0318 13:10:57.483768    2404 command_runner.go:130] ! I0318 13:10:01.398208       1 shared_informer.go:318] Caches are synced for expand
	I0318 13:10:57.483768    2404 command_runner.go:130] ! I0318 13:10:01.403691       1 shared_informer.go:318] Caches are synced for ClusterRoleAggregator
	I0318 13:10:57.483829    2404 command_runner.go:130] ! I0318 13:10:01.406154       1 shared_informer.go:318] Caches are synced for endpoint
	I0318 13:10:57.483829    2404 command_runner.go:130] ! I0318 13:10:01.407387       1 shared_informer.go:318] Caches are synced for ReplicaSet
	I0318 13:10:57.483859    2404 command_runner.go:130] ! I0318 13:10:01.407463       1 shared_informer.go:318] Caches are synced for disruption
	I0318 13:10:57.483859    2404 command_runner.go:130] ! I0318 13:10:01.411470       1 shared_informer.go:318] Caches are synced for deployment
	I0318 13:10:57.483859    2404 command_runner.go:130] ! I0318 13:10:01.415591       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-serving
	I0318 13:10:57.483859    2404 command_runner.go:130] ! I0318 13:10:01.419985       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0318 13:10:57.483859    2404 command_runner.go:130] ! I0318 13:10:01.420028       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-client
	I0318 13:10:57.483859    2404 command_runner.go:130] ! I0318 13:10:01.422567       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-legacy-unknown
	I0318 13:10:57.483859    2404 command_runner.go:130] ! I0318 13:10:01.426386       1 shared_informer.go:318] Caches are synced for node
	I0318 13:10:57.483960    2404 command_runner.go:130] ! I0318 13:10:01.426502       1 range_allocator.go:174] "Sending events to api server"
	I0318 13:10:57.483981    2404 command_runner.go:130] ! I0318 13:10:01.426637       1 range_allocator.go:178] "Starting range CIDR allocator"
	I0318 13:10:57.483981    2404 command_runner.go:130] ! I0318 13:10:01.426705       1 shared_informer.go:311] Waiting for caches to sync for cidrallocator
	I0318 13:10:57.483981    2404 command_runner.go:130] ! I0318 13:10:01.426892       1 shared_informer.go:318] Caches are synced for cidrallocator
	I0318 13:10:57.483981    2404 command_runner.go:130] ! I0318 13:10:01.426546       1 shared_informer.go:318] Caches are synced for attach detach
	I0318 13:10:57.483981    2404 command_runner.go:130] ! I0318 13:10:01.429986       1 shared_informer.go:318] Caches are synced for service account
	I0318 13:10:57.484046    2404 command_runner.go:130] ! I0318 13:10:01.430014       1 shared_informer.go:318] Caches are synced for crt configmap
	I0318 13:10:57.484046    2404 command_runner.go:130] ! I0318 13:10:01.433506       1 shared_informer.go:318] Caches are synced for daemon sets
	I0318 13:10:57.484046    2404 command_runner.go:130] ! I0318 13:10:01.437710       1 shared_informer.go:318] Caches are synced for TTL
	I0318 13:10:57.484100    2404 command_runner.go:130] ! I0318 13:10:01.445429       1 shared_informer.go:318] Caches are synced for HPA
	I0318 13:10:57.484120    2404 command_runner.go:130] ! I0318 13:10:01.448863       1 shared_informer.go:318] Caches are synced for bootstrap_signer
	I0318 13:10:57.484120    2404 command_runner.go:130] ! I0318 13:10:01.451599       1 shared_informer.go:318] Caches are synced for ReplicationController
	I0318 13:10:57.484120    2404 command_runner.go:130] ! I0318 13:10:01.454157       1 shared_informer.go:318] Caches are synced for taint
	I0318 13:10:57.484120    2404 command_runner.go:130] ! I0318 13:10:01.454304       1 node_lifecycle_controller.go:1225] "Initializing eviction metric for zone" zone=""
	I0318 13:10:57.484120    2404 command_runner.go:130] ! I0318 13:10:01.454496       1 taint_manager.go:205] "Starting NoExecuteTaintManager"
	I0318 13:10:57.484120    2404 command_runner.go:130] ! I0318 13:10:01.454532       1 taint_manager.go:210] "Sending events to api server"
	I0318 13:10:57.484120    2404 command_runner.go:130] ! I0318 13:10:01.455374       1 event.go:307] "Event occurred" object="multinode-894400" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-894400 event: Registered Node multinode-894400 in Controller"
	I0318 13:10:57.484120    2404 command_runner.go:130] ! I0318 13:10:01.455390       1 event.go:307] "Event occurred" object="multinode-894400-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-894400-m02 event: Registered Node multinode-894400-m02 in Controller"
	I0318 13:10:57.484120    2404 command_runner.go:130] ! I0318 13:10:01.455400       1 event.go:307] "Event occurred" object="multinode-894400-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-894400-m03 event: Registered Node multinode-894400-m03 in Controller"
	I0318 13:10:57.484120    2404 command_runner.go:130] ! I0318 13:10:01.456700       1 shared_informer.go:318] Caches are synced for PV protection
	I0318 13:10:57.484120    2404 command_runner.go:130] ! I0318 13:10:01.456719       1 shared_informer.go:318] Caches are synced for PVC protection
	I0318 13:10:57.484120    2404 command_runner.go:130] ! I0318 13:10:01.457835       1 shared_informer.go:318] Caches are synced for stateful set
	I0318 13:10:57.484120    2404 command_runner.go:130] ! I0318 13:10:01.457861       1 shared_informer.go:318] Caches are synced for job
	I0318 13:10:57.484120    2404 command_runner.go:130] ! I0318 13:10:01.458132       1 shared_informer.go:318] Caches are synced for ephemeral
	I0318 13:10:57.484120    2404 command_runner.go:130] ! I0318 13:10:01.499926       1 shared_informer.go:318] Caches are synced for persistent volume
	I0318 13:10:57.484120    2404 command_runner.go:130] ! I0318 13:10:01.502022       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-894400"
	I0318 13:10:57.484120    2404 command_runner.go:130] ! I0318 13:10:01.502582       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-894400-m02"
	I0318 13:10:57.484120    2404 command_runner.go:130] ! I0318 13:10:01.502665       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-894400-m03"
	I0318 13:10:57.484120    2404 command_runner.go:130] ! I0318 13:10:01.505439       1 node_lifecycle_controller.go:1071] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I0318 13:10:57.484120    2404 command_runner.go:130] ! I0318 13:10:01.518153       1 shared_informer.go:318] Caches are synced for resource quota
	I0318 13:10:57.484120    2404 command_runner.go:130] ! I0318 13:10:01.524442       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="116.887006ms"
	I0318 13:10:57.484120    2404 command_runner.go:130] ! I0318 13:10:01.526447       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="62.302µs"
	I0318 13:10:57.484120    2404 command_runner.go:130] ! I0318 13:10:01.532190       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="124.57225ms"
	I0318 13:10:57.484120    2404 command_runner.go:130] ! I0318 13:10:01.532535       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="61.501µs"
	I0318 13:10:57.484120    2404 command_runner.go:130] ! I0318 13:10:01.536870       1 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring
	I0318 13:10:57.484120    2404 command_runner.go:130] ! I0318 13:10:01.559571       1 shared_informer.go:318] Caches are synced for endpoint_slice
	I0318 13:10:57.484120    2404 command_runner.go:130] ! I0318 13:10:01.576497       1 shared_informer.go:318] Caches are synced for resource quota
	I0318 13:10:57.484120    2404 command_runner.go:130] ! I0318 13:10:01.970420       1 shared_informer.go:318] Caches are synced for garbage collector
	I0318 13:10:57.484120    2404 command_runner.go:130] ! I0318 13:10:02.008120       1 shared_informer.go:318] Caches are synced for garbage collector
	I0318 13:10:57.484120    2404 command_runner.go:130] ! I0318 13:10:02.008146       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0318 13:10:57.484120    2404 command_runner.go:130] ! I0318 13:10:23.798396       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-894400-m02"
	I0318 13:10:57.484120    2404 command_runner.go:130] ! I0318 13:10:26.538088       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68-456tm" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod kube-system/coredns-5dd5756b68-456tm"
	I0318 13:10:57.484656    2404 command_runner.go:130] ! I0318 13:10:26.538124       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6-c2997" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-5b5d89c9d6-c2997"
	I0318 13:10:57.484722    2404 command_runner.go:130] ! I0318 13:10:26.538134       1 event.go:307] "Event occurred" object="kube-system/storage-provisioner" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod kube-system/storage-provisioner"
	I0318 13:10:57.484722    2404 command_runner.go:130] ! I0318 13:10:41.556645       1 event.go:307] "Event occurred" object="multinode-894400-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-894400-m02 status is now: NodeNotReady"
	I0318 13:10:57.484722    2404 command_runner.go:130] ! I0318 13:10:41.569274       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6-8btgf" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0318 13:10:57.484797    2404 command_runner.go:130] ! I0318 13:10:41.592766       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="22.447202ms"
	I0318 13:10:57.484797    2404 command_runner.go:130] ! I0318 13:10:41.593427       1 event.go:307] "Event occurred" object="kube-system/kindnet-k5lpg" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0318 13:10:57.484826    2404 command_runner.go:130] ! I0318 13:10:41.595199       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="39.101µs"
	I0318 13:10:57.484862    2404 command_runner.go:130] ! I0318 13:10:41.617007       1 event.go:307] "Event occurred" object="kube-system/kube-proxy-8bdmn" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0318 13:10:57.484862    2404 command_runner.go:130] ! I0318 13:10:54.102255       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="18.438427ms"
	I0318 13:10:57.484862    2404 command_runner.go:130] ! I0318 13:10:54.102713       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="266.302µs"
	I0318 13:10:57.484930    2404 command_runner.go:130] ! I0318 13:10:54.115993       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="210.701µs"
	I0318 13:10:57.484930    2404 command_runner.go:130] ! I0318 13:10:55.131550       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="19.807636ms"
	I0318 13:10:57.484930    2404 command_runner.go:130] ! I0318 13:10:55.131763       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="44.301µs"
	I0318 13:10:57.498419    2404 logs.go:123] Gathering logs for kubelet ...
	I0318 13:10:57.498419    2404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:10:57.517413    2404 command_runner.go:130] > Mar 18 13:09:39 multinode-894400 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0318 13:10:57.517413    2404 command_runner.go:130] > Mar 18 13:09:39 multinode-894400 kubelet[1399]: I0318 13:09:39.912330    1399 server.go:467] "Kubelet version" kubeletVersion="v1.28.4"
	I0318 13:10:57.517413    2404 command_runner.go:130] > Mar 18 13:09:39 multinode-894400 kubelet[1399]: I0318 13:09:39.913472    1399 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 13:10:57.517413    2404 command_runner.go:130] > Mar 18 13:09:39 multinode-894400 kubelet[1399]: I0318 13:09:39.914280    1399 server.go:895] "Client rotation is on, will bootstrap in background"
	I0318 13:10:57.517413    2404 command_runner.go:130] > Mar 18 13:09:39 multinode-894400 kubelet[1399]: E0318 13:09:39.914469    1399 run.go:74] "command failed" err="failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory"
	I0318 13:10:57.517413    2404 command_runner.go:130] > Mar 18 13:09:39 multinode-894400 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	I0318 13:10:57.517413    2404 command_runner.go:130] > Mar 18 13:09:39 multinode-894400 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	I0318 13:10:57.517413    2404 command_runner.go:130] > Mar 18 13:09:40 multinode-894400 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
	I0318 13:10:57.517413    2404 command_runner.go:130] > Mar 18 13:09:40 multinode-894400 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I0318 13:10:57.517413    2404 command_runner.go:130] > Mar 18 13:09:40 multinode-894400 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0318 13:10:57.517413    2404 command_runner.go:130] > Mar 18 13:09:40 multinode-894400 kubelet[1455]: I0318 13:09:40.661100    1455 server.go:467] "Kubelet version" kubeletVersion="v1.28.4"
	I0318 13:10:57.517413    2404 command_runner.go:130] > Mar 18 13:09:40 multinode-894400 kubelet[1455]: I0318 13:09:40.661586    1455 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 13:10:57.517413    2404 command_runner.go:130] > Mar 18 13:09:40 multinode-894400 kubelet[1455]: I0318 13:09:40.662255    1455 server.go:895] "Client rotation is on, will bootstrap in background"
	I0318 13:10:57.517413    2404 command_runner.go:130] > Mar 18 13:09:40 multinode-894400 kubelet[1455]: E0318 13:09:40.662383    1455 run.go:74] "command failed" err="failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory"
	I0318 13:10:57.517413    2404 command_runner.go:130] > Mar 18 13:09:40 multinode-894400 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	I0318 13:10:57.517413    2404 command_runner.go:130] > Mar 18 13:09:40 multinode-894400 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	I0318 13:10:57.517413    2404 command_runner.go:130] > Mar 18 13:09:40 multinode-894400 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I0318 13:10:57.517413    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0318 13:10:57.517413    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: I0318 13:09:42.774439    1532 server.go:467] "Kubelet version" kubeletVersion="v1.28.4"
	I0318 13:10:57.517413    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: I0318 13:09:42.775083    1532 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 13:10:57.517413    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: I0318 13:09:42.775946    1532 server.go:895] "Client rotation is on, will bootstrap in background"
	I0318 13:10:57.517413    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: I0318 13:09:42.785429    1532 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	I0318 13:10:57.517413    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: I0318 13:09:42.801370    1532 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0318 13:10:57.517413    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: I0318 13:09:42.849790    1532 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
	I0318 13:10:57.517413    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: I0318 13:09:42.851652    1532 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
	I0318 13:10:57.517413    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: I0318 13:09:42.851916    1532 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
	I0318 13:10:57.517413    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: I0318 13:09:42.851957    1532 topology_manager.go:138] "Creating topology manager with none policy"
	I0318 13:10:57.517413    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: I0318 13:09:42.851967    1532 container_manager_linux.go:301] "Creating device plugin manager"
	I0318 13:10:57.517413    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: I0318 13:09:42.853347    1532 state_mem.go:36] "Initialized new in-memory state store"
	I0318 13:10:57.517413    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: I0318 13:09:42.855331    1532 kubelet.go:393] "Attempting to sync node with API server"
	I0318 13:10:57.517413    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: I0318 13:09:42.855456    1532 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests"
	I0318 13:10:57.517413    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: I0318 13:09:42.856520    1532 kubelet.go:309] "Adding apiserver pod source"
	I0318 13:10:57.517413    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: I0318 13:09:42.856554    1532 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
	I0318 13:10:57.518426    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: W0318 13:09:42.859153    1532 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)multinode-894400&limit=500&resourceVersion=0": dial tcp 172.30.130.156:8443: connect: connection refused
	I0318 13:10:57.518426    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: E0318 13:09:42.859647    1532 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)multinode-894400&limit=500&resourceVersion=0": dial tcp 172.30.130.156:8443: connect: connection refused
	I0318 13:10:57.518426    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: W0318 13:09:42.860993    1532 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.30.130.156:8443: connect: connection refused
	I0318 13:10:57.518426    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: E0318 13:09:42.861168    1532 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.30.130.156:8443: connect: connection refused
	I0318 13:10:57.518426    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: I0318 13:09:42.872782    1532 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="docker" version="25.0.4" apiVersion="v1"
	I0318 13:10:57.518426    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: W0318 13:09:42.875640    1532 probe.go:268] Flexvolume plugin directory at /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
	I0318 13:10:57.518426    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: I0318 13:09:42.876823    1532 server.go:1232] "Started kubelet"
	I0318 13:10:57.518426    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: I0318 13:09:42.878282    1532 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
	I0318 13:10:57.518426    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: I0318 13:09:42.879215    1532 server.go:462] "Adding debug handlers to kubelet server"
	I0318 13:10:57.518426    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: I0318 13:09:42.882881    1532 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10
	I0318 13:10:57.518426    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: I0318 13:09:42.883660    1532 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
	I0318 13:10:57.518426    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: I0318 13:09:42.878365    1532 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
	I0318 13:10:57.518426    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: E0318 13:09:42.886734    1532 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"multinode-894400.17bddddee5b23bca", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"multinode-894400", UID:"multinode-894400", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"multinode-894400"}, FirstTimestamp:time.Date(2024, time.March, 18, 13, 9, 42, 876797898, time.Local), LastTimestamp:time.Date(2024, time.March, 18, 13, 9, 42, 876797898, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"multinode-894400"}': 'Post "https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events": dial tcp 172.30.130.156:8443: connect: connection refused'(may retry after sleeping)
	I0318 13:10:57.518426    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: I0318 13:09:42.886969    1532 volume_manager.go:291] "Starting Kubelet Volume Manager"
	I0318 13:10:57.518426    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: I0318 13:09:42.887086    1532 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
	I0318 13:10:57.518426    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: W0318 13:09:42.907405    1532 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.30.130.156:8443: connect: connection refused
	I0318 13:10:57.518426    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: E0318 13:09:42.907883    1532 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.30.130.156:8443: connect: connection refused
	I0318 13:10:57.518426    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: E0318 13:09:42.910785    1532 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-894400?timeout=10s\": dial tcp 172.30.130.156:8443: connect: connection refused" interval="200ms"
	I0318 13:10:57.518426    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: I0318 13:09:42.959085    1532 reconciler_new.go:29] "Reconciler: start to sync state"
	I0318 13:10:57.518426    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: I0318 13:09:42.981490    1532 cpu_manager.go:214] "Starting CPU manager" policy="none"
	I0318 13:10:57.518426    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: I0318 13:09:42.981531    1532 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
	I0318 13:10:57.518426    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: I0318 13:09:42.981561    1532 state_mem.go:36] "Initialized new in-memory state store"
	I0318 13:10:57.518426    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: I0318 13:09:42.982644    1532 state_mem.go:88] "Updated default CPUSet" cpuSet=""
	I0318 13:10:57.518426    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: I0318 13:09:42.982700    1532 state_mem.go:96] "Updated CPUSet assignments" assignments={}
	I0318 13:10:57.518426    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: I0318 13:09:42.982728    1532 policy_none.go:49] "None policy: Start"
	I0318 13:10:57.518426    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: I0318 13:09:42.989705    1532 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
	I0318 13:10:57.518426    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: I0318 13:09:43.002857    1532 memory_manager.go:169] "Starting memorymanager" policy="None"
	I0318 13:10:57.518426    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: I0318 13:09:43.003620    1532 state_mem.go:35] "Initializing new in-memory state store"
	I0318 13:10:57.518426    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: I0318 13:09:43.004623    1532 state_mem.go:75] "Updated machine memory state"
	I0318 13:10:57.518426    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: I0318 13:09:43.006120    1532 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
	I0318 13:10:57.518426    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: I0318 13:09:43.007397    1532 status_manager.go:217] "Starting to sync pod status with apiserver"
	I0318 13:10:57.518426    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: I0318 13:09:43.008604    1532 kubelet.go:2303] "Starting kubelet main sync loop"
	I0318 13:10:57.518426    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: E0318 13:09:43.008971    1532 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
	I0318 13:10:57.518426    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: I0318 13:09:43.016115    1532 kubelet_node_status.go:70] "Attempting to register node" node="multinode-894400"
	I0318 13:10:57.518426    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: E0318 13:09:43.018685    1532 iptables.go:575] "Could not set up iptables canary" err=<
	I0318 13:10:57.518426    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	I0318 13:10:57.518426    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	I0318 13:10:57.518426    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]:         Perhaps ip6tables or your kernel needs to be upgraded.
	I0318 13:10:57.518426    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	I0318 13:10:57.518426    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: I0318 13:09:43.021241    1532 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
	I0318 13:10:57.519422    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: W0318 13:09:43.022840    1532 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.30.130.156:8443: connect: connection refused
	I0318 13:10:57.519422    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: E0318 13:09:43.022916    1532 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.30.130.156:8443: connect: connection refused
	I0318 13:10:57.519422    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: E0318 13:09:43.022979    1532 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.30.130.156:8443: connect: connection refused" node="multinode-894400"
	I0318 13:10:57.519422    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: I0318 13:09:43.023116    1532 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
	I0318 13:10:57.519422    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: E0318 13:09:43.041923    1532 eviction_manager.go:258] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"multinode-894400\" not found"
	I0318 13:10:57.519422    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: E0318 13:09:43.112352    1532 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-894400?timeout=10s\": dial tcp 172.30.130.156:8443: connect: connection refused" interval="400ms"
	I0318 13:10:57.519422    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: I0318 13:09:43.113553    1532 topology_manager.go:215] "Topology Admit Handler" podUID="1c745e9b917877b1ff3c90ed02e9a79a" podNamespace="kube-system" podName="kube-scheduler-multinode-894400"
	I0318 13:10:57.519422    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: I0318 13:09:43.126661    1532 topology_manager.go:215] "Topology Admit Handler" podUID="6096c2227c4230453f65f86ebdcd0d95" podNamespace="kube-system" podName="kube-apiserver-multinode-894400"
	I0318 13:10:57.519422    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: I0318 13:09:43.137838    1532 topology_manager.go:215] "Topology Admit Handler" podUID="d340aced56ba169ecac1e3ac58ad57fe" podNamespace="kube-system" podName="kube-controller-manager-multinode-894400"
	I0318 13:10:57.519422    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: I0318 13:09:43.154701    1532 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5485f509825d9272a84959cbcfbb4f0187be886867ba7bac76fa00a35e34bdd1"
	I0318 13:10:57.519422    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: I0318 13:09:43.154826    1532 topology_manager.go:215] "Topology Admit Handler" podUID="743a549b698f93b8586a236f83c90556" podNamespace="kube-system" podName="etcd-multinode-894400"
	I0318 13:10:57.519422    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: I0318 13:09:43.171660    1532 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d001e299e996b74f090c73f4e98605ef0d323a96826d23424e724ca8e7fe466a"
	I0318 13:10:57.519422    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: I0318 13:09:43.171681    1532 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="60e9cd749c8f67d0bc24596b26b654cf85a82055f89e14c4a14a4e9342f5fc9f"
	I0318 13:10:57.519422    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: I0318 13:09:43.171704    1532 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="acffce2e73842c3e46177a77ddd5a8d308b51daf062cac439cc487cc863c4226"
	I0318 13:10:57.519422    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: I0318 13:09:43.171714    1532 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="265b39e386cfa82eef9715aba314fbf8a9292776816cf86ed4099004698cb320"
	I0318 13:10:57.519422    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: I0318 13:09:43.171723    1532 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="220884cbf1f5b852987c5a28277a4914502f0623413c284054afa92791494c50"
	I0318 13:10:57.519422    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: I0318 13:09:43.171731    1532 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a47b1fb60692cee0c4ed89ecc511fa046c0873051f7daf026f1c5c6a3dfd7352"
	I0318 13:10:57.519422    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: I0318 13:09:43.172283    1532 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="82710777e700c4f2e71da911834959efc480f8ba2a526049f0f6c238947c5146"
	I0318 13:10:57.519422    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: I0318 13:09:43.186382    1532 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a23c1189be7c39f22c8a59ba41b198519894c9783ecad8dcda79fb8a76f05254"
	I0318 13:10:57.519422    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: I0318 13:09:43.231617    1532 kubelet_node_status.go:70] "Attempting to register node" node="multinode-894400"
	I0318 13:10:57.519422    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: E0318 13:09:43.233479    1532 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.30.130.156:8443: connect: connection refused" node="multinode-894400"
	I0318 13:10:57.519422    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: I0318 13:09:43.267903    1532 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1c745e9b917877b1ff3c90ed02e9a79a-kubeconfig\") pod \"kube-scheduler-multinode-894400\" (UID: \"1c745e9b917877b1ff3c90ed02e9a79a\") " pod="kube-system/kube-scheduler-multinode-894400"
	I0318 13:10:57.519422    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: I0318 13:09:43.268106    1532 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6096c2227c4230453f65f86ebdcd0d95-ca-certs\") pod \"kube-apiserver-multinode-894400\" (UID: \"6096c2227c4230453f65f86ebdcd0d95\") " pod="kube-system/kube-apiserver-multinode-894400"
	I0318 13:10:57.519422    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: I0318 13:09:43.268214    1532 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d340aced56ba169ecac1e3ac58ad57fe-ca-certs\") pod \"kube-controller-manager-multinode-894400\" (UID: \"d340aced56ba169ecac1e3ac58ad57fe\") " pod="kube-system/kube-controller-manager-multinode-894400"
	I0318 13:10:57.519422    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: I0318 13:09:43.268242    1532 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d340aced56ba169ecac1e3ac58ad57fe-kubeconfig\") pod \"kube-controller-manager-multinode-894400\" (UID: \"d340aced56ba169ecac1e3ac58ad57fe\") " pod="kube-system/kube-controller-manager-multinode-894400"
	I0318 13:10:57.519422    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: I0318 13:09:43.268269    1532 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d340aced56ba169ecac1e3ac58ad57fe-usr-share-ca-certificates\") pod \"kube-controller-manager-multinode-894400\" (UID: \"d340aced56ba169ecac1e3ac58ad57fe\") " pod="kube-system/kube-controller-manager-multinode-894400"
	I0318 13:10:57.519422    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: I0318 13:09:43.268295    1532 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/743a549b698f93b8586a236f83c90556-etcd-certs\") pod \"etcd-multinode-894400\" (UID: \"743a549b698f93b8586a236f83c90556\") " pod="kube-system/etcd-multinode-894400"
	I0318 13:10:57.519422    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: I0318 13:09:43.268330    1532 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/743a549b698f93b8586a236f83c90556-etcd-data\") pod \"etcd-multinode-894400\" (UID: \"743a549b698f93b8586a236f83c90556\") " pod="kube-system/etcd-multinode-894400"
	I0318 13:10:57.519422    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: I0318 13:09:43.268361    1532 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6096c2227c4230453f65f86ebdcd0d95-k8s-certs\") pod \"kube-apiserver-multinode-894400\" (UID: \"6096c2227c4230453f65f86ebdcd0d95\") " pod="kube-system/kube-apiserver-multinode-894400"
	I0318 13:10:57.519422    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: I0318 13:09:43.268423    1532 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6096c2227c4230453f65f86ebdcd0d95-usr-share-ca-certificates\") pod \"kube-apiserver-multinode-894400\" (UID: \"6096c2227c4230453f65f86ebdcd0d95\") " pod="kube-system/kube-apiserver-multinode-894400"
	I0318 13:10:57.519422    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: I0318 13:09:43.268445    1532 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d340aced56ba169ecac1e3ac58ad57fe-flexvolume-dir\") pod \"kube-controller-manager-multinode-894400\" (UID: \"d340aced56ba169ecac1e3ac58ad57fe\") " pod="kube-system/kube-controller-manager-multinode-894400"
	I0318 13:10:57.519422    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: I0318 13:09:43.268537    1532 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d340aced56ba169ecac1e3ac58ad57fe-k8s-certs\") pod \"kube-controller-manager-multinode-894400\" (UID: \"d340aced56ba169ecac1e3ac58ad57fe\") " pod="kube-system/kube-controller-manager-multinode-894400"
	I0318 13:10:57.519422    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: E0318 13:09:43.513563    1532 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-894400?timeout=10s\": dial tcp 172.30.130.156:8443: connect: connection refused" interval="800ms"
	I0318 13:10:57.520418    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: I0318 13:09:43.656950    1532 kubelet_node_status.go:70] "Attempting to register node" node="multinode-894400"
	I0318 13:10:57.520418    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: E0318 13:09:43.658595    1532 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.30.130.156:8443: connect: connection refused" node="multinode-894400"
	I0318 13:10:57.520418    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: W0318 13:09:43.917173    1532 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.30.130.156:8443: connect: connection refused
	I0318 13:10:57.520418    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: E0318 13:09:43.917511    1532 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.30.130.156:8443: connect: connection refused
	I0318 13:10:57.520418    2404 command_runner.go:130] > Mar 18 13:09:44 multinode-894400 kubelet[1532]: W0318 13:09:44.022640    1532 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.30.130.156:8443: connect: connection refused
	I0318 13:10:57.520418    2404 command_runner.go:130] > Mar 18 13:09:44 multinode-894400 kubelet[1532]: E0318 13:09:44.022973    1532 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.30.130.156:8443: connect: connection refused
	I0318 13:10:57.520418    2404 command_runner.go:130] > Mar 18 13:09:44 multinode-894400 kubelet[1532]: W0318 13:09:44.114653    1532 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)multinode-894400&limit=500&resourceVersion=0": dial tcp 172.30.130.156:8443: connect: connection refused
	I0318 13:10:57.520418    2404 command_runner.go:130] > Mar 18 13:09:44 multinode-894400 kubelet[1532]: E0318 13:09:44.114784    1532 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)multinode-894400&limit=500&resourceVersion=0": dial tcp 172.30.130.156:8443: connect: connection refused
	I0318 13:10:57.520418    2404 command_runner.go:130] > Mar 18 13:09:44 multinode-894400 kubelet[1532]: I0318 13:09:44.229821    1532 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="354f3c44a34fcbd9ca7826066f2b4c40b3a6551aa632389815c5d24e85e0472b"
	I0318 13:10:57.520418    2404 command_runner.go:130] > Mar 18 13:09:44 multinode-894400 kubelet[1532]: E0318 13:09:44.315351    1532 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-894400?timeout=10s\": dial tcp 172.30.130.156:8443: connect: connection refused" interval="1.6s"
	I0318 13:10:57.520418    2404 command_runner.go:130] > Mar 18 13:09:44 multinode-894400 kubelet[1532]: W0318 13:09:44.368370    1532 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.30.130.156:8443: connect: connection refused
	I0318 13:10:57.520418    2404 command_runner.go:130] > Mar 18 13:09:44 multinode-894400 kubelet[1532]: E0318 13:09:44.368575    1532 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.30.130.156:8443: connect: connection refused
	I0318 13:10:57.520418    2404 command_runner.go:130] > Mar 18 13:09:44 multinode-894400 kubelet[1532]: I0318 13:09:44.495686    1532 kubelet_node_status.go:70] "Attempting to register node" node="multinode-894400"
	I0318 13:10:57.520418    2404 command_runner.go:130] > Mar 18 13:09:44 multinode-894400 kubelet[1532]: E0318 13:09:44.496847    1532 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.30.130.156:8443: connect: connection refused" node="multinode-894400"
	I0318 13:10:57.520418    2404 command_runner.go:130] > Mar 18 13:09:46 multinode-894400 kubelet[1532]: I0318 13:09:46.112867    1532 kubelet_node_status.go:70] "Attempting to register node" node="multinode-894400"
	I0318 13:10:57.520418    2404 command_runner.go:130] > Mar 18 13:09:48 multinode-894400 kubelet[1532]: I0318 13:09:48.454296    1532 kubelet_node_status.go:108] "Node was previously registered" node="multinode-894400"
	I0318 13:10:57.520418    2404 command_runner.go:130] > Mar 18 13:09:48 multinode-894400 kubelet[1532]: I0318 13:09:48.454504    1532 kubelet_node_status.go:73] "Successfully registered node" node="multinode-894400"
	I0318 13:10:57.520418    2404 command_runner.go:130] > Mar 18 13:09:48 multinode-894400 kubelet[1532]: I0318 13:09:48.466215    1532 kuberuntime_manager.go:1528] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	I0318 13:10:57.520418    2404 command_runner.go:130] > Mar 18 13:09:48 multinode-894400 kubelet[1532]: I0318 13:09:48.467399    1532 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	I0318 13:10:57.520418    2404 command_runner.go:130] > Mar 18 13:09:48 multinode-894400 kubelet[1532]: I0318 13:09:48.481710    1532 setters.go:552] "Node became not ready" node="multinode-894400" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-03-18T13:09:48Z","lastTransitionTime":"2024-03-18T13:09:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"}
	I0318 13:10:57.520418    2404 command_runner.go:130] > Mar 18 13:09:48 multinode-894400 kubelet[1532]: I0318 13:09:48.865400    1532 apiserver.go:52] "Watching apiserver"
	I0318 13:10:57.520418    2404 command_runner.go:130] > Mar 18 13:09:48 multinode-894400 kubelet[1532]: I0318 13:09:48.872433    1532 topology_manager.go:215] "Topology Admit Handler" podUID="0afe25f8-cbd6-412b-8698-7b547d1d49ca" podNamespace="kube-system" podName="kube-proxy-mc5tv"
	I0318 13:10:57.520418    2404 command_runner.go:130] > Mar 18 13:09:48 multinode-894400 kubelet[1532]: I0318 13:09:48.872584    1532 topology_manager.go:215] "Topology Admit Handler" podUID="0161d239-2d85-4246-b2fa-6c7374f2ecd6" podNamespace="kube-system" podName="kindnet-hhsxh"
	I0318 13:10:57.520418    2404 command_runner.go:130] > Mar 18 13:09:48 multinode-894400 kubelet[1532]: I0318 13:09:48.872794    1532 topology_manager.go:215] "Topology Admit Handler" podUID="1a018c55-846b-4dc2-992c-dc8fd82a6c67" podNamespace="kube-system" podName="coredns-5dd5756b68-456tm"
	I0318 13:10:57.520418    2404 command_runner.go:130] > Mar 18 13:09:48 multinode-894400 kubelet[1532]: I0318 13:09:48.872862    1532 topology_manager.go:215] "Topology Admit Handler" podUID="219bafbc-d807-44cf-9927-e4957f36ad70" podNamespace="kube-system" podName="storage-provisioner"
	I0318 13:10:57.520418    2404 command_runner.go:130] > Mar 18 13:09:48 multinode-894400 kubelet[1532]: I0318 13:09:48.872944    1532 topology_manager.go:215] "Topology Admit Handler" podUID="171cbfa4-4415-4169-b25d-ff5905fd513f" podNamespace="default" podName="busybox-5b5d89c9d6-c2997"
	I0318 13:10:57.520418    2404 command_runner.go:130] > Mar 18 13:09:48 multinode-894400 kubelet[1532]: E0318 13:09:48.873248    1532 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-c2997" podUID="171cbfa4-4415-4169-b25d-ff5905fd513f"
	I0318 13:10:57.520418    2404 command_runner.go:130] > Mar 18 13:09:48 multinode-894400 kubelet[1532]: I0318 13:09:48.873593    1532 kubelet.go:1872] "Trying to delete pod" pod="kube-system/kube-apiserver-multinode-894400" podUID="62aca0ea-36b0-4841-9616-61448f45e04a"
	I0318 13:10:57.520418    2404 command_runner.go:130] > Mar 18 13:09:48 multinode-894400 kubelet[1532]: I0318 13:09:48.873861    1532 kubelet.go:1872] "Trying to delete pod" pod="kube-system/etcd-multinode-894400" podUID="672a85d9-7526-4870-a33a-eac509ef3c3f"
	I0318 13:10:57.520418    2404 command_runner.go:130] > Mar 18 13:09:48 multinode-894400 kubelet[1532]: E0318 13:09:48.876751    1532 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-456tm" podUID="1a018c55-846b-4dc2-992c-dc8fd82a6c67"
	I0318 13:10:57.521430    2404 command_runner.go:130] > Mar 18 13:09:48 multinode-894400 kubelet[1532]: I0318 13:09:48.889248    1532 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
	I0318 13:10:57.521430    2404 command_runner.go:130] > Mar 18 13:09:48 multinode-894400 kubelet[1532]: I0318 13:09:48.964782    1532 kubelet.go:1877] "Deleted mirror pod because it is outdated" pod="kube-system/kube-apiserver-multinode-894400"
	I0318 13:10:57.521430    2404 command_runner.go:130] > Mar 18 13:09:48 multinode-894400 kubelet[1532]: I0318 13:09:48.965861    1532 kubelet.go:1877] "Deleted mirror pod because it is outdated" pod="kube-system/etcd-multinode-894400"
	I0318 13:10:57.521430    2404 command_runner.go:130] > Mar 18 13:09:48 multinode-894400 kubelet[1532]: I0318 13:09:48.966709    1532 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0afe25f8-cbd6-412b-8698-7b547d1d49ca-lib-modules\") pod \"kube-proxy-mc5tv\" (UID: \"0afe25f8-cbd6-412b-8698-7b547d1d49ca\") " pod="kube-system/kube-proxy-mc5tv"
	I0318 13:10:57.521430    2404 command_runner.go:130] > Mar 18 13:09:48 multinode-894400 kubelet[1532]: I0318 13:09:48.966761    1532 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/219bafbc-d807-44cf-9927-e4957f36ad70-tmp\") pod \"storage-provisioner\" (UID: \"219bafbc-d807-44cf-9927-e4957f36ad70\") " pod="kube-system/storage-provisioner"
	I0318 13:10:57.521430    2404 command_runner.go:130] > Mar 18 13:09:48 multinode-894400 kubelet[1532]: I0318 13:09:48.966802    1532 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/0161d239-2d85-4246-b2fa-6c7374f2ecd6-cni-cfg\") pod \"kindnet-hhsxh\" (UID: \"0161d239-2d85-4246-b2fa-6c7374f2ecd6\") " pod="kube-system/kindnet-hhsxh"
	I0318 13:10:57.521430    2404 command_runner.go:130] > Mar 18 13:09:48 multinode-894400 kubelet[1532]: I0318 13:09:48.966847    1532 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0afe25f8-cbd6-412b-8698-7b547d1d49ca-xtables-lock\") pod \"kube-proxy-mc5tv\" (UID: \"0afe25f8-cbd6-412b-8698-7b547d1d49ca\") " pod="kube-system/kube-proxy-mc5tv"
	I0318 13:10:57.521430    2404 command_runner.go:130] > Mar 18 13:09:48 multinode-894400 kubelet[1532]: I0318 13:09:48.966908    1532 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0161d239-2d85-4246-b2fa-6c7374f2ecd6-xtables-lock\") pod \"kindnet-hhsxh\" (UID: \"0161d239-2d85-4246-b2fa-6c7374f2ecd6\") " pod="kube-system/kindnet-hhsxh"
	I0318 13:10:57.521430    2404 command_runner.go:130] > Mar 18 13:09:48 multinode-894400 kubelet[1532]: I0318 13:09:48.966943    1532 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0161d239-2d85-4246-b2fa-6c7374f2ecd6-lib-modules\") pod \"kindnet-hhsxh\" (UID: \"0161d239-2d85-4246-b2fa-6c7374f2ecd6\") " pod="kube-system/kindnet-hhsxh"
	I0318 13:10:57.521430    2404 command_runner.go:130] > Mar 18 13:09:48 multinode-894400 kubelet[1532]: E0318 13:09:48.968339    1532 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0318 13:10:57.521430    2404 command_runner.go:130] > Mar 18 13:09:48 multinode-894400 kubelet[1532]: E0318 13:09:48.968477    1532 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1a018c55-846b-4dc2-992c-dc8fd82a6c67-config-volume podName:1a018c55-846b-4dc2-992c-dc8fd82a6c67 nodeName:}" failed. No retries permitted until 2024-03-18 13:09:49.468437755 +0000 UTC m=+6.779274091 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/1a018c55-846b-4dc2-992c-dc8fd82a6c67-config-volume") pod "coredns-5dd5756b68-456tm" (UID: "1a018c55-846b-4dc2-992c-dc8fd82a6c67") : object "kube-system"/"coredns" not registered
	I0318 13:10:57.521430    2404 command_runner.go:130] > Mar 18 13:09:49 multinode-894400 kubelet[1532]: E0318 13:09:49.000742    1532 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0318 13:10:57.521430    2404 command_runner.go:130] > Mar 18 13:09:49 multinode-894400 kubelet[1532]: E0318 13:09:49.000961    1532 projected.go:198] Error preparing data for projected volume kube-api-access-jwqqg for pod default/busybox-5b5d89c9d6-c2997: object "default"/"kube-root-ca.crt" not registered
	I0318 13:10:57.521430    2404 command_runner.go:130] > Mar 18 13:09:49 multinode-894400 kubelet[1532]: E0318 13:09:49.001575    1532 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/171cbfa4-4415-4169-b25d-ff5905fd513f-kube-api-access-jwqqg podName:171cbfa4-4415-4169-b25d-ff5905fd513f nodeName:}" failed. No retries permitted until 2024-03-18 13:09:49.501554367 +0000 UTC m=+6.812390603 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-jwqqg" (UniqueName: "kubernetes.io/projected/171cbfa4-4415-4169-b25d-ff5905fd513f-kube-api-access-jwqqg") pod "busybox-5b5d89c9d6-c2997" (UID: "171cbfa4-4415-4169-b25d-ff5905fd513f") : object "default"/"kube-root-ca.crt" not registered
	I0318 13:10:57.521430    2404 command_runner.go:130] > Mar 18 13:09:49 multinode-894400 kubelet[1532]: I0318 13:09:49.048369    1532 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="c396fd459c503d2e9464c73cc841d3d8" path="/var/lib/kubelet/pods/c396fd459c503d2e9464c73cc841d3d8/volumes"
	I0318 13:10:57.521430    2404 command_runner.go:130] > Mar 18 13:09:49 multinode-894400 kubelet[1532]: I0318 13:09:49.051334    1532 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="decc1d942b4d81359bb79c0349ffe9bb" path="/var/lib/kubelet/pods/decc1d942b4d81359bb79c0349ffe9bb/volumes"
	I0318 13:10:57.521430    2404 command_runner.go:130] > Mar 18 13:09:49 multinode-894400 kubelet[1532]: I0318 13:09:49.248524    1532 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-multinode-894400" podStartSLOduration=0.2483832 podCreationTimestamp="2024-03-18 13:09:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-03-18 13:09:49.21292898 +0000 UTC m=+6.523765316" watchObservedRunningTime="2024-03-18 13:09:49.2483832 +0000 UTC m=+6.559219436"
	I0318 13:10:57.521430    2404 command_runner.go:130] > Mar 18 13:09:49 multinode-894400 kubelet[1532]: I0318 13:09:49.285710    1532 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/etcd-multinode-894400" podStartSLOduration=0.285684326 podCreationTimestamp="2024-03-18 13:09:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-03-18 13:09:49.252285313 +0000 UTC m=+6.563121649" watchObservedRunningTime="2024-03-18 13:09:49.285684326 +0000 UTC m=+6.596520662"
	I0318 13:10:57.521430    2404 command_runner.go:130] > Mar 18 13:09:49 multinode-894400 kubelet[1532]: E0318 13:09:49.471617    1532 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0318 13:10:57.521430    2404 command_runner.go:130] > Mar 18 13:09:49 multinode-894400 kubelet[1532]: E0318 13:09:49.472236    1532 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1a018c55-846b-4dc2-992c-dc8fd82a6c67-config-volume podName:1a018c55-846b-4dc2-992c-dc8fd82a6c67 nodeName:}" failed. No retries permitted until 2024-03-18 13:09:50.471713653 +0000 UTC m=+7.782549889 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/1a018c55-846b-4dc2-992c-dc8fd82a6c67-config-volume") pod "coredns-5dd5756b68-456tm" (UID: "1a018c55-846b-4dc2-992c-dc8fd82a6c67") : object "kube-system"/"coredns" not registered
	I0318 13:10:57.521430    2404 command_runner.go:130] > Mar 18 13:09:49 multinode-894400 kubelet[1532]: E0318 13:09:49.573240    1532 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0318 13:10:57.521430    2404 command_runner.go:130] > Mar 18 13:09:49 multinode-894400 kubelet[1532]: E0318 13:09:49.573347    1532 projected.go:198] Error preparing data for projected volume kube-api-access-jwqqg for pod default/busybox-5b5d89c9d6-c2997: object "default"/"kube-root-ca.crt" not registered
	I0318 13:10:57.521430    2404 command_runner.go:130] > Mar 18 13:09:49 multinode-894400 kubelet[1532]: E0318 13:09:49.573459    1532 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/171cbfa4-4415-4169-b25d-ff5905fd513f-kube-api-access-jwqqg podName:171cbfa4-4415-4169-b25d-ff5905fd513f nodeName:}" failed. No retries permitted until 2024-03-18 13:09:50.573441997 +0000 UTC m=+7.884278233 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-jwqqg" (UniqueName: "kubernetes.io/projected/171cbfa4-4415-4169-b25d-ff5905fd513f-kube-api-access-jwqqg") pod "busybox-5b5d89c9d6-c2997" (UID: "171cbfa4-4415-4169-b25d-ff5905fd513f") : object "default"/"kube-root-ca.crt" not registered
	I0318 13:10:57.522445    2404 command_runner.go:130] > Mar 18 13:09:49 multinode-894400 kubelet[1532]: I0318 13:09:49.813611    1532 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="41035eff3b7db97e56ba25be141bf644acea4ddcb9d92ba3b421e6fa855a09af"
	I0318 13:10:57.522445    2404 command_runner.go:130] > Mar 18 13:09:50 multinode-894400 kubelet[1532]: I0318 13:09:50.142572    1532 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="86d74dec812cfd4842de7473d45ff320bfb2283d09d2d05eee29e45cbde9f5e9"
	I0318 13:10:57.522445    2404 command_runner.go:130] > Mar 18 13:09:50 multinode-894400 kubelet[1532]: I0318 13:09:50.219092    1532 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a9f21749669fe37a2d5bee9e7f45bf84eca6ae55870de8ac18bc774db7f81643"
	I0318 13:10:57.522445    2404 command_runner.go:130] > Mar 18 13:09:50 multinode-894400 kubelet[1532]: E0318 13:09:50.481085    1532 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0318 13:10:57.522445    2404 command_runner.go:130] > Mar 18 13:09:50 multinode-894400 kubelet[1532]: E0318 13:09:50.481271    1532 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1a018c55-846b-4dc2-992c-dc8fd82a6c67-config-volume podName:1a018c55-846b-4dc2-992c-dc8fd82a6c67 nodeName:}" failed. No retries permitted until 2024-03-18 13:09:52.48125246 +0000 UTC m=+9.792088696 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/1a018c55-846b-4dc2-992c-dc8fd82a6c67-config-volume") pod "coredns-5dd5756b68-456tm" (UID: "1a018c55-846b-4dc2-992c-dc8fd82a6c67") : object "kube-system"/"coredns" not registered
	I0318 13:10:57.522445    2404 command_runner.go:130] > Mar 18 13:09:50 multinode-894400 kubelet[1532]: E0318 13:09:50.581790    1532 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0318 13:10:57.522445    2404 command_runner.go:130] > Mar 18 13:09:50 multinode-894400 kubelet[1532]: E0318 13:09:50.581835    1532 projected.go:198] Error preparing data for projected volume kube-api-access-jwqqg for pod default/busybox-5b5d89c9d6-c2997: object "default"/"kube-root-ca.crt" not registered
	I0318 13:10:57.522445    2404 command_runner.go:130] > Mar 18 13:09:50 multinode-894400 kubelet[1532]: E0318 13:09:50.581885    1532 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/171cbfa4-4415-4169-b25d-ff5905fd513f-kube-api-access-jwqqg podName:171cbfa4-4415-4169-b25d-ff5905fd513f nodeName:}" failed. No retries permitted until 2024-03-18 13:09:52.5818703 +0000 UTC m=+9.892706536 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-jwqqg" (UniqueName: "kubernetes.io/projected/171cbfa4-4415-4169-b25d-ff5905fd513f-kube-api-access-jwqqg") pod "busybox-5b5d89c9d6-c2997" (UID: "171cbfa4-4415-4169-b25d-ff5905fd513f") : object "default"/"kube-root-ca.crt" not registered
	I0318 13:10:57.522445    2404 command_runner.go:130] > Mar 18 13:09:51 multinode-894400 kubelet[1532]: E0318 13:09:51.011273    1532 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-456tm" podUID="1a018c55-846b-4dc2-992c-dc8fd82a6c67"
	I0318 13:10:57.522445    2404 command_runner.go:130] > Mar 18 13:09:51 multinode-894400 kubelet[1532]: E0318 13:09:51.012015    1532 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-c2997" podUID="171cbfa4-4415-4169-b25d-ff5905fd513f"
	I0318 13:10:57.522445    2404 command_runner.go:130] > Mar 18 13:09:52 multinode-894400 kubelet[1532]: E0318 13:09:52.499973    1532 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0318 13:10:57.522445    2404 command_runner.go:130] > Mar 18 13:09:52 multinode-894400 kubelet[1532]: E0318 13:09:52.500149    1532 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1a018c55-846b-4dc2-992c-dc8fd82a6c67-config-volume podName:1a018c55-846b-4dc2-992c-dc8fd82a6c67 nodeName:}" failed. No retries permitted until 2024-03-18 13:09:56.500131973 +0000 UTC m=+13.810968209 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/1a018c55-846b-4dc2-992c-dc8fd82a6c67-config-volume") pod "coredns-5dd5756b68-456tm" (UID: "1a018c55-846b-4dc2-992c-dc8fd82a6c67") : object "kube-system"/"coredns" not registered
	I0318 13:10:57.522445    2404 command_runner.go:130] > Mar 18 13:09:52 multinode-894400 kubelet[1532]: E0318 13:09:52.601982    1532 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0318 13:10:57.522445    2404 command_runner.go:130] > Mar 18 13:09:52 multinode-894400 kubelet[1532]: E0318 13:09:52.602006    1532 projected.go:198] Error preparing data for projected volume kube-api-access-jwqqg for pod default/busybox-5b5d89c9d6-c2997: object "default"/"kube-root-ca.crt" not registered
	I0318 13:10:57.522445    2404 command_runner.go:130] > Mar 18 13:09:52 multinode-894400 kubelet[1532]: E0318 13:09:52.602087    1532 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/171cbfa4-4415-4169-b25d-ff5905fd513f-kube-api-access-jwqqg podName:171cbfa4-4415-4169-b25d-ff5905fd513f nodeName:}" failed. No retries permitted until 2024-03-18 13:09:56.602073317 +0000 UTC m=+13.912909553 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-jwqqg" (UniqueName: "kubernetes.io/projected/171cbfa4-4415-4169-b25d-ff5905fd513f-kube-api-access-jwqqg") pod "busybox-5b5d89c9d6-c2997" (UID: "171cbfa4-4415-4169-b25d-ff5905fd513f") : object "default"/"kube-root-ca.crt" not registered
	I0318 13:10:57.522445    2404 command_runner.go:130] > Mar 18 13:09:53 multinode-894400 kubelet[1532]: E0318 13:09:53.009672    1532 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-456tm" podUID="1a018c55-846b-4dc2-992c-dc8fd82a6c67"
	I0318 13:10:57.522445    2404 command_runner.go:130] > Mar 18 13:09:53 multinode-894400 kubelet[1532]: E0318 13:09:53.010317    1532 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-c2997" podUID="171cbfa4-4415-4169-b25d-ff5905fd513f"
	I0318 13:10:57.522445    2404 command_runner.go:130] > Mar 18 13:09:55 multinode-894400 kubelet[1532]: E0318 13:09:55.010917    1532 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-456tm" podUID="1a018c55-846b-4dc2-992c-dc8fd82a6c67"
	I0318 13:10:57.522445    2404 command_runner.go:130] > Mar 18 13:09:55 multinode-894400 kubelet[1532]: E0318 13:09:55.011786    1532 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-c2997" podUID="171cbfa4-4415-4169-b25d-ff5905fd513f"
	I0318 13:10:57.522445    2404 command_runner.go:130] > Mar 18 13:09:56 multinode-894400 kubelet[1532]: E0318 13:09:56.539408    1532 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0318 13:10:57.522445    2404 command_runner.go:130] > Mar 18 13:09:56 multinode-894400 kubelet[1532]: E0318 13:09:56.539534    1532 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1a018c55-846b-4dc2-992c-dc8fd82a6c67-config-volume podName:1a018c55-846b-4dc2-992c-dc8fd82a6c67 nodeName:}" failed. No retries permitted until 2024-03-18 13:10:04.539515204 +0000 UTC m=+21.850351440 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/1a018c55-846b-4dc2-992c-dc8fd82a6c67-config-volume") pod "coredns-5dd5756b68-456tm" (UID: "1a018c55-846b-4dc2-992c-dc8fd82a6c67") : object "kube-system"/"coredns" not registered
	I0318 13:10:57.522445    2404 command_runner.go:130] > Mar 18 13:09:56 multinode-894400 kubelet[1532]: E0318 13:09:56.639919    1532 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0318 13:10:57.522445    2404 command_runner.go:130] > Mar 18 13:09:56 multinode-894400 kubelet[1532]: E0318 13:09:56.639948    1532 projected.go:198] Error preparing data for projected volume kube-api-access-jwqqg for pod default/busybox-5b5d89c9d6-c2997: object "default"/"kube-root-ca.crt" not registered
	I0318 13:10:57.523419    2404 command_runner.go:130] > Mar 18 13:09:56 multinode-894400 kubelet[1532]: E0318 13:09:56.639998    1532 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/171cbfa4-4415-4169-b25d-ff5905fd513f-kube-api-access-jwqqg podName:171cbfa4-4415-4169-b25d-ff5905fd513f nodeName:}" failed. No retries permitted until 2024-03-18 13:10:04.639981843 +0000 UTC m=+21.950818079 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-jwqqg" (UniqueName: "kubernetes.io/projected/171cbfa4-4415-4169-b25d-ff5905fd513f-kube-api-access-jwqqg") pod "busybox-5b5d89c9d6-c2997" (UID: "171cbfa4-4415-4169-b25d-ff5905fd513f") : object "default"/"kube-root-ca.crt" not registered
	I0318 13:10:57.523419    2404 command_runner.go:130] > Mar 18 13:09:57 multinode-894400 kubelet[1532]: E0318 13:09:57.009521    1532 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-456tm" podUID="1a018c55-846b-4dc2-992c-dc8fd82a6c67"
	I0318 13:10:57.523419    2404 command_runner.go:130] > Mar 18 13:09:57 multinode-894400 kubelet[1532]: E0318 13:09:57.010257    1532 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-c2997" podUID="171cbfa4-4415-4169-b25d-ff5905fd513f"
	I0318 13:10:57.523419    2404 command_runner.go:130] > Mar 18 13:09:59 multinode-894400 kubelet[1532]: E0318 13:09:59.011021    1532 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-456tm" podUID="1a018c55-846b-4dc2-992c-dc8fd82a6c67"
	I0318 13:10:57.523419    2404 command_runner.go:130] > Mar 18 13:09:59 multinode-894400 kubelet[1532]: E0318 13:09:59.011361    1532 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-c2997" podUID="171cbfa4-4415-4169-b25d-ff5905fd513f"
	I0318 13:10:57.523419    2404 command_runner.go:130] > Mar 18 13:10:01 multinode-894400 kubelet[1532]: E0318 13:10:01.009167    1532 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-456tm" podUID="1a018c55-846b-4dc2-992c-dc8fd82a6c67"
	I0318 13:10:57.523419    2404 command_runner.go:130] > Mar 18 13:10:01 multinode-894400 kubelet[1532]: E0318 13:10:01.009678    1532 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-c2997" podUID="171cbfa4-4415-4169-b25d-ff5905fd513f"
	I0318 13:10:57.523419    2404 command_runner.go:130] > Mar 18 13:10:03 multinode-894400 kubelet[1532]: E0318 13:10:03.010168    1532 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-456tm" podUID="1a018c55-846b-4dc2-992c-dc8fd82a6c67"
	I0318 13:10:57.523419    2404 command_runner.go:130] > Mar 18 13:10:03 multinode-894400 kubelet[1532]: E0318 13:10:03.011736    1532 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-c2997" podUID="171cbfa4-4415-4169-b25d-ff5905fd513f"
	I0318 13:10:57.523419    2404 command_runner.go:130] > Mar 18 13:10:04 multinode-894400 kubelet[1532]: E0318 13:10:04.603257    1532 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0318 13:10:57.523419    2404 command_runner.go:130] > Mar 18 13:10:04 multinode-894400 kubelet[1532]: E0318 13:10:04.603387    1532 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1a018c55-846b-4dc2-992c-dc8fd82a6c67-config-volume podName:1a018c55-846b-4dc2-992c-dc8fd82a6c67 nodeName:}" failed. No retries permitted until 2024-03-18 13:10:20.60337037 +0000 UTC m=+37.914206606 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/1a018c55-846b-4dc2-992c-dc8fd82a6c67-config-volume") pod "coredns-5dd5756b68-456tm" (UID: "1a018c55-846b-4dc2-992c-dc8fd82a6c67") : object "kube-system"/"coredns" not registered
	I0318 13:10:57.523419    2404 command_runner.go:130] > Mar 18 13:10:04 multinode-894400 kubelet[1532]: E0318 13:10:04.704132    1532 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0318 13:10:57.523419    2404 command_runner.go:130] > Mar 18 13:10:04 multinode-894400 kubelet[1532]: E0318 13:10:04.704169    1532 projected.go:198] Error preparing data for projected volume kube-api-access-jwqqg for pod default/busybox-5b5d89c9d6-c2997: object "default"/"kube-root-ca.crt" not registered
	I0318 13:10:57.523419    2404 command_runner.go:130] > Mar 18 13:10:04 multinode-894400 kubelet[1532]: E0318 13:10:04.704219    1532 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/171cbfa4-4415-4169-b25d-ff5905fd513f-kube-api-access-jwqqg podName:171cbfa4-4415-4169-b25d-ff5905fd513f nodeName:}" failed. No retries permitted until 2024-03-18 13:10:20.704204798 +0000 UTC m=+38.015041034 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-jwqqg" (UniqueName: "kubernetes.io/projected/171cbfa4-4415-4169-b25d-ff5905fd513f-kube-api-access-jwqqg") pod "busybox-5b5d89c9d6-c2997" (UID: "171cbfa4-4415-4169-b25d-ff5905fd513f") : object "default"/"kube-root-ca.crt" not registered
	I0318 13:10:57.523419    2404 command_runner.go:130] > Mar 18 13:10:05 multinode-894400 kubelet[1532]: E0318 13:10:05.009461    1532 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-456tm" podUID="1a018c55-846b-4dc2-992c-dc8fd82a6c67"
	I0318 13:10:57.523419    2404 command_runner.go:130] > Mar 18 13:10:05 multinode-894400 kubelet[1532]: E0318 13:10:05.010204    1532 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-c2997" podUID="171cbfa4-4415-4169-b25d-ff5905fd513f"
	I0318 13:10:57.523419    2404 command_runner.go:130] > Mar 18 13:10:07 multinode-894400 kubelet[1532]: E0318 13:10:07.009925    1532 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-456tm" podUID="1a018c55-846b-4dc2-992c-dc8fd82a6c67"
	I0318 13:10:57.523419    2404 command_runner.go:130] > Mar 18 13:10:07 multinode-894400 kubelet[1532]: E0318 13:10:07.010942    1532 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-c2997" podUID="171cbfa4-4415-4169-b25d-ff5905fd513f"
	I0318 13:10:57.523419    2404 command_runner.go:130] > Mar 18 13:10:09 multinode-894400 kubelet[1532]: E0318 13:10:09.010506    1532 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-456tm" podUID="1a018c55-846b-4dc2-992c-dc8fd82a6c67"
	I0318 13:10:57.523419    2404 command_runner.go:130] > Mar 18 13:10:09 multinode-894400 kubelet[1532]: E0318 13:10:09.011883    1532 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-c2997" podUID="171cbfa4-4415-4169-b25d-ff5905fd513f"
	I0318 13:10:57.523419    2404 command_runner.go:130] > Mar 18 13:10:11 multinode-894400 kubelet[1532]: E0318 13:10:11.009145    1532 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-c2997" podUID="171cbfa4-4415-4169-b25d-ff5905fd513f"
	I0318 13:10:57.523419    2404 command_runner.go:130] > Mar 18 13:10:11 multinode-894400 kubelet[1532]: E0318 13:10:11.011730    1532 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-456tm" podUID="1a018c55-846b-4dc2-992c-dc8fd82a6c67"
	I0318 13:10:57.523419    2404 command_runner.go:130] > Mar 18 13:10:13 multinode-894400 kubelet[1532]: E0318 13:10:13.010103    1532 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-456tm" podUID="1a018c55-846b-4dc2-992c-dc8fd82a6c67"
	I0318 13:10:57.524415    2404 command_runner.go:130] > Mar 18 13:10:13 multinode-894400 kubelet[1532]: E0318 13:10:13.010921    1532 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-c2997" podUID="171cbfa4-4415-4169-b25d-ff5905fd513f"
	I0318 13:10:57.524415    2404 command_runner.go:130] > Mar 18 13:10:15 multinode-894400 kubelet[1532]: E0318 13:10:15.009361    1532 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-456tm" podUID="1a018c55-846b-4dc2-992c-dc8fd82a6c67"
	I0318 13:10:57.524415    2404 command_runner.go:130] > Mar 18 13:10:15 multinode-894400 kubelet[1532]: E0318 13:10:15.010565    1532 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-c2997" podUID="171cbfa4-4415-4169-b25d-ff5905fd513f"
	I0318 13:10:57.524415    2404 command_runner.go:130] > Mar 18 13:10:17 multinode-894400 kubelet[1532]: E0318 13:10:17.009688    1532 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-456tm" podUID="1a018c55-846b-4dc2-992c-dc8fd82a6c67"
	I0318 13:10:57.524415    2404 command_runner.go:130] > Mar 18 13:10:17 multinode-894400 kubelet[1532]: E0318 13:10:17.010200    1532 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-c2997" podUID="171cbfa4-4415-4169-b25d-ff5905fd513f"
	I0318 13:10:57.524415    2404 command_runner.go:130] > Mar 18 13:10:19 multinode-894400 kubelet[1532]: E0318 13:10:19.010187    1532 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-c2997" podUID="171cbfa4-4415-4169-b25d-ff5905fd513f"
	I0318 13:10:57.524415    2404 command_runner.go:130] > Mar 18 13:10:19 multinode-894400 kubelet[1532]: E0318 13:10:19.010730    1532 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-456tm" podUID="1a018c55-846b-4dc2-992c-dc8fd82a6c67"
	I0318 13:10:57.524415    2404 command_runner.go:130] > Mar 18 13:10:20 multinode-894400 kubelet[1532]: E0318 13:10:20.639546    1532 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0318 13:10:57.524415    2404 command_runner.go:130] > Mar 18 13:10:20 multinode-894400 kubelet[1532]: E0318 13:10:20.639747    1532 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1a018c55-846b-4dc2-992c-dc8fd82a6c67-config-volume podName:1a018c55-846b-4dc2-992c-dc8fd82a6c67 nodeName:}" failed. No retries permitted until 2024-03-18 13:10:52.639723825 +0000 UTC m=+69.950560161 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/1a018c55-846b-4dc2-992c-dc8fd82a6c67-config-volume") pod "coredns-5dd5756b68-456tm" (UID: "1a018c55-846b-4dc2-992c-dc8fd82a6c67") : object "kube-system"/"coredns" not registered
	I0318 13:10:57.524415    2404 command_runner.go:130] > Mar 18 13:10:20 multinode-894400 kubelet[1532]: E0318 13:10:20.740353    1532 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0318 13:10:57.524415    2404 command_runner.go:130] > Mar 18 13:10:20 multinode-894400 kubelet[1532]: E0318 13:10:20.740517    1532 projected.go:198] Error preparing data for projected volume kube-api-access-jwqqg for pod default/busybox-5b5d89c9d6-c2997: object "default"/"kube-root-ca.crt" not registered
	I0318 13:10:57.524415    2404 command_runner.go:130] > Mar 18 13:10:20 multinode-894400 kubelet[1532]: E0318 13:10:20.740585    1532 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/171cbfa4-4415-4169-b25d-ff5905fd513f-kube-api-access-jwqqg podName:171cbfa4-4415-4169-b25d-ff5905fd513f nodeName:}" failed. No retries permitted until 2024-03-18 13:10:52.740566824 +0000 UTC m=+70.051403160 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-jwqqg" (UniqueName: "kubernetes.io/projected/171cbfa4-4415-4169-b25d-ff5905fd513f-kube-api-access-jwqqg") pod "busybox-5b5d89c9d6-c2997" (UID: "171cbfa4-4415-4169-b25d-ff5905fd513f") : object "default"/"kube-root-ca.crt" not registered
	I0318 13:10:57.524415    2404 command_runner.go:130] > Mar 18 13:10:21 multinode-894400 kubelet[1532]: E0318 13:10:21.010015    1532 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-456tm" podUID="1a018c55-846b-4dc2-992c-dc8fd82a6c67"
	I0318 13:10:57.524415    2404 command_runner.go:130] > Mar 18 13:10:21 multinode-894400 kubelet[1532]: E0318 13:10:21.011108    1532 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-c2997" podUID="171cbfa4-4415-4169-b25d-ff5905fd513f"
	I0318 13:10:57.524415    2404 command_runner.go:130] > Mar 18 13:10:21 multinode-894400 kubelet[1532]: I0318 13:10:21.647969    1532 scope.go:117] "RemoveContainer" containerID="a2c499223090cc38a7b425469621fb6c8dbc443ab7eb0d5841f1fdcea2922366"
	I0318 13:10:57.524415    2404 command_runner.go:130] > Mar 18 13:10:21 multinode-894400 kubelet[1532]: I0318 13:10:21.651387    1532 scope.go:117] "RemoveContainer" containerID="46c0cf90d385f0a29de6e795bc03f2d1837ea1819ef2389992a26aaf77e63264"
	I0318 13:10:57.524415    2404 command_runner.go:130] > Mar 18 13:10:21 multinode-894400 kubelet[1532]: E0318 13:10:21.652104    1532 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(219bafbc-d807-44cf-9927-e4957f36ad70)\"" pod="kube-system/storage-provisioner" podUID="219bafbc-d807-44cf-9927-e4957f36ad70"
	I0318 13:10:57.524415    2404 command_runner.go:130] > Mar 18 13:10:23 multinode-894400 kubelet[1532]: E0318 13:10:23.010116    1532 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-456tm" podUID="1a018c55-846b-4dc2-992c-dc8fd82a6c67"
	I0318 13:10:57.524415    2404 command_runner.go:130] > Mar 18 13:10:23 multinode-894400 kubelet[1532]: E0318 13:10:23.010816    1532 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-c2997" podUID="171cbfa4-4415-4169-b25d-ff5905fd513f"
	I0318 13:10:57.524415    2404 command_runner.go:130] > Mar 18 13:10:23 multinode-894400 kubelet[1532]: I0318 13:10:23.777913    1532 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	I0318 13:10:57.524415    2404 command_runner.go:130] > Mar 18 13:10:35 multinode-894400 kubelet[1532]: I0318 13:10:35.009532    1532 scope.go:117] "RemoveContainer" containerID="46c0cf90d385f0a29de6e795bc03f2d1837ea1819ef2389992a26aaf77e63264"
	I0318 13:10:57.524415    2404 command_runner.go:130] > Mar 18 13:10:43 multinode-894400 kubelet[1532]: I0318 13:10:43.012571    1532 scope.go:117] "RemoveContainer" containerID="56d1819beb10ed198593d8a369f601faf82bf81ff1aecdbffe7114cd1265351b"
	I0318 13:10:57.524415    2404 command_runner.go:130] > Mar 18 13:10:43 multinode-894400 kubelet[1532]: E0318 13:10:43.030354    1532 iptables.go:575] "Could not set up iptables canary" err=<
	I0318 13:10:57.524415    2404 command_runner.go:130] > Mar 18 13:10:43 multinode-894400 kubelet[1532]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	I0318 13:10:57.524415    2404 command_runner.go:130] > Mar 18 13:10:43 multinode-894400 kubelet[1532]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	I0318 13:10:57.524415    2404 command_runner.go:130] > Mar 18 13:10:43 multinode-894400 kubelet[1532]:         Perhaps ip6tables or your kernel needs to be upgraded.
	I0318 13:10:57.524415    2404 command_runner.go:130] > Mar 18 13:10:43 multinode-894400 kubelet[1532]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	I0318 13:10:57.524415    2404 command_runner.go:130] > Mar 18 13:10:43 multinode-894400 kubelet[1532]: I0318 13:10:43.056417    1532 scope.go:117] "RemoveContainer" containerID="c51f768a2f642fdffc6de67f101be5abd8bbaec83ef13011b47efab5aad27134"
	I0318 13:10:57.561419    2404 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:10:57.561419    2404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 13:10:57.762906    2404 command_runner.go:130] > Name:               multinode-894400
	I0318 13:10:57.762906    2404 command_runner.go:130] > Roles:              control-plane
	I0318 13:10:57.762906    2404 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0318 13:10:57.762906    2404 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0318 13:10:57.762906    2404 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0318 13:10:57.762906    2404 command_runner.go:130] >                     kubernetes.io/hostname=multinode-894400
	I0318 13:10:57.762906    2404 command_runner.go:130] >                     kubernetes.io/os=linux
	I0318 13:10:57.762906    2404 command_runner.go:130] >                     minikube.k8s.io/commit=16bdcbec856cf730004e5bed78d1b7625f13388a
	I0318 13:10:57.762906    2404 command_runner.go:130] >                     minikube.k8s.io/name=multinode-894400
	I0318 13:10:57.762906    2404 command_runner.go:130] >                     minikube.k8s.io/primary=true
	I0318 13:10:57.762906    2404 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_03_18T12_47_29_0700
	I0318 13:10:57.762906    2404 command_runner.go:130] >                     minikube.k8s.io/version=v1.32.0
	I0318 13:10:57.762906    2404 command_runner.go:130] >                     node-role.kubernetes.io/control-plane=
	I0318 13:10:57.762906    2404 command_runner.go:130] >                     node.kubernetes.io/exclude-from-external-load-balancers=
	I0318 13:10:57.762906    2404 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0318 13:10:57.762906    2404 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0318 13:10:57.762906    2404 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0318 13:10:57.762906    2404 command_runner.go:130] > CreationTimestamp:  Mon, 18 Mar 2024 12:47:24 +0000
	I0318 13:10:57.762906    2404 command_runner.go:130] > Taints:             <none>
	I0318 13:10:57.762906    2404 command_runner.go:130] > Unschedulable:      false
	I0318 13:10:57.762906    2404 command_runner.go:130] > Lease:
	I0318 13:10:57.762906    2404 command_runner.go:130] >   HolderIdentity:  multinode-894400
	I0318 13:10:57.762906    2404 command_runner.go:130] >   AcquireTime:     <unset>
	I0318 13:10:57.762906    2404 command_runner.go:130] >   RenewTime:       Mon, 18 Mar 2024 13:10:49 +0000
	I0318 13:10:57.762906    2404 command_runner.go:130] > Conditions:
	I0318 13:10:57.762906    2404 command_runner.go:130] >   Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	I0318 13:10:57.762906    2404 command_runner.go:130] >   ----             ------  -----------------                 ------------------                ------                       -------
	I0318 13:10:57.762906    2404 command_runner.go:130] >   MemoryPressure   False   Mon, 18 Mar 2024 13:10:23 +0000   Mon, 18 Mar 2024 12:47:23 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	I0318 13:10:57.762906    2404 command_runner.go:130] >   DiskPressure     False   Mon, 18 Mar 2024 13:10:23 +0000   Mon, 18 Mar 2024 12:47:23 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	I0318 13:10:57.762906    2404 command_runner.go:130] >   PIDPressure      False   Mon, 18 Mar 2024 13:10:23 +0000   Mon, 18 Mar 2024 12:47:23 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	I0318 13:10:57.762906    2404 command_runner.go:130] >   Ready            True    Mon, 18 Mar 2024 13:10:23 +0000   Mon, 18 Mar 2024 13:10:23 +0000   KubeletReady                 kubelet is posting ready status
	I0318 13:10:57.762906    2404 command_runner.go:130] > Addresses:
	I0318 13:10:57.762906    2404 command_runner.go:130] >   InternalIP:  172.30.130.156
	I0318 13:10:57.762906    2404 command_runner.go:130] >   Hostname:    multinode-894400
	I0318 13:10:57.762906    2404 command_runner.go:130] > Capacity:
	I0318 13:10:57.762906    2404 command_runner.go:130] >   cpu:                2
	I0318 13:10:57.762906    2404 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0318 13:10:57.762906    2404 command_runner.go:130] >   hugepages-2Mi:      0
	I0318 13:10:57.762906    2404 command_runner.go:130] >   memory:             2164268Ki
	I0318 13:10:57.762906    2404 command_runner.go:130] >   pods:               110
	I0318 13:10:57.762906    2404 command_runner.go:130] > Allocatable:
	I0318 13:10:57.762906    2404 command_runner.go:130] >   cpu:                2
	I0318 13:10:57.762906    2404 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0318 13:10:57.762906    2404 command_runner.go:130] >   hugepages-2Mi:      0
	I0318 13:10:57.762906    2404 command_runner.go:130] >   memory:             2164268Ki
	I0318 13:10:57.762906    2404 command_runner.go:130] >   pods:               110
	I0318 13:10:57.762906    2404 command_runner.go:130] > System Info:
	I0318 13:10:57.762906    2404 command_runner.go:130] >   Machine ID:                 80e7b822d2e94d26a09acd4a1bac452b
	I0318 13:10:57.762906    2404 command_runner.go:130] >   System UUID:                5c78c013-e4e8-1041-99c8-95cd760ef34f
	I0318 13:10:57.762906    2404 command_runner.go:130] >   Boot ID:                    a334ae39-1c10-417c-93ad-d28546d7793f
	I0318 13:10:57.763924    2404 command_runner.go:130] >   Kernel Version:             5.10.207
	I0318 13:10:57.763924    2404 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0318 13:10:57.763924    2404 command_runner.go:130] >   Operating System:           linux
	I0318 13:10:57.763924    2404 command_runner.go:130] >   Architecture:               amd64
	I0318 13:10:57.763999    2404 command_runner.go:130] >   Container Runtime Version:  docker://25.0.4
	I0318 13:10:57.763999    2404 command_runner.go:130] >   Kubelet Version:            v1.28.4
	I0318 13:10:57.763999    2404 command_runner.go:130] >   Kube-Proxy Version:         v1.28.4
	I0318 13:10:57.763999    2404 command_runner.go:130] > PodCIDR:                      10.244.0.0/24
	I0318 13:10:57.763999    2404 command_runner.go:130] > PodCIDRs:                     10.244.0.0/24
	I0318 13:10:57.763999    2404 command_runner.go:130] > Non-terminated Pods:          (9 in total)
	I0318 13:10:57.763999    2404 command_runner.go:130] >   Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0318 13:10:57.763999    2404 command_runner.go:130] >   ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	I0318 13:10:57.764087    2404 command_runner.go:130] >   default                     busybox-5b5d89c9d6-c2997                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	I0318 13:10:57.764087    2404 command_runner.go:130] >   kube-system                 coredns-5dd5756b68-456tm                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     23m
	I0318 13:10:57.764087    2404 command_runner.go:130] >   kube-system                 etcd-multinode-894400                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         68s
	I0318 13:10:57.764144    2404 command_runner.go:130] >   kube-system                 kindnet-hhsxh                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      23m
	I0318 13:10:57.764144    2404 command_runner.go:130] >   kube-system                 kube-apiserver-multinode-894400             250m (12%)    0 (0%)      0 (0%)           0 (0%)         68s
	I0318 13:10:57.764196    2404 command_runner.go:130] >   kube-system                 kube-controller-manager-multinode-894400    200m (10%)    0 (0%)      0 (0%)           0 (0%)         23m
	I0318 13:10:57.764196    2404 command_runner.go:130] >   kube-system                 kube-proxy-mc5tv                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	I0318 13:10:57.764196    2404 command_runner.go:130] >   kube-system                 kube-scheduler-multinode-894400             100m (5%)     0 (0%)      0 (0%)           0 (0%)         23m
	I0318 13:10:57.764248    2404 command_runner.go:130] >   kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	I0318 13:10:57.764248    2404 command_runner.go:130] > Allocated resources:
	I0318 13:10:57.764248    2404 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0318 13:10:57.764248    2404 command_runner.go:130] >   Resource           Requests     Limits
	I0318 13:10:57.764248    2404 command_runner.go:130] >   --------           --------     ------
	I0318 13:10:57.764248    2404 command_runner.go:130] >   cpu                850m (42%!)(MISSING)   100m (5%!)(MISSING)
	I0318 13:10:57.764301    2404 command_runner.go:130] >   memory             220Mi (10%!)(MISSING)  220Mi (10%!)(MISSING)
	I0318 13:10:57.764301    2404 command_runner.go:130] >   ephemeral-storage  0 (0%!)(MISSING)       0 (0%!)(MISSING)
	I0318 13:10:57.764301    2404 command_runner.go:130] >   hugepages-2Mi      0 (0%!)(MISSING)       0 (0%!)(MISSING)
	I0318 13:10:57.764301    2404 command_runner.go:130] > Events:
	I0318 13:10:57.764301    2404 command_runner.go:130] >   Type    Reason                   Age                From             Message
	I0318 13:10:57.764354    2404 command_runner.go:130] >   ----    ------                   ----               ----             -------
	I0318 13:10:57.764354    2404 command_runner.go:130] >   Normal  Starting                 23m                kube-proxy       
	I0318 13:10:57.764402    2404 command_runner.go:130] >   Normal  Starting                 66s                kube-proxy       
	I0318 13:10:57.764402    2404 command_runner.go:130] >   Normal  NodeHasSufficientMemory  23m (x8 over 23m)  kubelet          Node multinode-894400 status is now: NodeHasSufficientMemory
	I0318 13:10:57.764402    2404 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    23m (x8 over 23m)  kubelet          Node multinode-894400 status is now: NodeHasNoDiskPressure
	I0318 13:10:57.764453    2404 command_runner.go:130] >   Normal  NodeHasSufficientPID     23m (x7 over 23m)  kubelet          Node multinode-894400 status is now: NodeHasSufficientPID
	I0318 13:10:57.764453    2404 command_runner.go:130] >   Normal  NodeAllocatableEnforced  23m                kubelet          Updated Node Allocatable limit across pods
	I0318 13:10:57.764453    2404 command_runner.go:130] >   Normal  NodeHasSufficientMemory  23m                kubelet          Node multinode-894400 status is now: NodeHasSufficientMemory
	I0318 13:10:57.764453    2404 command_runner.go:130] >   Normal  NodeAllocatableEnforced  23m                kubelet          Updated Node Allocatable limit across pods
	I0318 13:10:57.764520    2404 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    23m                kubelet          Node multinode-894400 status is now: NodeHasNoDiskPressure
	I0318 13:10:57.764520    2404 command_runner.go:130] >   Normal  NodeHasSufficientPID     23m                kubelet          Node multinode-894400 status is now: NodeHasSufficientPID
	I0318 13:10:57.764567    2404 command_runner.go:130] >   Normal  Starting                 23m                kubelet          Starting kubelet.
	I0318 13:10:57.764567    2404 command_runner.go:130] >   Normal  RegisteredNode           23m                node-controller  Node multinode-894400 event: Registered Node multinode-894400 in Controller
	I0318 13:10:57.764609    2404 command_runner.go:130] >   Normal  NodeReady                23m                kubelet          Node multinode-894400 status is now: NodeReady
	I0318 13:10:57.764609    2404 command_runner.go:130] >   Normal  Starting                 75s                kubelet          Starting kubelet.
	I0318 13:10:57.764609    2404 command_runner.go:130] >   Normal  NodeHasSufficientMemory  74s (x8 over 75s)  kubelet          Node multinode-894400 status is now: NodeHasSufficientMemory
	I0318 13:10:57.764657    2404 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    74s (x8 over 75s)  kubelet          Node multinode-894400 status is now: NodeHasNoDiskPressure
	I0318 13:10:57.764657    2404 command_runner.go:130] >   Normal  NodeHasSufficientPID     74s (x7 over 75s)  kubelet          Node multinode-894400 status is now: NodeHasSufficientPID
	I0318 13:10:57.764657    2404 command_runner.go:130] >   Normal  NodeAllocatableEnforced  74s                kubelet          Updated Node Allocatable limit across pods
	I0318 13:10:57.764704    2404 command_runner.go:130] >   Normal  RegisteredNode           56s                node-controller  Node multinode-894400 event: Registered Node multinode-894400 in Controller
	I0318 13:10:57.764704    2404 command_runner.go:130] > Name:               multinode-894400-m02
	I0318 13:10:57.764704    2404 command_runner.go:130] > Roles:              <none>
	I0318 13:10:57.764704    2404 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0318 13:10:57.764704    2404 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0318 13:10:57.764704    2404 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0318 13:10:57.764704    2404 command_runner.go:130] >                     kubernetes.io/hostname=multinode-894400-m02
	I0318 13:10:57.764704    2404 command_runner.go:130] >                     kubernetes.io/os=linux
	I0318 13:10:57.764704    2404 command_runner.go:130] >                     minikube.k8s.io/commit=16bdcbec856cf730004e5bed78d1b7625f13388a
	I0318 13:10:57.764704    2404 command_runner.go:130] >                     minikube.k8s.io/name=multinode-894400
	I0318 13:10:57.764704    2404 command_runner.go:130] >                     minikube.k8s.io/primary=false
	I0318 13:10:57.764704    2404 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_03_18T12_50_35_0700
	I0318 13:10:57.764704    2404 command_runner.go:130] >                     minikube.k8s.io/version=v1.32.0
	I0318 13:10:57.764704    2404 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0318 13:10:57.764704    2404 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0318 13:10:57.764704    2404 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0318 13:10:57.764704    2404 command_runner.go:130] > CreationTimestamp:  Mon, 18 Mar 2024 12:50:34 +0000
	I0318 13:10:57.764704    2404 command_runner.go:130] > Taints:             node.kubernetes.io/unreachable:NoExecute
	I0318 13:10:57.764704    2404 command_runner.go:130] >                     node.kubernetes.io/unreachable:NoSchedule
	I0318 13:10:57.764704    2404 command_runner.go:130] > Unschedulable:      false
	I0318 13:10:57.764704    2404 command_runner.go:130] > Lease:
	I0318 13:10:57.764704    2404 command_runner.go:130] >   HolderIdentity:  multinode-894400-m02
	I0318 13:10:57.764704    2404 command_runner.go:130] >   AcquireTime:     <unset>
	I0318 13:10:57.764704    2404 command_runner.go:130] >   RenewTime:       Mon, 18 Mar 2024 13:06:44 +0000
	I0318 13:10:57.764704    2404 command_runner.go:130] > Conditions:
	I0318 13:10:57.764704    2404 command_runner.go:130] >   Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	I0318 13:10:57.764704    2404 command_runner.go:130] >   ----             ------    -----------------                 ------------------                ------              -------
	I0318 13:10:57.764704    2404 command_runner.go:130] >   MemoryPressure   Unknown   Mon, 18 Mar 2024 13:01:49 +0000   Mon, 18 Mar 2024 13:10:41 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0318 13:10:57.764704    2404 command_runner.go:130] >   DiskPressure     Unknown   Mon, 18 Mar 2024 13:01:49 +0000   Mon, 18 Mar 2024 13:10:41 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0318 13:10:57.764704    2404 command_runner.go:130] >   PIDPressure      Unknown   Mon, 18 Mar 2024 13:01:49 +0000   Mon, 18 Mar 2024 13:10:41 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0318 13:10:57.764704    2404 command_runner.go:130] >   Ready            Unknown   Mon, 18 Mar 2024 13:01:49 +0000   Mon, 18 Mar 2024 13:10:41 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0318 13:10:57.764704    2404 command_runner.go:130] > Addresses:
	I0318 13:10:57.764704    2404 command_runner.go:130] >   InternalIP:  172.30.140.66
	I0318 13:10:57.764704    2404 command_runner.go:130] >   Hostname:    multinode-894400-m02
	I0318 13:10:57.764704    2404 command_runner.go:130] > Capacity:
	I0318 13:10:57.764704    2404 command_runner.go:130] >   cpu:                2
	I0318 13:10:57.764704    2404 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0318 13:10:57.764704    2404 command_runner.go:130] >   hugepages-2Mi:      0
	I0318 13:10:57.764704    2404 command_runner.go:130] >   memory:             2164268Ki
	I0318 13:10:57.764704    2404 command_runner.go:130] >   pods:               110
	I0318 13:10:57.764704    2404 command_runner.go:130] > Allocatable:
	I0318 13:10:57.765246    2404 command_runner.go:130] >   cpu:                2
	I0318 13:10:57.765246    2404 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0318 13:10:57.765246    2404 command_runner.go:130] >   hugepages-2Mi:      0
	I0318 13:10:57.765246    2404 command_runner.go:130] >   memory:             2164268Ki
	I0318 13:10:57.765246    2404 command_runner.go:130] >   pods:               110
	I0318 13:10:57.765246    2404 command_runner.go:130] > System Info:
	I0318 13:10:57.765246    2404 command_runner.go:130] >   Machine ID:                 209753fe156d43e08ee40e815598ed17
	I0318 13:10:57.765246    2404 command_runner.go:130] >   System UUID:                fa19d46a-a3a2-9249-8c21-1edbfcedff01
	I0318 13:10:57.765246    2404 command_runner.go:130] >   Boot ID:                    0e15b7cf-29d6-40f7-ad78-fb04b10bea99
	I0318 13:10:57.765246    2404 command_runner.go:130] >   Kernel Version:             5.10.207
	I0318 13:10:57.765246    2404 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0318 13:10:57.765246    2404 command_runner.go:130] >   Operating System:           linux
	I0318 13:10:57.765246    2404 command_runner.go:130] >   Architecture:               amd64
	I0318 13:10:57.765246    2404 command_runner.go:130] >   Container Runtime Version:  docker://25.0.4
	I0318 13:10:57.765246    2404 command_runner.go:130] >   Kubelet Version:            v1.28.4
	I0318 13:10:57.765246    2404 command_runner.go:130] >   Kube-Proxy Version:         v1.28.4
	I0318 13:10:57.765246    2404 command_runner.go:130] > PodCIDR:                      10.244.1.0/24
	I0318 13:10:57.765451    2404 command_runner.go:130] > PodCIDRs:                     10.244.1.0/24
	I0318 13:10:57.765451    2404 command_runner.go:130] > Non-terminated Pods:          (3 in total)
	I0318 13:10:57.765451    2404 command_runner.go:130] >   Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0318 13:10:57.765451    2404 command_runner.go:130] >   ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	I0318 13:10:57.765451    2404 command_runner.go:130] >   default                     busybox-5b5d89c9d6-8btgf    0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	I0318 13:10:57.765451    2404 command_runner.go:130] >   kube-system                 kindnet-k5lpg               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      20m
	I0318 13:10:57.765451    2404 command_runner.go:130] >   kube-system                 kube-proxy-8bdmn            0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	I0318 13:10:57.765451    2404 command_runner.go:130] > Allocated resources:
	I0318 13:10:57.765451    2404 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0318 13:10:57.765451    2404 command_runner.go:130] >   Resource           Requests   Limits
	I0318 13:10:57.765551    2404 command_runner.go:130] >   --------           --------   ------
	I0318 13:10:57.765551    2404 command_runner.go:130] >   cpu                100m (5%)  100m (5%)
	I0318 13:10:57.765551    2404 command_runner.go:130] >   memory             50Mi (2%)  50Mi (2%)
	I0318 13:10:57.765551    2404 command_runner.go:130] >   ephemeral-storage  0 (0%)     0 (0%)
	I0318 13:10:57.765551    2404 command_runner.go:130] >   hugepages-2Mi      0 (0%)     0 (0%)
	I0318 13:10:57.765551    2404 command_runner.go:130] > Events:
	I0318 13:10:57.765551    2404 command_runner.go:130] >   Type    Reason                   Age                From             Message
	I0318 13:10:57.765551    2404 command_runner.go:130] >   ----    ------                   ----               ----             -------
	I0318 13:10:57.765551    2404 command_runner.go:130] >   Normal  Starting                 20m                kube-proxy       
	I0318 13:10:57.765551    2404 command_runner.go:130] >   Normal  NodeHasSufficientMemory  20m (x5 over 20m)  kubelet          Node multinode-894400-m02 status is now: NodeHasSufficientMemory
	I0318 13:10:57.765551    2404 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    20m (x5 over 20m)  kubelet          Node multinode-894400-m02 status is now: NodeHasNoDiskPressure
	I0318 13:10:57.765645    2404 command_runner.go:130] >   Normal  NodeHasSufficientPID     20m (x5 over 20m)  kubelet          Node multinode-894400-m02 status is now: NodeHasSufficientPID
	I0318 13:10:57.765645    2404 command_runner.go:130] >   Normal  RegisteredNode           20m                node-controller  Node multinode-894400-m02 event: Registered Node multinode-894400-m02 in Controller
	I0318 13:10:57.765645    2404 command_runner.go:130] >   Normal  NodeReady                20m                kubelet          Node multinode-894400-m02 status is now: NodeReady
	I0318 13:10:57.765645    2404 command_runner.go:130] >   Normal  RegisteredNode           56s                node-controller  Node multinode-894400-m02 event: Registered Node multinode-894400-m02 in Controller
	I0318 13:10:57.765645    2404 command_runner.go:130] >   Normal  NodeNotReady             16s                node-controller  Node multinode-894400-m02 status is now: NodeNotReady
	I0318 13:10:57.765645    2404 command_runner.go:130] > Name:               multinode-894400-m03
	I0318 13:10:57.765645    2404 command_runner.go:130] > Roles:              <none>
	I0318 13:10:57.765645    2404 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0318 13:10:57.765645    2404 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0318 13:10:57.765729    2404 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0318 13:10:57.765729    2404 command_runner.go:130] >                     kubernetes.io/hostname=multinode-894400-m03
	I0318 13:10:57.765729    2404 command_runner.go:130] >                     kubernetes.io/os=linux
	I0318 13:10:57.765729    2404 command_runner.go:130] >                     minikube.k8s.io/commit=16bdcbec856cf730004e5bed78d1b7625f13388a
	I0318 13:10:57.765867    2404 command_runner.go:130] >                     minikube.k8s.io/name=multinode-894400
	I0318 13:10:57.765885    2404 command_runner.go:130] >                     minikube.k8s.io/primary=false
	I0318 13:10:57.765885    2404 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_03_18T13_05_26_0700
	I0318 13:10:57.765885    2404 command_runner.go:130] >                     minikube.k8s.io/version=v1.32.0
	I0318 13:10:57.765885    2404 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0318 13:10:57.765885    2404 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0318 13:10:57.765885    2404 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0318 13:10:57.765885    2404 command_runner.go:130] > CreationTimestamp:  Mon, 18 Mar 2024 13:05:25 +0000
	I0318 13:10:57.765885    2404 command_runner.go:130] > Taints:             node.kubernetes.io/unreachable:NoExecute
	I0318 13:10:57.765885    2404 command_runner.go:130] >                     node.kubernetes.io/unreachable:NoSchedule
	I0318 13:10:57.765885    2404 command_runner.go:130] > Unschedulable:      false
	I0318 13:10:57.765885    2404 command_runner.go:130] > Lease:
	I0318 13:10:57.765885    2404 command_runner.go:130] >   HolderIdentity:  multinode-894400-m03
	I0318 13:10:57.766064    2404 command_runner.go:130] >   AcquireTime:     <unset>
	I0318 13:10:57.766064    2404 command_runner.go:130] >   RenewTime:       Mon, 18 Mar 2024 13:06:27 +0000
	I0318 13:10:57.766064    2404 command_runner.go:130] > Conditions:
	I0318 13:10:57.766064    2404 command_runner.go:130] >   Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	I0318 13:10:57.766064    2404 command_runner.go:130] >   ----             ------    -----------------                 ------------------                ------              -------
	I0318 13:10:57.766064    2404 command_runner.go:130] >   MemoryPressure   Unknown   Mon, 18 Mar 2024 13:05:34 +0000   Mon, 18 Mar 2024 13:07:11 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0318 13:10:57.766064    2404 command_runner.go:130] >   DiskPressure     Unknown   Mon, 18 Mar 2024 13:05:34 +0000   Mon, 18 Mar 2024 13:07:11 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0318 13:10:57.766064    2404 command_runner.go:130] >   PIDPressure      Unknown   Mon, 18 Mar 2024 13:05:34 +0000   Mon, 18 Mar 2024 13:07:11 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0318 13:10:57.766199    2404 command_runner.go:130] >   Ready            Unknown   Mon, 18 Mar 2024 13:05:34 +0000   Mon, 18 Mar 2024 13:07:11 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0318 13:10:57.766199    2404 command_runner.go:130] > Addresses:
	I0318 13:10:57.766199    2404 command_runner.go:130] >   InternalIP:  172.30.137.140
	I0318 13:10:57.766199    2404 command_runner.go:130] >   Hostname:    multinode-894400-m03
	I0318 13:10:57.766199    2404 command_runner.go:130] > Capacity:
	I0318 13:10:57.766199    2404 command_runner.go:130] >   cpu:                2
	I0318 13:10:57.766199    2404 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0318 13:10:57.766199    2404 command_runner.go:130] >   hugepages-2Mi:      0
	I0318 13:10:57.766285    2404 command_runner.go:130] >   memory:             2164268Ki
	I0318 13:10:57.766285    2404 command_runner.go:130] >   pods:               110
	I0318 13:10:57.766285    2404 command_runner.go:130] > Allocatable:
	I0318 13:10:57.766285    2404 command_runner.go:130] >   cpu:                2
	I0318 13:10:57.766319    2404 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0318 13:10:57.766319    2404 command_runner.go:130] >   hugepages-2Mi:      0
	I0318 13:10:57.766375    2404 command_runner.go:130] >   memory:             2164268Ki
	I0318 13:10:57.766375    2404 command_runner.go:130] >   pods:               110
	I0318 13:10:57.766422    2404 command_runner.go:130] > System Info:
	I0318 13:10:57.766440    2404 command_runner.go:130] >   Machine ID:                 f96e7421441b46c0a5836e2d53b26708
	I0318 13:10:57.766440    2404 command_runner.go:130] >   System UUID:                7dae14c5-92ae-d842-8ce6-c446c0352eb2
	I0318 13:10:57.766440    2404 command_runner.go:130] >   Boot ID:                    7ef4b157-1893-48d2-9b87-d5f210c11477
	I0318 13:10:57.766440    2404 command_runner.go:130] >   Kernel Version:             5.10.207
	I0318 13:10:57.766440    2404 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0318 13:10:57.766541    2404 command_runner.go:130] >   Operating System:           linux
	I0318 13:10:57.766541    2404 command_runner.go:130] >   Architecture:               amd64
	I0318 13:10:57.766541    2404 command_runner.go:130] >   Container Runtime Version:  docker://25.0.4
	I0318 13:10:57.766541    2404 command_runner.go:130] >   Kubelet Version:            v1.28.4
	I0318 13:10:57.766541    2404 command_runner.go:130] >   Kube-Proxy Version:         v1.28.4
	I0318 13:10:57.766541    2404 command_runner.go:130] > PodCIDR:                      10.244.3.0/24
	I0318 13:10:57.766541    2404 command_runner.go:130] > PodCIDRs:                     10.244.3.0/24
	I0318 13:10:57.766600    2404 command_runner.go:130] > Non-terminated Pods:          (2 in total)
	I0318 13:10:57.766600    2404 command_runner.go:130] >   Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0318 13:10:57.766600    2404 command_runner.go:130] >   ---------                   ----                ------------  ----------  ---------------  -------------  ---
	I0318 13:10:57.766600    2404 command_runner.go:130] >   kube-system                 kindnet-zv9tv       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      15m
	I0318 13:10:57.766684    2404 command_runner.go:130] >   kube-system                 kube-proxy-745w9    0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	I0318 13:10:57.766708    2404 command_runner.go:130] > Allocated resources:
	I0318 13:10:57.766708    2404 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0318 13:10:57.766708    2404 command_runner.go:130] >   Resource           Requests   Limits
	I0318 13:10:57.766708    2404 command_runner.go:130] >   --------           --------   ------
	I0318 13:10:57.766708    2404 command_runner.go:130] >   cpu                100m (5%)  100m (5%)
	I0318 13:10:57.766708    2404 command_runner.go:130] >   memory             50Mi (2%)  50Mi (2%)
	I0318 13:10:57.766708    2404 command_runner.go:130] >   ephemeral-storage  0 (0%)     0 (0%)
	I0318 13:10:57.766838    2404 command_runner.go:130] >   hugepages-2Mi      0 (0%)     0 (0%)
	I0318 13:10:57.766838    2404 command_runner.go:130] > Events:
	I0318 13:10:57.766879    2404 command_runner.go:130] >   Type    Reason                   Age                    From             Message
	I0318 13:10:57.766879    2404 command_runner.go:130] >   ----    ------                   ----                   ----             -------
	I0318 13:10:57.766909    2404 command_runner.go:130] >   Normal  Starting                 15m                    kube-proxy       
	I0318 13:10:57.766909    2404 command_runner.go:130] >   Normal  Starting                 5m29s                  kube-proxy       
	I0318 13:10:57.766909    2404 command_runner.go:130] >   Normal  NodeHasSufficientMemory  15m (x5 over 15m)      kubelet          Node multinode-894400-m03 status is now: NodeHasSufficientMemory
	I0318 13:10:57.766909    2404 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    15m (x5 over 15m)      kubelet          Node multinode-894400-m03 status is now: NodeHasNoDiskPressure
	I0318 13:10:57.766972    2404 command_runner.go:130] >   Normal  NodeHasSufficientPID     15m (x5 over 15m)      kubelet          Node multinode-894400-m03 status is now: NodeHasSufficientPID
	I0318 13:10:57.767002    2404 command_runner.go:130] >   Normal  NodeReady                15m                    kubelet          Node multinode-894400-m03 status is now: NodeReady
	I0318 13:10:57.767030    2404 command_runner.go:130] >   Normal  Starting                 5m32s                  kubelet          Starting kubelet.
	I0318 13:10:57.767030    2404 command_runner.go:130] >   Normal  NodeHasSufficientMemory  5m32s (x2 over 5m32s)  kubelet          Node multinode-894400-m03 status is now: NodeHasSufficientMemory
	I0318 13:10:57.767030    2404 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    5m32s (x2 over 5m32s)  kubelet          Node multinode-894400-m03 status is now: NodeHasNoDiskPressure
	I0318 13:10:57.767030    2404 command_runner.go:130] >   Normal  NodeHasSufficientPID     5m32s (x2 over 5m32s)  kubelet          Node multinode-894400-m03 status is now: NodeHasSufficientPID
	I0318 13:10:57.767030    2404 command_runner.go:130] >   Normal  NodeAllocatableEnforced  5m32s                  kubelet          Updated Node Allocatable limit across pods
	I0318 13:10:57.767030    2404 command_runner.go:130] >   Normal  RegisteredNode           5m31s                  node-controller  Node multinode-894400-m03 event: Registered Node multinode-894400-m03 in Controller
	I0318 13:10:57.767030    2404 command_runner.go:130] >   Normal  NodeReady                5m23s                  kubelet          Node multinode-894400-m03 status is now: NodeReady
	I0318 13:10:57.767030    2404 command_runner.go:130] >   Normal  NodeNotReady             3m46s                  node-controller  Node multinode-894400-m03 status is now: NodeNotReady
	I0318 13:10:57.767030    2404 command_runner.go:130] >   Normal  RegisteredNode           56s                    node-controller  Node multinode-894400-m03 event: Registered Node multinode-894400-m03 in Controller
	I0318 13:10:57.776599    2404 logs.go:123] Gathering logs for kube-controller-manager [7aa5cf4ec378] ...
	I0318 13:10:57.776599    2404 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7aa5cf4ec378"
	I0318 13:10:57.811794    2404 command_runner.go:130] ! I0318 12:47:22.447675       1 serving.go:348] Generated self-signed cert in-memory
	I0318 13:10:57.811794    2404 command_runner.go:130] ! I0318 12:47:22.964394       1 controllermanager.go:189] "Starting" version="v1.28.4"
	I0318 13:10:57.811794    2404 command_runner.go:130] ! I0318 12:47:22.964509       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 13:10:57.811794    2404 command_runner.go:130] ! I0318 12:47:22.966671       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0318 13:10:57.811794    2404 command_runner.go:130] ! I0318 12:47:22.967091       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0318 13:10:57.811794    2404 command_runner.go:130] ! I0318 12:47:22.968348       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0318 13:10:57.811794    2404 command_runner.go:130] ! I0318 12:47:22.969286       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0318 13:10:57.811794    2404 command_runner.go:130] ! I0318 12:47:27.391471       1 shared_informer.go:311] Waiting for caches to sync for tokens
	I0318 13:10:57.811794    2404 command_runner.go:130] ! I0318 12:47:27.423488       1 controllermanager.go:642] "Started controller" controller="garbage-collector-controller"
	I0318 13:10:57.811794    2404 command_runner.go:130] ! I0318 12:47:27.424256       1 garbagecollector.go:155] "Starting controller" controller="garbagecollector"
	I0318 13:10:57.811794    2404 command_runner.go:130] ! I0318 12:47:27.424289       1 shared_informer.go:311] Waiting for caches to sync for garbage collector
	I0318 13:10:57.811794    2404 command_runner.go:130] ! I0318 12:47:27.424374       1 graph_builder.go:294] "Running" component="GraphBuilder"
	I0318 13:10:57.811794    2404 command_runner.go:130] ! I0318 12:47:27.451725       1 controllermanager.go:642] "Started controller" controller="ephemeral-volume-controller"
	I0318 13:10:57.811794    2404 command_runner.go:130] ! I0318 12:47:27.451967       1 controller.go:169] "Starting ephemeral volume controller"
	I0318 13:10:57.811794    2404 command_runner.go:130] ! I0318 12:47:27.452425       1 shared_informer.go:311] Waiting for caches to sync for ephemeral
	I0318 13:10:57.811794    2404 command_runner.go:130] ! I0318 12:47:27.464873       1 controllermanager.go:642] "Started controller" controller="serviceaccount-controller"
	I0318 13:10:57.811794    2404 command_runner.go:130] ! I0318 12:47:27.465150       1 serviceaccounts_controller.go:111] "Starting service account controller"
	I0318 13:10:57.811794    2404 command_runner.go:130] ! I0318 12:47:27.465172       1 shared_informer.go:311] Waiting for caches to sync for service account
	I0318 13:10:57.811794    2404 command_runner.go:130] ! I0318 12:47:27.491949       1 shared_informer.go:318] Caches are synced for tokens
	I0318 13:10:57.811794    2404 command_runner.go:130] ! I0318 12:47:37.491900       1 range_allocator.go:111] "No Secondary Service CIDR provided. Skipping filtering out secondary service addresses"
	I0318 13:10:57.811794    2404 command_runner.go:130] ! I0318 12:47:37.492009       1 controllermanager.go:642] "Started controller" controller="node-ipam-controller"
	I0318 13:10:57.811794    2404 command_runner.go:130] ! I0318 12:47:37.492602       1 node_ipam_controller.go:162] "Starting ipam controller"
	I0318 13:10:57.811794    2404 command_runner.go:130] ! I0318 12:47:37.492659       1 shared_informer.go:311] Waiting for caches to sync for node
	I0318 13:10:57.811794    2404 command_runner.go:130] ! E0318 12:47:37.494780       1 core.go:213] "Failed to start cloud node lifecycle controller" err="no cloud provider provided"
	I0318 13:10:57.811794    2404 command_runner.go:130] ! I0318 12:47:37.494859       1 controllermanager.go:620] "Warning: skipping controller" controller="cloud-node-lifecycle-controller"
	I0318 13:10:57.811794    2404 command_runner.go:130] ! I0318 12:47:37.511992       1 controllermanager.go:642] "Started controller" controller="persistentvolume-binder-controller"
	I0318 13:10:57.811794    2404 command_runner.go:130] ! I0318 12:47:37.512162       1 pv_controller_base.go:319] "Starting persistent volume controller"
	I0318 13:10:57.811794    2404 command_runner.go:130] ! I0318 12:47:37.512576       1 shared_informer.go:311] Waiting for caches to sync for persistent volume
	I0318 13:10:57.811794    2404 command_runner.go:130] ! I0318 12:47:37.525022       1 controllermanager.go:642] "Started controller" controller="persistentvolume-protection-controller"
	I0318 13:10:57.811794    2404 command_runner.go:130] ! I0318 12:47:37.525273       1 pv_protection_controller.go:78] "Starting PV protection controller"
	I0318 13:10:57.811794    2404 command_runner.go:130] ! I0318 12:47:37.525287       1 shared_informer.go:311] Waiting for caches to sync for PV protection
	I0318 13:10:57.811794    2404 command_runner.go:130] ! I0318 12:47:37.540701       1 controllermanager.go:642] "Started controller" controller="endpointslice-controller"
	I0318 13:10:57.811794    2404 command_runner.go:130] ! I0318 12:47:37.540905       1 endpointslice_controller.go:264] "Starting endpoint slice controller"
	I0318 13:10:57.813103    2404 command_runner.go:130] ! I0318 12:47:37.540914       1 shared_informer.go:311] Waiting for caches to sync for endpoint_slice
	I0318 13:10:57.813436    2404 command_runner.go:130] ! I0318 12:47:37.562000       1 controllermanager.go:642] "Started controller" controller="replicaset-controller"
	I0318 13:10:57.813506    2404 command_runner.go:130] ! I0318 12:47:37.562256       1 replica_set.go:214] "Starting controller" name="replicaset"
	I0318 13:10:57.813506    2404 command_runner.go:130] ! I0318 12:47:37.562286       1 shared_informer.go:311] Waiting for caches to sync for ReplicaSet
	I0318 13:10:57.813506    2404 command_runner.go:130] ! I0318 12:47:37.574397       1 controllermanager.go:642] "Started controller" controller="persistentvolume-expander-controller"
	I0318 13:10:57.813506    2404 command_runner.go:130] ! I0318 12:47:37.574869       1 expand_controller.go:328] "Starting expand controller"
	I0318 13:10:57.815129    2404 command_runner.go:130] ! I0318 12:47:37.574937       1 shared_informer.go:311] Waiting for caches to sync for expand
	I0318 13:10:57.815129    2404 command_runner.go:130] ! I0318 12:47:37.587914       1 controllermanager.go:642] "Started controller" controller="clusterrole-aggregation-controller"
	I0318 13:10:57.815129    2404 command_runner.go:130] ! I0318 12:47:37.588166       1 clusterroleaggregation_controller.go:189] "Starting ClusterRoleAggregator controller"
	I0318 13:10:57.815129    2404 command_runner.go:130] ! I0318 12:47:37.588199       1 shared_informer.go:311] Waiting for caches to sync for ClusterRoleAggregator
	I0318 13:10:57.815129    2404 command_runner.go:130] ! I0318 12:47:37.609721       1 controllermanager.go:642] "Started controller" controller="daemonset-controller"
	I0318 13:10:57.815129    2404 command_runner.go:130] ! I0318 12:47:37.615354       1 daemon_controller.go:291] "Starting daemon sets controller"
	I0318 13:10:57.815129    2404 command_runner.go:130] ! I0318 12:47:37.615371       1 shared_informer.go:311] Waiting for caches to sync for daemon sets
	I0318 13:10:57.815129    2404 command_runner.go:130] ! I0318 12:47:37.624660       1 controllermanager.go:642] "Started controller" controller="ttl-controller"
	I0318 13:10:57.815129    2404 command_runner.go:130] ! I0318 12:47:37.624898       1 ttl_controller.go:124] "Starting TTL controller"
	I0318 13:10:57.815129    2404 command_runner.go:130] ! I0318 12:47:37.625063       1 shared_informer.go:311] Waiting for caches to sync for TTL
	I0318 13:10:57.815129    2404 command_runner.go:130] ! I0318 12:47:37.637461       1 controllermanager.go:642] "Started controller" controller="ttl-after-finished-controller"
	I0318 13:10:57.815129    2404 command_runner.go:130] ! I0318 12:47:37.637588       1 ttlafterfinished_controller.go:109] "Starting TTL after finished controller"
	I0318 13:10:57.815129    2404 command_runner.go:130] ! I0318 12:47:37.637699       1 shared_informer.go:311] Waiting for caches to sync for TTL after finished
	I0318 13:10:57.815129    2404 command_runner.go:130] ! I0318 12:47:37.649314       1 controllermanager.go:642] "Started controller" controller="deployment-controller"
	I0318 13:10:57.815129    2404 command_runner.go:130] ! I0318 12:47:37.650380       1 deployment_controller.go:168] "Starting controller" controller="deployment"
	I0318 13:10:57.815129    2404 command_runner.go:130] ! I0318 12:47:37.650462       1 shared_informer.go:311] Waiting for caches to sync for deployment
	I0318 13:10:57.815129    2404 command_runner.go:130] ! I0318 12:47:37.830447       1 controllermanager.go:642] "Started controller" controller="disruption-controller"
	I0318 13:10:57.815129    2404 command_runner.go:130] ! I0318 12:47:37.830565       1 disruption.go:433] "Sending events to api server."
	I0318 13:10:57.815129    2404 command_runner.go:130] ! I0318 12:47:37.830686       1 disruption.go:444] "Starting disruption controller"
	I0318 13:10:57.815129    2404 command_runner.go:130] ! I0318 12:47:37.830725       1 shared_informer.go:311] Waiting for caches to sync for disruption
	I0318 13:10:57.815129    2404 command_runner.go:130] ! I0318 12:47:37.985254       1 controllermanager.go:642] "Started controller" controller="endpoints-controller"
	I0318 13:10:57.815129    2404 command_runner.go:130] ! I0318 12:47:37.985453       1 endpoints_controller.go:174] "Starting endpoint controller"
	I0318 13:10:57.815129    2404 command_runner.go:130] ! I0318 12:47:37.985784       1 shared_informer.go:311] Waiting for caches to sync for endpoint
	I0318 13:10:57.815129    2404 command_runner.go:130] ! I0318 12:47:38.288543       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="endpoints"
	I0318 13:10:57.815129    2404 command_runner.go:130] ! I0318 12:47:38.289132       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="poddisruptionbudgets.policy"
	I0318 13:10:57.815129    2404 command_runner.go:130] ! I0318 12:47:38.289248       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="roles.rbac.authorization.k8s.io"
	I0318 13:10:57.815129    2404 command_runner.go:130] ! I0318 12:47:38.289520       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="statefulsets.apps"
	I0318 13:10:57.815129    2404 command_runner.go:130] ! I0318 12:47:38.289722       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="leases.coordination.k8s.io"
	I0318 13:10:57.815129    2404 command_runner.go:130] ! I0318 12:47:38.289927       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="limitranges"
	I0318 13:10:57.815129    2404 command_runner.go:130] ! I0318 12:47:38.290240       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="serviceaccounts"
	I0318 13:10:57.815129    2404 command_runner.go:130] ! I0318 12:47:38.290340       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="daemonsets.apps"
	I0318 13:10:57.815827    2404 command_runner.go:130] ! I0318 12:47:38.290418       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="jobs.batch"
	I0318 13:10:57.815827    2404 command_runner.go:130] ! I0318 12:47:38.290502       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="ingresses.networking.k8s.io"
	I0318 13:10:57.815827    2404 command_runner.go:130] ! I0318 12:47:38.290550       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="rolebindings.rbac.authorization.k8s.io"
	I0318 13:10:57.815827    2404 command_runner.go:130] ! I0318 12:47:38.290591       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="endpointslices.discovery.k8s.io"
	I0318 13:10:57.815827    2404 command_runner.go:130] ! I0318 12:47:38.290851       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="podtemplates"
	I0318 13:10:57.815827    2404 command_runner.go:130] ! I0318 12:47:38.291026       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="controllerrevisions.apps"
	I0318 13:10:57.816005    2404 command_runner.go:130] ! I0318 12:47:38.291117       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="replicasets.apps"
	I0318 13:10:57.816005    2404 command_runner.go:130] ! I0318 12:47:38.291149       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="cronjobs.batch"
	I0318 13:10:57.816005    2404 command_runner.go:130] ! I0318 12:47:38.291277       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="horizontalpodautoscalers.autoscaling"
	I0318 13:10:57.816005    2404 command_runner.go:130] ! I0318 12:47:38.291315       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="deployments.apps"
	I0318 13:10:57.816005    2404 command_runner.go:130] ! I0318 12:47:38.291392       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="csistoragecapacities.storage.k8s.io"
	I0318 13:10:57.816109    2404 command_runner.go:130] ! I0318 12:47:38.291423       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="networkpolicies.networking.k8s.io"
	I0318 13:10:57.816109    2404 command_runner.go:130] ! I0318 12:47:38.291465       1 controllermanager.go:642] "Started controller" controller="resourcequota-controller"
	I0318 13:10:57.816109    2404 command_runner.go:130] ! I0318 12:47:38.291591       1 resource_quota_controller.go:294] "Starting resource quota controller"
	I0318 13:10:57.816181    2404 command_runner.go:130] ! I0318 12:47:38.291607       1 shared_informer.go:311] Waiting for caches to sync for resource quota
	I0318 13:10:57.816181    2404 command_runner.go:130] ! I0318 12:47:38.291720       1 resource_quota_monitor.go:305] "QuotaMonitor running"
	I0318 13:10:57.816181    2404 command_runner.go:130] ! I0318 12:47:38.436018       1 controllermanager.go:642] "Started controller" controller="job-controller"
	I0318 13:10:57.816181    2404 command_runner.go:130] ! I0318 12:47:38.436093       1 job_controller.go:226] "Starting job controller"
	I0318 13:10:57.816181    2404 command_runner.go:130] ! I0318 12:47:38.436112       1 shared_informer.go:311] Waiting for caches to sync for job
	I0318 13:10:57.816271    2404 command_runner.go:130] ! I0318 12:47:38.731490       1 controllermanager.go:642] "Started controller" controller="horizontal-pod-autoscaler-controller"
	I0318 13:10:57.816271    2404 command_runner.go:130] ! I0318 12:47:38.731606       1 horizontal.go:200] "Starting HPA controller"
	I0318 13:10:57.816271    2404 command_runner.go:130] ! I0318 12:47:38.731671       1 shared_informer.go:311] Waiting for caches to sync for HPA
	I0318 13:10:57.816371    2404 command_runner.go:130] ! I0318 12:47:38.886224       1 controllermanager.go:642] "Started controller" controller="certificatesigningrequest-approving-controller"
	I0318 13:10:57.816438    2404 command_runner.go:130] ! I0318 12:47:38.886401       1 certificate_controller.go:115] "Starting certificate controller" name="csrapproving"
	I0318 13:10:57.816438    2404 command_runner.go:130] ! I0318 12:47:38.886705       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrapproving
	I0318 13:10:57.816473    2404 command_runner.go:130] ! I0318 12:47:38.930325       1 controllermanager.go:642] "Started controller" controller="certificatesigningrequest-cleaner-controller"
	I0318 13:10:57.816473    2404 command_runner.go:130] ! I0318 12:47:38.930354       1 core.go:228] "Warning: configure-cloud-routes is set, but no cloud provider specified. Will not configure cloud provider routes."
	I0318 13:10:57.816617    2404 command_runner.go:130] ! I0318 12:47:38.930362       1 controllermanager.go:620] "Warning: skipping controller" controller="node-route-controller"
	I0318 13:10:57.816654    2404 command_runner.go:130] ! I0318 12:47:38.930398       1 cleaner.go:83] "Starting CSR cleaner controller"
	I0318 13:10:57.816654    2404 command_runner.go:130] ! I0318 12:47:39.085782       1 controllermanager.go:642] "Started controller" controller="persistentvolume-attach-detach-controller"
	I0318 13:10:57.816654    2404 command_runner.go:130] ! I0318 12:47:39.085905       1 attach_detach_controller.go:337] "Starting attach detach controller"
	I0318 13:10:57.816654    2404 command_runner.go:130] ! I0318 12:47:39.085920       1 shared_informer.go:311] Waiting for caches to sync for attach detach
	I0318 13:10:57.816720    2404 command_runner.go:130] ! I0318 12:47:39.236755       1 controllermanager.go:642] "Started controller" controller="endpointslice-mirroring-controller"
	I0318 13:10:57.816720    2404 command_runner.go:130] ! I0318 12:47:39.237434       1 endpointslicemirroring_controller.go:223] "Starting EndpointSliceMirroring controller"
	I0318 13:10:57.816720    2404 command_runner.go:130] ! I0318 12:47:39.237522       1 shared_informer.go:311] Waiting for caches to sync for endpoint_slice_mirroring
	I0318 13:10:57.816720    2404 command_runner.go:130] ! I0318 12:47:39.390953       1 controllermanager.go:642] "Started controller" controller="replicationcontroller-controller"
	I0318 13:10:57.816720    2404 command_runner.go:130] ! I0318 12:47:39.391480       1 replica_set.go:214] "Starting controller" name="replicationcontroller"
	I0318 13:10:57.816720    2404 command_runner.go:130] ! I0318 12:47:39.391646       1 shared_informer.go:311] Waiting for caches to sync for ReplicationController
	I0318 13:10:57.816720    2404 command_runner.go:130] ! I0318 12:47:39.535570       1 controllermanager.go:642] "Started controller" controller="cronjob-controller"
	I0318 13:10:57.816815    2404 command_runner.go:130] ! I0318 12:47:39.536071       1 cronjob_controllerv2.go:139] "Starting cronjob controller v2"
	I0318 13:10:57.816815    2404 command_runner.go:130] ! I0318 12:47:39.536172       1 shared_informer.go:311] Waiting for caches to sync for cronjob
	I0318 13:10:57.816815    2404 command_runner.go:130] ! I0318 12:47:39.582776       1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-kubelet-serving"
	I0318 13:10:57.816815    2404 command_runner.go:130] ! I0318 12:47:39.582876       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I0318 13:10:57.816928    2404 command_runner.go:130] ! I0318 12:47:39.582912       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0318 13:10:57.816967    2404 command_runner.go:130] ! I0318 12:47:39.584602       1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-kubelet-client"
	I0318 13:10:57.816967    2404 command_runner.go:130] ! I0318 12:47:39.584677       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I0318 13:10:57.816967    2404 command_runner.go:130] ! I0318 12:47:39.584724       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0318 13:10:57.817036    2404 command_runner.go:130] ! I0318 12:47:39.585974       1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-kube-apiserver-client"
	I0318 13:10:57.817036    2404 command_runner.go:130] ! I0318 12:47:39.585990       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I0318 13:10:57.817036    2404 command_runner.go:130] ! I0318 12:47:39.586012       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0318 13:10:57.817036    2404 command_runner.go:130] ! I0318 12:47:39.586910       1 controllermanager.go:642] "Started controller" controller="certificatesigningrequest-signing-controller"
	I0318 13:10:57.817115    2404 command_runner.go:130] ! I0318 12:47:39.586968       1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-legacy-unknown"
	I0318 13:10:57.817115    2404 command_runner.go:130] ! I0318 12:47:39.586975       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0318 13:10:57.817115    2404 command_runner.go:130] ! I0318 12:47:39.587044       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0318 13:10:57.817115    2404 command_runner.go:130] ! I0318 12:47:39.735265       1 controllermanager.go:642] "Started controller" controller="token-cleaner-controller"
	I0318 13:10:57.817115    2404 command_runner.go:130] ! I0318 12:47:39.735467       1 tokencleaner.go:112] "Starting token cleaner controller"
	I0318 13:10:57.817193    2404 command_runner.go:130] ! I0318 12:47:39.735494       1 shared_informer.go:311] Waiting for caches to sync for token_cleaner
	I0318 13:10:57.817193    2404 command_runner.go:130] ! I0318 12:47:39.735502       1 shared_informer.go:318] Caches are synced for token_cleaner
	I0318 13:10:57.817193    2404 command_runner.go:130] ! I0318 12:47:39.783594       1 node_lifecycle_controller.go:431] "Controller will reconcile labels"
	I0318 13:10:57.817193    2404 command_runner.go:130] ! I0318 12:47:39.783722       1 controllermanager.go:642] "Started controller" controller="node-lifecycle-controller"
	I0318 13:10:57.817193    2404 command_runner.go:130] ! I0318 12:47:39.783841       1 node_lifecycle_controller.go:465] "Sending events to api server"
	I0318 13:10:57.817301    2404 command_runner.go:130] ! I0318 12:47:39.783860       1 node_lifecycle_controller.go:476] "Starting node controller"
	I0318 13:10:57.817301    2404 command_runner.go:130] ! I0318 12:47:39.784031       1 shared_informer.go:311] Waiting for caches to sync for taint
	I0318 13:10:57.817301    2404 command_runner.go:130] ! E0318 12:47:39.937206       1 core.go:92] "Failed to start service controller" err="WARNING: no cloud provider provided, services of type LoadBalancer will fail"
	I0318 13:10:57.817301    2404 command_runner.go:130] ! I0318 12:47:39.937229       1 controllermanager.go:620] "Warning: skipping controller" controller="service-lb-controller"
	I0318 13:10:57.817404    2404 command_runner.go:130] ! I0318 12:47:40.089508       1 controllermanager.go:642] "Started controller" controller="persistentvolumeclaim-protection-controller"
	I0318 13:10:57.817404    2404 command_runner.go:130] ! I0318 12:47:40.089701       1 pvc_protection_controller.go:102] "Starting PVC protection controller"
	I0318 13:10:57.817404    2404 command_runner.go:130] ! I0318 12:47:40.089793       1 shared_informer.go:311] Waiting for caches to sync for PVC protection
	I0318 13:10:57.817472    2404 command_runner.go:130] ! I0318 12:47:40.235860       1 controllermanager.go:642] "Started controller" controller="root-ca-certificate-publisher-controller"
	I0318 13:10:57.817472    2404 command_runner.go:130] ! I0318 12:47:40.235977       1 publisher.go:102] "Starting root CA cert publisher controller"
	I0318 13:10:57.817472    2404 command_runner.go:130] ! I0318 12:47:40.236063       1 shared_informer.go:311] Waiting for caches to sync for crt configmap
	I0318 13:10:57.817472    2404 command_runner.go:130] ! I0318 12:47:40.386545       1 controllermanager.go:642] "Started controller" controller="pod-garbage-collector-controller"
	I0318 13:10:57.817472    2404 command_runner.go:130] ! I0318 12:47:40.386692       1 gc_controller.go:101] "Starting GC controller"
	I0318 13:10:57.817472    2404 command_runner.go:130] ! I0318 12:47:40.386704       1 shared_informer.go:311] Waiting for caches to sync for GC
	I0318 13:10:57.817637    2404 command_runner.go:130] ! I0318 12:47:40.644175       1 controllermanager.go:642] "Started controller" controller="namespace-controller"
	I0318 13:10:57.817637    2404 command_runner.go:130] ! I0318 12:47:40.644284       1 namespace_controller.go:197] "Starting namespace controller"
	I0318 13:10:57.817637    2404 command_runner.go:130] ! I0318 12:47:40.644293       1 shared_informer.go:311] Waiting for caches to sync for namespace
	I0318 13:10:57.817637    2404 command_runner.go:130] ! I0318 12:47:40.784991       1 controllermanager.go:642] "Started controller" controller="statefulset-controller"
	I0318 13:10:57.817637    2404 command_runner.go:130] ! I0318 12:47:40.785464       1 stateful_set.go:161] "Starting stateful set controller"
	I0318 13:10:57.817703    2404 command_runner.go:130] ! I0318 12:47:40.785492       1 shared_informer.go:311] Waiting for caches to sync for stateful set
	I0318 13:10:57.817703    2404 command_runner.go:130] ! I0318 12:47:40.936785       1 controllermanager.go:642] "Started controller" controller="bootstrap-signer-controller"
	I0318 13:10:57.817703    2404 command_runner.go:130] ! I0318 12:47:40.939800       1 shared_informer.go:311] Waiting for caches to sync for bootstrap_signer
	I0318 13:10:57.817770    2404 command_runner.go:130] ! I0318 12:47:40.947184       1 shared_informer.go:311] Waiting for caches to sync for resource quota
	I0318 13:10:57.817770    2404 command_runner.go:130] ! I0318 12:47:40.968017       1 shared_informer.go:318] Caches are synced for service account
	I0318 13:10:57.817770    2404 command_runner.go:130] ! I0318 12:47:40.971773       1 shared_informer.go:311] Waiting for caches to sync for garbage collector
	I0318 13:10:57.817770    2404 command_runner.go:130] ! I0318 12:47:40.976691       1 shared_informer.go:318] Caches are synced for expand
	I0318 13:10:57.817770    2404 command_runner.go:130] ! I0318 12:47:40.986014       1 shared_informer.go:318] Caches are synced for endpoint
	I0318 13:10:57.817770    2404 command_runner.go:130] ! I0318 12:47:40.995675       1 shared_informer.go:318] Caches are synced for PVC protection
	I0318 13:10:57.817836    2404 command_runner.go:130] ! I0318 12:47:41.009015       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-894400\" does not exist"
	I0318 13:10:57.817836    2404 command_runner.go:130] ! I0318 12:47:41.012612       1 shared_informer.go:318] Caches are synced for persistent volume
	I0318 13:10:57.817836    2404 command_runner.go:130] ! I0318 12:47:41.016383       1 shared_informer.go:318] Caches are synced for daemon sets
	I0318 13:10:57.817836    2404 command_runner.go:130] ! I0318 12:47:41.025198       1 shared_informer.go:318] Caches are synced for TTL
	I0318 13:10:57.817921    2404 command_runner.go:130] ! I0318 12:47:41.025462       1 shared_informer.go:318] Caches are synced for PV protection
	I0318 13:10:57.817921    2404 command_runner.go:130] ! I0318 12:47:41.032086       1 shared_informer.go:318] Caches are synced for HPA
	I0318 13:10:57.817921    2404 command_runner.go:130] ! I0318 12:47:41.036463       1 shared_informer.go:318] Caches are synced for job
	I0318 13:10:57.817921    2404 command_runner.go:130] ! I0318 12:47:41.036622       1 shared_informer.go:318] Caches are synced for cronjob
	I0318 13:10:57.817921    2404 command_runner.go:130] ! I0318 12:47:41.036726       1 shared_informer.go:318] Caches are synced for crt configmap
	I0318 13:10:57.817992    2404 command_runner.go:130] ! I0318 12:47:41.037735       1 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring
	I0318 13:10:57.817992    2404 command_runner.go:130] ! I0318 12:47:41.037818       1 shared_informer.go:318] Caches are synced for TTL after finished
	I0318 13:10:57.817992    2404 command_runner.go:130] ! I0318 12:47:41.040360       1 shared_informer.go:318] Caches are synced for bootstrap_signer
	I0318 13:10:57.818064    2404 command_runner.go:130] ! I0318 12:47:41.041850       1 shared_informer.go:318] Caches are synced for endpoint_slice
	I0318 13:10:57.818064    2404 command_runner.go:130] ! I0318 12:47:41.045379       1 shared_informer.go:318] Caches are synced for namespace
	I0318 13:10:57.818064    2404 command_runner.go:130] ! I0318 12:47:41.051530       1 shared_informer.go:318] Caches are synced for deployment
	I0318 13:10:57.818064    2404 command_runner.go:130] ! I0318 12:47:41.053151       1 shared_informer.go:318] Caches are synced for ephemeral
	I0318 13:10:57.818064    2404 command_runner.go:130] ! I0318 12:47:41.063027       1 shared_informer.go:318] Caches are synced for ReplicaSet
	I0318 13:10:57.818064    2404 command_runner.go:130] ! I0318 12:47:41.084212       1 shared_informer.go:318] Caches are synced for taint
	I0318 13:10:57.818135    2404 command_runner.go:130] ! I0318 12:47:41.084612       1 node_lifecycle_controller.go:1225] "Initializing eviction metric for zone" zone=""
	I0318 13:10:57.818135    2404 command_runner.go:130] ! I0318 12:47:41.087983       1 taint_manager.go:205] "Starting NoExecuteTaintManager"
	I0318 13:10:57.818135    2404 command_runner.go:130] ! I0318 12:47:41.088464       1 taint_manager.go:210] "Sending events to api server"
	I0318 13:10:57.818345    2404 command_runner.go:130] ! I0318 12:47:41.089485       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-894400"
	I0318 13:10:57.818429    2404 command_runner.go:130] ! I0318 12:47:41.089526       1 node_lifecycle_controller.go:1029] "Controller detected that all Nodes are not-Ready. Entering master disruption mode"
	I0318 13:10:57.818429    2404 command_runner.go:130] ! I0318 12:47:41.089552       1 shared_informer.go:318] Caches are synced for attach detach
	I0318 13:10:57.818429    2404 command_runner.go:130] ! I0318 12:47:41.089942       1 shared_informer.go:318] Caches are synced for GC
	I0318 13:10:57.818513    2404 command_runner.go:130] ! I0318 12:47:41.090031       1 event.go:307] "Event occurred" object="multinode-894400" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-894400 event: Registered Node multinode-894400 in Controller"
	I0318 13:10:57.818601    2404 command_runner.go:130] ! I0318 12:47:41.090167       1 shared_informer.go:318] Caches are synced for ClusterRoleAggregator
	I0318 13:10:57.818669    2404 command_runner.go:130] ! I0318 12:47:41.090848       1 shared_informer.go:318] Caches are synced for stateful set
	I0318 13:10:57.818669    2404 command_runner.go:130] ! I0318 12:47:41.092093       1 shared_informer.go:318] Caches are synced for ReplicationController
	I0318 13:10:57.818669    2404 command_runner.go:130] ! I0318 12:47:41.092684       1 shared_informer.go:318] Caches are synced for node
	I0318 13:10:57.818669    2404 command_runner.go:130] ! I0318 12:47:41.093255       1 range_allocator.go:174] "Sending events to api server"
	I0318 13:10:57.818669    2404 command_runner.go:130] ! I0318 12:47:41.093537       1 range_allocator.go:178] "Starting range CIDR allocator"
	I0318 13:10:57.818738    2404 command_runner.go:130] ! I0318 12:47:41.093851       1 shared_informer.go:311] Waiting for caches to sync for cidrallocator
	I0318 13:10:57.818738    2404 command_runner.go:130] ! I0318 12:47:41.093958       1 shared_informer.go:318] Caches are synced for cidrallocator
	I0318 13:10:57.818738    2404 command_runner.go:130] ! I0318 12:47:41.119414       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-894400" podCIDRs=["10.244.0.0/24"]
	I0318 13:10:57.818738    2404 command_runner.go:130] ! I0318 12:47:41.148134       1 shared_informer.go:318] Caches are synced for resource quota
	I0318 13:10:57.818738    2404 command_runner.go:130] ! I0318 12:47:41.183853       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-serving
	I0318 13:10:57.818809    2404 command_runner.go:130] ! I0318 12:47:41.184949       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-client
	I0318 13:10:57.818809    2404 command_runner.go:130] ! I0318 12:47:41.186043       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0318 13:10:57.818809    2404 command_runner.go:130] ! I0318 12:47:41.187192       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-legacy-unknown
	I0318 13:10:57.818809    2404 command_runner.go:130] ! I0318 12:47:41.187229       1 shared_informer.go:318] Caches are synced for certificate-csrapproving
	I0318 13:10:57.818933    2404 command_runner.go:130] ! I0318 12:47:41.192066       1 shared_informer.go:318] Caches are synced for resource quota
	I0318 13:10:57.818933    2404 command_runner.go:130] ! I0318 12:47:41.233781       1 shared_informer.go:318] Caches are synced for disruption
	I0318 13:10:57.818933    2404 command_runner.go:130] ! I0318 12:47:41.572914       1 shared_informer.go:318] Caches are synced for garbage collector
	I0318 13:10:57.818933    2404 command_runner.go:130] ! I0318 12:47:41.612936       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-mc5tv"
	I0318 13:10:57.819040    2404 command_runner.go:130] ! I0318 12:47:41.615780       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-hhsxh"
	I0318 13:10:57.819040    2404 command_runner.go:130] ! I0318 12:47:41.625871       1 shared_informer.go:318] Caches are synced for garbage collector
	I0318 13:10:57.819040    2404 command_runner.go:130] ! I0318 12:47:41.626335       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0318 13:10:57.819116    2404 command_runner.go:130] ! I0318 12:47:41.893141       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5dd5756b68 to 2"
	I0318 13:10:57.819116    2404 command_runner.go:130] ! I0318 12:47:42.112244       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-vl6jr"
	I0318 13:10:57.819206    2404 command_runner.go:130] ! I0318 12:47:42.148022       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-456tm"
	I0318 13:10:57.819206    2404 command_runner.go:130] ! I0318 12:47:42.181940       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="289.6659ms"
	I0318 13:10:57.819206    2404 command_runner.go:130] ! I0318 12:47:42.245823       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="63.840303ms"
	I0318 13:10:57.819206    2404 command_runner.go:130] ! I0318 12:47:42.246151       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="107.996µs"
	I0318 13:10:57.819297    2404 command_runner.go:130] ! I0318 12:47:42.470958       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I0318 13:10:57.819297    2404 command_runner.go:130] ! I0318 12:47:42.530265       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-vl6jr"
	I0318 13:10:57.819297    2404 command_runner.go:130] ! I0318 12:47:42.551794       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="82.491503ms"
	I0318 13:10:57.819297    2404 command_runner.go:130] ! I0318 12:47:42.587026       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="35.184179ms"
	I0318 13:10:57.819297    2404 command_runner.go:130] ! I0318 12:47:42.587126       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="64.497µs"
	I0318 13:10:57.819427    2404 command_runner.go:130] ! I0318 12:47:52.958102       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="163.297µs"
	I0318 13:10:57.819427    2404 command_runner.go:130] ! I0318 12:47:52.991751       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="32.399µs"
	I0318 13:10:57.819427    2404 command_runner.go:130] ! I0318 12:47:54.194916       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="59.289µs"
	I0318 13:10:57.819427    2404 command_runner.go:130] ! I0318 12:47:55.238088       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="27.595936ms"
	I0318 13:10:57.819502    2404 command_runner.go:130] ! I0318 12:47:55.238222       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="45.592µs"
	I0318 13:10:57.819502    2404 command_runner.go:130] ! I0318 12:47:56.090728       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I0318 13:10:57.819502    2404 command_runner.go:130] ! I0318 12:50:34.419485       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-894400-m02\" does not exist"
	I0318 13:10:57.819608    2404 command_runner.go:130] ! I0318 12:50:34.437576       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-894400-m02" podCIDRs=["10.244.1.0/24"]
	I0318 13:10:57.819608    2404 command_runner.go:130] ! I0318 12:50:34.454919       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-8bdmn"
	I0318 13:10:57.819608    2404 command_runner.go:130] ! I0318 12:50:34.479103       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-k5lpg"
	I0318 13:10:57.819678    2404 command_runner.go:130] ! I0318 12:50:36.121925       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-894400-m02"
	I0318 13:10:57.819678    2404 command_runner.go:130] ! I0318 12:50:36.122368       1 event.go:307] "Event occurred" object="multinode-894400-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-894400-m02 event: Registered Node multinode-894400-m02 in Controller"
	I0318 13:10:57.819678    2404 command_runner.go:130] ! I0318 12:50:52.539955       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-894400-m02"
	I0318 13:10:57.819750    2404 command_runner.go:130] ! I0318 12:51:17.964827       1 event.go:307] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-5b5d89c9d6 to 2"
	I0318 13:10:57.819750    2404 command_runner.go:130] ! I0318 12:51:17.986964       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5b5d89c9d6-8btgf"
	I0318 13:10:57.819836    2404 command_runner.go:130] ! I0318 12:51:18.004592       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5b5d89c9d6-c2997"
	I0318 13:10:57.819836    2404 command_runner.go:130] ! I0318 12:51:18.026894       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="62.79508ms"
	I0318 13:10:57.819836    2404 command_runner.go:130] ! I0318 12:51:18.045074       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="17.513513ms"
	I0318 13:10:57.819836    2404 command_runner.go:130] ! I0318 12:51:18.046404       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="36.101µs"
	I0318 13:10:57.819836    2404 command_runner.go:130] ! I0318 12:51:18.054157       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="337.914µs"
	I0318 13:10:57.819945    2404 command_runner.go:130] ! I0318 12:51:18.060516       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="26.701µs"
	I0318 13:10:57.819945    2404 command_runner.go:130] ! I0318 12:51:20.804047       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="10.125602ms"
	I0318 13:10:57.819945    2404 command_runner.go:130] ! I0318 12:51:20.804333       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="159.502µs"
	I0318 13:10:57.819945    2404 command_runner.go:130] ! I0318 12:51:21.064706       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="11.788417ms"
	I0318 13:10:57.819945    2404 command_runner.go:130] ! I0318 12:51:21.065229       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="82.401µs"
	I0318 13:10:57.819945    2404 command_runner.go:130] ! I0318 12:55:05.793350       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-894400-m03\" does not exist"
	I0318 13:10:57.819945    2404 command_runner.go:130] ! I0318 12:55:05.797095       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-894400-m02"
	I0318 13:10:57.819945    2404 command_runner.go:130] ! I0318 12:55:05.823205       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-zv9tv"
	I0318 13:10:57.819945    2404 command_runner.go:130] ! I0318 12:55:05.835101       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-894400-m03" podCIDRs=["10.244.2.0/24"]
	I0318 13:10:57.819945    2404 command_runner.go:130] ! I0318 12:55:05.835149       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-745w9"
	I0318 13:10:57.819945    2404 command_runner.go:130] ! I0318 12:55:06.188986       1 event.go:307] "Event occurred" object="multinode-894400-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-894400-m03 event: Registered Node multinode-894400-m03 in Controller"
	I0318 13:10:57.819945    2404 command_runner.go:130] ! I0318 12:55:06.188988       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-894400-m03"
	I0318 13:10:57.819945    2404 command_runner.go:130] ! I0318 12:55:23.671742       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-894400-m02"
	I0318 13:10:57.819945    2404 command_runner.go:130] ! I0318 13:02:46.325539       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-894400-m02"
	I0318 13:10:57.819945    2404 command_runner.go:130] ! I0318 13:02:46.325935       1 event.go:307] "Event occurred" object="multinode-894400-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-894400-m03 status is now: NodeNotReady"
	I0318 13:10:57.819945    2404 command_runner.go:130] ! I0318 13:02:46.344510       1 event.go:307] "Event occurred" object="kube-system/kindnet-zv9tv" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0318 13:10:57.819945    2404 command_runner.go:130] ! I0318 13:02:46.368811       1 event.go:307] "Event occurred" object="kube-system/kube-proxy-745w9" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0318 13:10:57.819945    2404 command_runner.go:130] ! I0318 13:05:19.649225       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-894400-m02"
	I0318 13:10:57.819945    2404 command_runner.go:130] ! I0318 13:05:21.403124       1 event.go:307] "Event occurred" object="multinode-894400-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RemovingNode" message="Node multinode-894400-m03 event: Removing Node multinode-894400-m03 from Controller"
	I0318 13:10:57.819945    2404 command_runner.go:130] ! I0318 13:05:25.832056       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-894400-m03\" does not exist"
	I0318 13:10:57.819945    2404 command_runner.go:130] ! I0318 13:05:25.832348       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-894400-m02"
	I0318 13:10:57.819945    2404 command_runner.go:130] ! I0318 13:05:25.841443       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-894400-m03" podCIDRs=["10.244.3.0/24"]
	I0318 13:10:57.819945    2404 command_runner.go:130] ! I0318 13:05:26.404299       1 event.go:307] "Event occurred" object="multinode-894400-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-894400-m03 event: Registered Node multinode-894400-m03 in Controller"
	I0318 13:10:57.819945    2404 command_runner.go:130] ! I0318 13:05:34.080951       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-894400-m02"
	I0318 13:10:57.820535    2404 command_runner.go:130] ! I0318 13:07:11.961036       1 event.go:307] "Event occurred" object="multinode-894400-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-894400-m03 status is now: NodeNotReady"
	I0318 13:10:57.820535    2404 command_runner.go:130] ! I0318 13:07:11.961077       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-894400-m02"
	I0318 13:10:57.820535    2404 command_runner.go:130] ! I0318 13:07:12.051526       1 event.go:307] "Event occurred" object="kube-system/kindnet-zv9tv" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0318 13:10:57.820535    2404 command_runner.go:130] ! I0318 13:07:12.098168       1 event.go:307] "Event occurred" object="kube-system/kube-proxy-745w9" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0318 13:10:57.839272    2404 logs.go:123] Gathering logs for dmesg ...
	I0318 13:10:57.839272    2404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:10:57.860537    2404 command_runner.go:130] > [Mar18 13:08] You have booted with nomodeset. This means your GPU drivers are DISABLED
	I0318 13:10:57.860580    2404 command_runner.go:130] > [  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	I0318 13:10:57.860580    2404 command_runner.go:130] > [  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	I0318 13:10:57.860580    2404 command_runner.go:130] > [  +0.127438] RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
	I0318 13:10:57.860580    2404 command_runner.go:130] > [  +0.022457] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	I0318 13:10:57.860580    2404 command_runner.go:130] > [  +0.000000] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	I0318 13:10:57.860724    2404 command_runner.go:130] > [  +0.000000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	I0318 13:10:57.860724    2404 command_runner.go:130] > [  +0.054196] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	I0318 13:10:57.860724    2404 command_runner.go:130] > [  +0.018424] * Found PM-Timer Bug on the chipset. Due to workarounds for a bug,
	I0318 13:10:57.860764    2404 command_runner.go:130] >               * this clock source is slow. Consider trying other clock sources
	I0318 13:10:57.860764    2404 command_runner.go:130] > [  +4.800453] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	I0318 13:10:57.860812    2404 command_runner.go:130] > [  +1.267636] psmouse serio1: trackpoint: failed to get extended button data, assuming 3 buttons
	I0318 13:10:57.860812    2404 command_runner.go:130] > [  +1.056053] systemd-fstab-generator[113]: Ignoring "noauto" option for root device
	I0318 13:10:57.860812    2404 command_runner.go:130] > [  +6.778211] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	I0318 13:10:57.860848    2404 command_runner.go:130] > [  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	I0318 13:10:57.860848    2404 command_runner.go:130] > [  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	I0318 13:10:57.860848    2404 command_runner.go:130] > [Mar18 13:09] systemd-fstab-generator[643]: Ignoring "noauto" option for root device
	I0318 13:10:57.860895    2404 command_runner.go:130] > [  +0.160643] systemd-fstab-generator[654]: Ignoring "noauto" option for root device
	I0318 13:10:57.860895    2404 command_runner.go:130] > [ +25.236158] systemd-fstab-generator[979]: Ignoring "noauto" option for root device
	I0318 13:10:57.860930    2404 command_runner.go:130] > [  +0.093711] kauditd_printk_skb: 73 callbacks suppressed
	I0318 13:10:57.860998    2404 command_runner.go:130] > [  +0.488652] systemd-fstab-generator[1018]: Ignoring "noauto" option for root device
	I0318 13:10:57.860998    2404 command_runner.go:130] > [  +0.198307] systemd-fstab-generator[1030]: Ignoring "noauto" option for root device
	I0318 13:10:57.860998    2404 command_runner.go:130] > [  +0.213157] systemd-fstab-generator[1044]: Ignoring "noauto" option for root device
	I0318 13:10:57.860998    2404 command_runner.go:130] > [  +2.866452] systemd-fstab-generator[1231]: Ignoring "noauto" option for root device
	I0318 13:10:57.861051    2404 command_runner.go:130] > [  +0.191537] systemd-fstab-generator[1243]: Ignoring "noauto" option for root device
	I0318 13:10:57.861051    2404 command_runner.go:130] > [  +0.163904] systemd-fstab-generator[1255]: Ignoring "noauto" option for root device
	I0318 13:10:57.861092    2404 command_runner.go:130] > [  +0.280650] systemd-fstab-generator[1270]: Ignoring "noauto" option for root device
	I0318 13:10:57.861092    2404 command_runner.go:130] > [  +0.822319] systemd-fstab-generator[1393]: Ignoring "noauto" option for root device
	I0318 13:10:57.861092    2404 command_runner.go:130] > [  +0.094744] kauditd_printk_skb: 205 callbacks suppressed
	I0318 13:10:57.861127    2404 command_runner.go:130] > [  +3.177820] systemd-fstab-generator[1525]: Ignoring "noauto" option for root device
	I0318 13:10:57.861127    2404 command_runner.go:130] > [  +1.898187] kauditd_printk_skb: 64 callbacks suppressed
	I0318 13:10:57.861127    2404 command_runner.go:130] > [  +5.227041] kauditd_printk_skb: 10 callbacks suppressed
	I0318 13:10:57.861175    2404 command_runner.go:130] > [  +4.065141] systemd-fstab-generator[3089]: Ignoring "noauto" option for root device
	I0318 13:10:57.861175    2404 command_runner.go:130] > [Mar18 13:10] kauditd_printk_skb: 70 callbacks suppressed
	I0318 13:11:00.382100    2404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:11:00.407835    2404 command_runner.go:130] > 1904
	I0318 13:11:00.407943    2404 api_server.go:72] duration metric: took 1m6.7492667s to wait for apiserver process to appear ...
	I0318 13:11:00.407943    2404 api_server.go:88] waiting for apiserver healthz status ...
	I0318 13:11:00.420009    2404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 13:11:00.446185    2404 command_runner.go:130] > fc4430c7fa20
	I0318 13:11:00.446185    2404 logs.go:276] 1 containers: [fc4430c7fa20]
	I0318 13:11:00.455344    2404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 13:11:00.478051    2404 command_runner.go:130] > 5f0887d1e691
	I0318 13:11:00.479399    2404 logs.go:276] 1 containers: [5f0887d1e691]
	I0318 13:11:00.487853    2404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 13:11:00.511810    2404 command_runner.go:130] > 3c3bc988c74c
	I0318 13:11:00.511840    2404 command_runner.go:130] > 693a64f7472f
	I0318 13:11:00.511840    2404 logs.go:276] 2 containers: [3c3bc988c74c 693a64f7472f]
	I0318 13:11:00.520471    2404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 13:11:00.543394    2404 command_runner.go:130] > 66ee8be9fada
	I0318 13:11:00.543394    2404 command_runner.go:130] > e4d42739ce0e
	I0318 13:11:00.543394    2404 logs.go:276] 2 containers: [66ee8be9fada e4d42739ce0e]
	I0318 13:11:00.552571    2404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 13:11:00.573605    2404 command_runner.go:130] > 163ccabc3882
	I0318 13:11:00.573605    2404 command_runner.go:130] > 9335855aab63
	I0318 13:11:00.574793    2404 logs.go:276] 2 containers: [163ccabc3882 9335855aab63]
	I0318 13:11:00.585270    2404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 13:11:00.611801    2404 command_runner.go:130] > 4ad6784a187d
	I0318 13:11:00.611801    2404 command_runner.go:130] > 7aa5cf4ec378
	I0318 13:11:00.611801    2404 logs.go:276] 2 containers: [4ad6784a187d 7aa5cf4ec378]
	I0318 13:11:00.620758    2404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 13:11:00.644616    2404 command_runner.go:130] > c8e5ec25e910
	I0318 13:11:00.644798    2404 command_runner.go:130] > c4d7018ad23a
	I0318 13:11:00.644798    2404 logs.go:276] 2 containers: [c8e5ec25e910 c4d7018ad23a]
	I0318 13:11:00.644798    2404 logs.go:123] Gathering logs for kindnet [c4d7018ad23a] ...
	I0318 13:11:00.644798    2404 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4d7018ad23a"
	I0318 13:11:00.677915    2404 command_runner.go:130] ! I0318 12:56:20.031595       1 main.go:227] handling current node
	I0318 13:11:00.680123    2404 command_runner.go:130] ! I0318 12:56:20.031610       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:00.680123    2404 command_runner.go:130] ! I0318 12:56:20.031618       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:00.680123    2404 command_runner.go:130] ! I0318 12:56:20.031800       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:11:00.680123    2404 command_runner.go:130] ! I0318 12:56:20.031837       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:11:00.680123    2404 command_runner.go:130] ! I0318 12:56:30.038705       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:00.680123    2404 command_runner.go:130] ! I0318 12:56:30.038812       1 main.go:227] handling current node
	I0318 13:11:00.680123    2404 command_runner.go:130] ! I0318 12:56:30.038826       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:00.680123    2404 command_runner.go:130] ! I0318 12:56:30.038833       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:00.680123    2404 command_runner.go:130] ! I0318 12:56:30.039027       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:11:00.680123    2404 command_runner.go:130] ! I0318 12:56:30.039347       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:11:00.680123    2404 command_runner.go:130] ! I0318 12:56:40.051950       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:00.680123    2404 command_runner.go:130] ! I0318 12:56:40.052053       1 main.go:227] handling current node
	I0318 13:11:00.680123    2404 command_runner.go:130] ! I0318 12:56:40.052086       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:00.680123    2404 command_runner.go:130] ! I0318 12:56:40.052204       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:00.680123    2404 command_runner.go:130] ! I0318 12:56:40.052568       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:11:00.680123    2404 command_runner.go:130] ! I0318 12:56:40.052681       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:11:00.680123    2404 command_runner.go:130] ! I0318 12:56:50.074059       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:00.680123    2404 command_runner.go:130] ! I0318 12:56:50.074164       1 main.go:227] handling current node
	I0318 13:11:00.680123    2404 command_runner.go:130] ! I0318 12:56:50.074183       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:00.680123    2404 command_runner.go:130] ! I0318 12:56:50.074192       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:00.680123    2404 command_runner.go:130] ! I0318 12:56:50.075009       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:11:00.680123    2404 command_runner.go:130] ! I0318 12:56:50.075306       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:11:00.680123    2404 command_runner.go:130] ! I0318 12:57:00.089286       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:00.680123    2404 command_runner.go:130] ! I0318 12:57:00.089382       1 main.go:227] handling current node
	I0318 13:11:00.680123    2404 command_runner.go:130] ! I0318 12:57:00.089397       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:00.680123    2404 command_runner.go:130] ! I0318 12:57:00.089405       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:00.680123    2404 command_runner.go:130] ! I0318 12:57:00.089918       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:11:00.680123    2404 command_runner.go:130] ! I0318 12:57:00.089934       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:11:00.680123    2404 command_runner.go:130] ! I0318 12:57:10.103457       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:00.680123    2404 command_runner.go:130] ! I0318 12:57:10.103575       1 main.go:227] handling current node
	I0318 13:11:00.680123    2404 command_runner.go:130] ! I0318 12:57:10.103607       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:00.680647    2404 command_runner.go:130] ! I0318 12:57:10.103704       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:00.680647    2404 command_runner.go:130] ! I0318 12:57:10.104106       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:11:00.680647    2404 command_runner.go:130] ! I0318 12:57:10.104144       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:11:00.680647    2404 command_runner.go:130] ! I0318 12:57:20.111225       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:00.680692    2404 command_runner.go:130] ! I0318 12:57:20.111346       1 main.go:227] handling current node
	I0318 13:11:00.680692    2404 command_runner.go:130] ! I0318 12:57:20.111360       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:00.680692    2404 command_runner.go:130] ! I0318 12:57:20.111367       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:00.680729    2404 command_runner.go:130] ! I0318 12:57:20.111695       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:11:00.680729    2404 command_runner.go:130] ! I0318 12:57:20.111775       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:11:00.680729    2404 command_runner.go:130] ! I0318 12:57:30.124283       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:00.680729    2404 command_runner.go:130] ! I0318 12:57:30.124477       1 main.go:227] handling current node
	I0318 13:11:00.680729    2404 command_runner.go:130] ! I0318 12:57:30.124495       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:00.680729    2404 command_runner.go:130] ! I0318 12:57:30.124505       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:00.680729    2404 command_runner.go:130] ! I0318 12:57:30.125279       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:11:00.680729    2404 command_runner.go:130] ! I0318 12:57:30.125393       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:11:00.680729    2404 command_runner.go:130] ! I0318 12:57:40.137523       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:00.680729    2404 command_runner.go:130] ! I0318 12:57:40.137766       1 main.go:227] handling current node
	I0318 13:11:00.680729    2404 command_runner.go:130] ! I0318 12:57:40.137807       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:00.680729    2404 command_runner.go:130] ! I0318 12:57:40.137833       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:00.680729    2404 command_runner.go:130] ! I0318 12:57:40.137998       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:11:00.680729    2404 command_runner.go:130] ! I0318 12:57:40.138087       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:11:00.680729    2404 command_runner.go:130] ! I0318 12:57:50.149548       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:00.680729    2404 command_runner.go:130] ! I0318 12:57:50.149697       1 main.go:227] handling current node
	I0318 13:11:00.680729    2404 command_runner.go:130] ! I0318 12:57:50.149712       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:00.680729    2404 command_runner.go:130] ! I0318 12:57:50.149720       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:00.680729    2404 command_runner.go:130] ! I0318 12:57:50.150251       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:11:00.680729    2404 command_runner.go:130] ! I0318 12:57:50.150344       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:11:00.680729    2404 command_runner.go:130] ! I0318 12:58:00.159094       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:00.680729    2404 command_runner.go:130] ! I0318 12:58:00.159284       1 main.go:227] handling current node
	I0318 13:11:00.680729    2404 command_runner.go:130] ! I0318 12:58:00.159340       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:00.680729    2404 command_runner.go:130] ! I0318 12:58:00.159700       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:00.680729    2404 command_runner.go:130] ! I0318 12:58:00.160303       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:11:00.680729    2404 command_runner.go:130] ! I0318 12:58:00.160346       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:11:00.680729    2404 command_runner.go:130] ! I0318 12:58:10.177603       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:00.680729    2404 command_runner.go:130] ! I0318 12:58:10.177780       1 main.go:227] handling current node
	I0318 13:11:00.680729    2404 command_runner.go:130] ! I0318 12:58:10.178122       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:00.680729    2404 command_runner.go:130] ! I0318 12:58:10.178166       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:00.680729    2404 command_runner.go:130] ! I0318 12:58:10.178455       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:11:00.680729    2404 command_runner.go:130] ! I0318 12:58:10.178497       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:11:00.680729    2404 command_runner.go:130] ! I0318 12:58:20.196110       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:00.680729    2404 command_runner.go:130] ! I0318 12:58:20.196144       1 main.go:227] handling current node
	I0318 13:11:00.680729    2404 command_runner.go:130] ! I0318 12:58:20.196236       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:00.680729    2404 command_runner.go:130] ! I0318 12:58:20.196542       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:00.680729    2404 command_runner.go:130] ! I0318 12:58:20.196774       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:11:00.680729    2404 command_runner.go:130] ! I0318 12:58:20.196867       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:11:00.681250    2404 command_runner.go:130] ! I0318 12:58:30.204485       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:00.681250    2404 command_runner.go:130] ! I0318 12:58:30.204515       1 main.go:227] handling current node
	I0318 13:11:00.681250    2404 command_runner.go:130] ! I0318 12:58:30.204528       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:00.681307    2404 command_runner.go:130] ! I0318 12:58:30.204556       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:00.681307    2404 command_runner.go:130] ! I0318 12:58:30.204856       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:11:00.681307    2404 command_runner.go:130] ! I0318 12:58:30.205022       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:11:00.681307    2404 command_runner.go:130] ! I0318 12:58:40.221076       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:00.681307    2404 command_runner.go:130] ! I0318 12:58:40.221184       1 main.go:227] handling current node
	I0318 13:11:00.681307    2404 command_runner.go:130] ! I0318 12:58:40.221201       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:00.681403    2404 command_runner.go:130] ! I0318 12:58:40.221210       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:00.681403    2404 command_runner.go:130] ! I0318 12:58:40.221741       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:11:00.681403    2404 command_runner.go:130] ! I0318 12:58:40.221769       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:11:00.681403    2404 command_runner.go:130] ! I0318 12:58:50.229210       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:00.681403    2404 command_runner.go:130] ! I0318 12:58:50.229302       1 main.go:227] handling current node
	I0318 13:11:00.681454    2404 command_runner.go:130] ! I0318 12:58:50.229317       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:00.681454    2404 command_runner.go:130] ! I0318 12:58:50.229324       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:00.681483    2404 command_runner.go:130] ! I0318 12:58:50.229703       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:11:00.681483    2404 command_runner.go:130] ! I0318 12:58:50.229807       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:11:00.681483    2404 command_runner.go:130] ! I0318 12:59:00.244905       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:00.681483    2404 command_runner.go:130] ! I0318 12:59:00.244992       1 main.go:227] handling current node
	I0318 13:11:00.681483    2404 command_runner.go:130] ! I0318 12:59:00.245007       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:00.681483    2404 command_runner.go:130] ! I0318 12:59:00.245033       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:00.681483    2404 command_runner.go:130] ! I0318 12:59:00.245480       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:11:00.681483    2404 command_runner.go:130] ! I0318 12:59:00.245600       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:11:00.681483    2404 command_runner.go:130] ! I0318 12:59:10.253460       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:00.681483    2404 command_runner.go:130] ! I0318 12:59:10.253563       1 main.go:227] handling current node
	I0318 13:11:00.681483    2404 command_runner.go:130] ! I0318 12:59:10.253579       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:00.681483    2404 command_runner.go:130] ! I0318 12:59:10.253605       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:00.681483    2404 command_runner.go:130] ! I0318 12:59:10.254199       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:11:00.681483    2404 command_runner.go:130] ! I0318 12:59:10.254310       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:11:00.681483    2404 command_runner.go:130] ! I0318 12:59:20.270774       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:00.681483    2404 command_runner.go:130] ! I0318 12:59:20.270870       1 main.go:227] handling current node
	I0318 13:11:00.681483    2404 command_runner.go:130] ! I0318 12:59:20.270886       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:00.681483    2404 command_runner.go:130] ! I0318 12:59:20.270894       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:00.681483    2404 command_runner.go:130] ! I0318 12:59:20.271275       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:11:00.681483    2404 command_runner.go:130] ! I0318 12:59:20.271367       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:11:00.681483    2404 command_runner.go:130] ! I0318 12:59:30.281784       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:00.681483    2404 command_runner.go:130] ! I0318 12:59:30.281809       1 main.go:227] handling current node
	I0318 13:11:00.681483    2404 command_runner.go:130] ! I0318 12:59:30.281819       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:00.681483    2404 command_runner.go:130] ! I0318 12:59:30.281824       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:00.681483    2404 command_runner.go:130] ! I0318 12:59:30.282361       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:11:00.681483    2404 command_runner.go:130] ! I0318 12:59:30.282392       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:11:00.681483    2404 command_runner.go:130] ! I0318 12:59:40.291176       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:00.681483    2404 command_runner.go:130] ! I0318 12:59:40.291304       1 main.go:227] handling current node
	I0318 13:11:00.681483    2404 command_runner.go:130] ! I0318 12:59:40.291321       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:00.681483    2404 command_runner.go:130] ! I0318 12:59:40.291328       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:00.681483    2404 command_runner.go:130] ! I0318 12:59:40.291827       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:11:00.681483    2404 command_runner.go:130] ! I0318 12:59:40.291857       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:11:00.681483    2404 command_runner.go:130] ! I0318 12:59:50.303374       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:00.681483    2404 command_runner.go:130] ! I0318 12:59:50.303454       1 main.go:227] handling current node
	I0318 13:11:00.681483    2404 command_runner.go:130] ! I0318 12:59:50.303468       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:00.681483    2404 command_runner.go:130] ! I0318 12:59:50.303476       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:00.681483    2404 command_runner.go:130] ! I0318 12:59:50.303974       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:11:00.681483    2404 command_runner.go:130] ! I0318 12:59:50.304002       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:11:00.681483    2404 command_runner.go:130] ! I0318 13:00:00.311317       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:00.681483    2404 command_runner.go:130] ! I0318 13:00:00.311423       1 main.go:227] handling current node
	I0318 13:11:00.681483    2404 command_runner.go:130] ! I0318 13:00:00.311441       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:00.681483    2404 command_runner.go:130] ! I0318 13:00:00.311449       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:00.681483    2404 command_runner.go:130] ! I0318 13:00:00.312039       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:11:00.682012    2404 command_runner.go:130] ! I0318 13:00:00.312135       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:11:00.682012    2404 command_runner.go:130] ! I0318 13:00:10.324823       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:00.682012    2404 command_runner.go:130] ! I0318 13:00:10.324902       1 main.go:227] handling current node
	I0318 13:11:00.682057    2404 command_runner.go:130] ! I0318 13:00:10.324915       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:00.682057    2404 command_runner.go:130] ! I0318 13:00:10.324926       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:00.682057    2404 command_runner.go:130] ! I0318 13:00:10.325084       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:11:00.682057    2404 command_runner.go:130] ! I0318 13:00:10.325108       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:11:00.682111    2404 command_runner.go:130] ! I0318 13:00:20.338195       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:00.682111    2404 command_runner.go:130] ! I0318 13:00:20.338297       1 main.go:227] handling current node
	I0318 13:11:00.682153    2404 command_runner.go:130] ! I0318 13:00:20.338312       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:00.682174    2404 command_runner.go:130] ! I0318 13:00:20.338320       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:00.682197    2404 command_runner.go:130] ! I0318 13:00:20.338525       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:11:00.682197    2404 command_runner.go:130] ! I0318 13:00:20.338601       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:11:00.682224    2404 command_runner.go:130] ! I0318 13:00:30.345095       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:00.682224    2404 command_runner.go:130] ! I0318 13:00:30.345184       1 main.go:227] handling current node
	I0318 13:11:00.682258    2404 command_runner.go:130] ! I0318 13:00:30.345198       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:00.682258    2404 command_runner.go:130] ! I0318 13:00:30.345205       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:00.682328    2404 command_runner.go:130] ! I0318 13:00:30.346074       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:11:00.682328    2404 command_runner.go:130] ! I0318 13:00:30.346194       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:11:00.682366    2404 command_runner.go:130] ! I0318 13:00:40.357007       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:00.682366    2404 command_runner.go:130] ! I0318 13:00:40.357386       1 main.go:227] handling current node
	I0318 13:11:00.682366    2404 command_runner.go:130] ! I0318 13:00:40.357485       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:00.682366    2404 command_runner.go:130] ! I0318 13:00:40.357513       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:00.682366    2404 command_runner.go:130] ! I0318 13:00:40.357737       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:11:00.682366    2404 command_runner.go:130] ! I0318 13:00:40.357766       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:11:00.682366    2404 command_runner.go:130] ! I0318 13:00:50.372182       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:00.682366    2404 command_runner.go:130] ! I0318 13:00:50.372221       1 main.go:227] handling current node
	I0318 13:11:00.682366    2404 command_runner.go:130] ! I0318 13:00:50.372235       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:00.682366    2404 command_runner.go:130] ! I0318 13:00:50.372242       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:00.682366    2404 command_runner.go:130] ! I0318 13:00:50.372608       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:11:00.682366    2404 command_runner.go:130] ! I0318 13:00:50.372772       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:11:00.682366    2404 command_runner.go:130] ! I0318 13:01:00.386990       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:00.682366    2404 command_runner.go:130] ! I0318 13:01:00.387036       1 main.go:227] handling current node
	I0318 13:11:00.682366    2404 command_runner.go:130] ! I0318 13:01:00.387050       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:00.682366    2404 command_runner.go:130] ! I0318 13:01:00.387058       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:00.682366    2404 command_runner.go:130] ! I0318 13:01:00.387182       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:11:00.682366    2404 command_runner.go:130] ! I0318 13:01:00.387191       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:11:00.682366    2404 command_runner.go:130] ! I0318 13:01:10.396889       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:00.682366    2404 command_runner.go:130] ! I0318 13:01:10.396930       1 main.go:227] handling current node
	I0318 13:11:00.682366    2404 command_runner.go:130] ! I0318 13:01:10.396942       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:00.682366    2404 command_runner.go:130] ! I0318 13:01:10.396948       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:00.682366    2404 command_runner.go:130] ! I0318 13:01:10.397250       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:11:00.682366    2404 command_runner.go:130] ! I0318 13:01:10.397343       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:11:00.682366    2404 command_runner.go:130] ! I0318 13:01:20.413272       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:00.682366    2404 command_runner.go:130] ! I0318 13:01:20.413371       1 main.go:227] handling current node
	I0318 13:11:00.682366    2404 command_runner.go:130] ! I0318 13:01:20.413386       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:00.682366    2404 command_runner.go:130] ! I0318 13:01:20.413395       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:00.682366    2404 command_runner.go:130] ! I0318 13:01:20.413968       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:11:00.682366    2404 command_runner.go:130] ! I0318 13:01:20.413999       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:11:00.682366    2404 command_runner.go:130] ! I0318 13:01:30.429160       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:00.682366    2404 command_runner.go:130] ! I0318 13:01:30.429478       1 main.go:227] handling current node
	I0318 13:11:00.682366    2404 command_runner.go:130] ! I0318 13:01:30.429549       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:00.682366    2404 command_runner.go:130] ! I0318 13:01:30.429678       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:00.682366    2404 command_runner.go:130] ! I0318 13:01:30.429960       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:11:00.682366    2404 command_runner.go:130] ! I0318 13:01:30.430034       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:11:00.682932    2404 command_runner.go:130] ! I0318 13:01:40.436733       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:00.682932    2404 command_runner.go:130] ! I0318 13:01:40.436839       1 main.go:227] handling current node
	I0318 13:11:00.682932    2404 command_runner.go:130] ! I0318 13:01:40.436901       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:00.682932    2404 command_runner.go:130] ! I0318 13:01:40.436930       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:00.682932    2404 command_runner.go:130] ! I0318 13:01:40.437399       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:11:00.683000    2404 command_runner.go:130] ! I0318 13:01:40.437431       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:11:00.683000    2404 command_runner.go:130] ! I0318 13:01:50.451622       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:00.683000    2404 command_runner.go:130] ! I0318 13:01:50.451802       1 main.go:227] handling current node
	I0318 13:11:00.683000    2404 command_runner.go:130] ! I0318 13:01:50.451849       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:00.683000    2404 command_runner.go:130] ! I0318 13:01:50.451860       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:00.683000    2404 command_runner.go:130] ! I0318 13:01:50.452021       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:11:00.683095    2404 command_runner.go:130] ! I0318 13:01:50.452171       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:11:00.683095    2404 command_runner.go:130] ! I0318 13:02:00.460452       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:00.683095    2404 command_runner.go:130] ! I0318 13:02:00.460548       1 main.go:227] handling current node
	I0318 13:11:00.683095    2404 command_runner.go:130] ! I0318 13:02:00.460563       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:00.683095    2404 command_runner.go:130] ! I0318 13:02:00.460571       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:00.683095    2404 command_runner.go:130] ! I0318 13:02:00.461181       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:11:00.683161    2404 command_runner.go:130] ! I0318 13:02:00.461333       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:11:00.683183    2404 command_runner.go:130] ! I0318 13:02:10.474274       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:00.683227    2404 command_runner.go:130] ! I0318 13:02:10.474396       1 main.go:227] handling current node
	I0318 13:11:00.683254    2404 command_runner.go:130] ! I0318 13:02:10.474427       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:00.683254    2404 command_runner.go:130] ! I0318 13:02:10.474436       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:00.683280    2404 command_runner.go:130] ! I0318 13:02:10.475019       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:11:00.683280    2404 command_runner.go:130] ! I0318 13:02:10.475159       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:11:00.683280    2404 command_runner.go:130] ! I0318 13:02:20.489442       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:00.683280    2404 command_runner.go:130] ! I0318 13:02:20.489616       1 main.go:227] handling current node
	I0318 13:11:00.683280    2404 command_runner.go:130] ! I0318 13:02:20.489699       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:00.683280    2404 command_runner.go:130] ! I0318 13:02:20.489752       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:00.683280    2404 command_runner.go:130] ! I0318 13:02:20.490046       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:11:00.683280    2404 command_runner.go:130] ! I0318 13:02:20.490082       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:11:00.683280    2404 command_runner.go:130] ! I0318 13:02:30.497474       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:00.683280    2404 command_runner.go:130] ! I0318 13:02:30.497574       1 main.go:227] handling current node
	I0318 13:11:00.683280    2404 command_runner.go:130] ! I0318 13:02:30.497589       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:00.683280    2404 command_runner.go:130] ! I0318 13:02:30.497597       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:00.683280    2404 command_runner.go:130] ! I0318 13:02:30.498279       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:11:00.683280    2404 command_runner.go:130] ! I0318 13:02:30.498361       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:11:00.683280    2404 command_runner.go:130] ! I0318 13:02:40.512026       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:00.683280    2404 command_runner.go:130] ! I0318 13:02:40.512345       1 main.go:227] handling current node
	I0318 13:11:00.683280    2404 command_runner.go:130] ! I0318 13:02:40.512385       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:00.683280    2404 command_runner.go:130] ! I0318 13:02:40.512477       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:00.683280    2404 command_runner.go:130] ! I0318 13:02:40.512786       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:11:00.683280    2404 command_runner.go:130] ! I0318 13:02:40.512873       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:11:00.683280    2404 command_runner.go:130] ! I0318 13:02:50.520110       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:00.683280    2404 command_runner.go:130] ! I0318 13:02:50.520239       1 main.go:227] handling current node
	I0318 13:11:00.683280    2404 command_runner.go:130] ! I0318 13:02:50.520254       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:00.683280    2404 command_runner.go:130] ! I0318 13:02:50.520263       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:00.683280    2404 command_runner.go:130] ! I0318 13:02:50.520784       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:11:00.683280    2404 command_runner.go:130] ! I0318 13:02:50.520861       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:11:00.683280    2404 command_runner.go:130] ! I0318 13:03:00.531866       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:00.683280    2404 command_runner.go:130] ! I0318 13:03:00.531958       1 main.go:227] handling current node
	I0318 13:11:00.683280    2404 command_runner.go:130] ! I0318 13:03:00.531972       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:00.683280    2404 command_runner.go:130] ! I0318 13:03:00.531979       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:00.683280    2404 command_runner.go:130] ! I0318 13:03:00.532211       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:11:00.683280    2404 command_runner.go:130] ! I0318 13:03:00.532293       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:11:00.683280    2404 command_runner.go:130] ! I0318 13:03:10.543869       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:00.683280    2404 command_runner.go:130] ! I0318 13:03:10.543913       1 main.go:227] handling current node
	I0318 13:11:00.683280    2404 command_runner.go:130] ! I0318 13:03:10.543926       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:00.683280    2404 command_runner.go:130] ! I0318 13:03:10.543933       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:00.683280    2404 command_runner.go:130] ! I0318 13:03:10.544294       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:11:00.683280    2404 command_runner.go:130] ! I0318 13:03:10.544430       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:11:00.683802    2404 command_runner.go:130] ! I0318 13:03:20.558742       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:00.683802    2404 command_runner.go:130] ! I0318 13:03:20.558782       1 main.go:227] handling current node
	I0318 13:11:00.683802    2404 command_runner.go:130] ! I0318 13:03:20.558795       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:00.683802    2404 command_runner.go:130] ! I0318 13:03:20.558802       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:00.683867    2404 command_runner.go:130] ! I0318 13:03:20.558992       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:11:00.683965    2404 command_runner.go:130] ! I0318 13:03:20.559009       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:11:00.683965    2404 command_runner.go:130] ! I0318 13:03:30.568771       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:00.683965    2404 command_runner.go:130] ! I0318 13:03:30.568872       1 main.go:227] handling current node
	I0318 13:11:00.683965    2404 command_runner.go:130] ! I0318 13:03:30.568905       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:00.683965    2404 command_runner.go:130] ! I0318 13:03:30.568996       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:00.684033    2404 command_runner.go:130] ! I0318 13:03:30.569367       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:11:00.684057    2404 command_runner.go:130] ! I0318 13:03:30.569450       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:11:00.684057    2404 command_runner.go:130] ! I0318 13:03:40.587554       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:00.684057    2404 command_runner.go:130] ! I0318 13:03:40.587674       1 main.go:227] handling current node
	I0318 13:11:00.684094    2404 command_runner.go:130] ! I0318 13:03:40.588337       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:00.684094    2404 command_runner.go:130] ! I0318 13:03:40.588356       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:00.684121    2404 command_runner.go:130] ! I0318 13:03:40.588758       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:11:00.684121    2404 command_runner.go:130] ! I0318 13:03:40.588836       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:11:00.684121    2404 command_runner.go:130] ! I0318 13:03:50.596331       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:00.684121    2404 command_runner.go:130] ! I0318 13:03:50.596438       1 main.go:227] handling current node
	I0318 13:11:00.684121    2404 command_runner.go:130] ! I0318 13:03:50.596453       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:00.684181    2404 command_runner.go:130] ! I0318 13:03:50.596462       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:00.684258    2404 command_runner.go:130] ! I0318 13:03:50.596942       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:11:00.684315    2404 command_runner.go:130] ! I0318 13:03:50.597079       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:11:00.684315    2404 command_runner.go:130] ! I0318 13:04:00.611242       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:00.684315    2404 command_runner.go:130] ! I0318 13:04:00.611383       1 main.go:227] handling current node
	I0318 13:11:00.684359    2404 command_runner.go:130] ! I0318 13:04:00.611397       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:00.684359    2404 command_runner.go:130] ! I0318 13:04:00.611405       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:00.684359    2404 command_runner.go:130] ! I0318 13:04:00.611541       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:11:00.684359    2404 command_runner.go:130] ! I0318 13:04:00.611572       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:11:00.684418    2404 command_runner.go:130] ! I0318 13:04:10.624814       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:00.684418    2404 command_runner.go:130] ! I0318 13:04:10.624904       1 main.go:227] handling current node
	I0318 13:11:00.684441    2404 command_runner.go:130] ! I0318 13:04:10.624920       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:00.684468    2404 command_runner.go:130] ! I0318 13:04:10.624927       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:00.684468    2404 command_runner.go:130] ! I0318 13:04:10.625504       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:11:00.684468    2404 command_runner.go:130] ! I0318 13:04:10.625547       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:11:00.684468    2404 command_runner.go:130] ! I0318 13:04:20.640319       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:00.684468    2404 command_runner.go:130] ! I0318 13:04:20.640364       1 main.go:227] handling current node
	I0318 13:11:00.684468    2404 command_runner.go:130] ! I0318 13:04:20.640379       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:00.684468    2404 command_runner.go:130] ! I0318 13:04:20.640386       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:00.684468    2404 command_runner.go:130] ! I0318 13:04:20.640865       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:11:00.684468    2404 command_runner.go:130] ! I0318 13:04:20.640901       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:11:00.684468    2404 command_runner.go:130] ! I0318 13:04:30.648021       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:00.684468    2404 command_runner.go:130] ! I0318 13:04:30.648134       1 main.go:227] handling current node
	I0318 13:11:00.684468    2404 command_runner.go:130] ! I0318 13:04:30.648148       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:00.684468    2404 command_runner.go:130] ! I0318 13:04:30.648156       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:00.684468    2404 command_runner.go:130] ! I0318 13:04:30.648313       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:11:00.684468    2404 command_runner.go:130] ! I0318 13:04:30.648344       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:11:00.684468    2404 command_runner.go:130] ! I0318 13:04:40.663577       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:00.684468    2404 command_runner.go:130] ! I0318 13:04:40.663749       1 main.go:227] handling current node
	I0318 13:11:00.684468    2404 command_runner.go:130] ! I0318 13:04:40.663765       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:00.684468    2404 command_runner.go:130] ! I0318 13:04:40.663774       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:00.684468    2404 command_runner.go:130] ! I0318 13:04:40.663896       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:11:00.684468    2404 command_runner.go:130] ! I0318 13:04:40.663929       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:11:00.684468    2404 command_runner.go:130] ! I0318 13:04:50.669717       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:00.684468    2404 command_runner.go:130] ! I0318 13:04:50.669791       1 main.go:227] handling current node
	I0318 13:11:00.684468    2404 command_runner.go:130] ! I0318 13:04:50.669805       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:00.684468    2404 command_runner.go:130] ! I0318 13:04:50.669812       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:00.684468    2404 command_runner.go:130] ! I0318 13:04:50.670128       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:11:00.684468    2404 command_runner.go:130] ! I0318 13:04:50.670230       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:11:00.684468    2404 command_runner.go:130] ! I0318 13:05:00.686596       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:00.684468    2404 command_runner.go:130] ! I0318 13:05:00.686809       1 main.go:227] handling current node
	I0318 13:11:00.684468    2404 command_runner.go:130] ! I0318 13:05:00.686942       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:00.684468    2404 command_runner.go:130] ! I0318 13:05:00.687116       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:00.684468    2404 command_runner.go:130] ! I0318 13:05:00.687370       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:11:00.684468    2404 command_runner.go:130] ! I0318 13:05:00.687441       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:11:00.684468    2404 command_runner.go:130] ! I0318 13:05:10.704297       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:00.684468    2404 command_runner.go:130] ! I0318 13:05:10.704404       1 main.go:227] handling current node
	I0318 13:11:00.684468    2404 command_runner.go:130] ! I0318 13:05:10.704426       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:00.684468    2404 command_runner.go:130] ! I0318 13:05:10.704555       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:00.684468    2404 command_runner.go:130] ! I0318 13:05:10.704810       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:11:00.684468    2404 command_runner.go:130] ! I0318 13:05:10.704878       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:11:00.684468    2404 command_runner.go:130] ! I0318 13:05:20.722958       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:00.684468    2404 command_runner.go:130] ! I0318 13:05:20.723127       1 main.go:227] handling current node
	I0318 13:11:00.684468    2404 command_runner.go:130] ! I0318 13:05:20.723145       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:00.685009    2404 command_runner.go:130] ! I0318 13:05:20.723159       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:00.685065    2404 command_runner.go:130] ! I0318 13:05:30.731764       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:00.685065    2404 command_runner.go:130] ! I0318 13:05:30.731841       1 main.go:227] handling current node
	I0318 13:11:00.685065    2404 command_runner.go:130] ! I0318 13:05:30.731854       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:00.685109    2404 command_runner.go:130] ! I0318 13:05:30.731861       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:00.685109    2404 command_runner.go:130] ! I0318 13:05:30.732029       1 main.go:223] Handling node with IPs: map[172.30.137.140:{}]
	I0318 13:11:00.685174    2404 command_runner.go:130] ! I0318 13:05:30.732163       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.3.0/24] 
	I0318 13:11:00.685174    2404 command_runner.go:130] ! I0318 13:05:30.732544       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 172.30.137.140 Flags: [] Table: 0} 
	I0318 13:11:00.685174    2404 command_runner.go:130] ! I0318 13:05:40.739849       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:00.685233    2404 command_runner.go:130] ! I0318 13:05:40.739939       1 main.go:227] handling current node
	I0318 13:11:00.685233    2404 command_runner.go:130] ! I0318 13:05:40.739953       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:00.685255    2404 command_runner.go:130] ! I0318 13:05:40.739960       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:00.685255    2404 command_runner.go:130] ! I0318 13:05:40.740081       1 main.go:223] Handling node with IPs: map[172.30.137.140:{}]
	I0318 13:11:00.685281    2404 command_runner.go:130] ! I0318 13:05:40.740151       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.3.0/24] 
	I0318 13:11:00.685281    2404 command_runner.go:130] ! I0318 13:05:50.748036       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:00.685281    2404 command_runner.go:130] ! I0318 13:05:50.748465       1 main.go:227] handling current node
	I0318 13:11:00.685281    2404 command_runner.go:130] ! I0318 13:05:50.748942       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:00.685281    2404 command_runner.go:130] ! I0318 13:05:50.749055       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:00.685281    2404 command_runner.go:130] ! I0318 13:05:50.749287       1 main.go:223] Handling node with IPs: map[172.30.137.140:{}]
	I0318 13:11:00.685281    2404 command_runner.go:130] ! I0318 13:05:50.749413       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.3.0/24] 
	I0318 13:11:00.685281    2404 command_runner.go:130] ! I0318 13:06:00.757350       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:00.685281    2404 command_runner.go:130] ! I0318 13:06:00.757434       1 main.go:227] handling current node
	I0318 13:11:00.685281    2404 command_runner.go:130] ! I0318 13:06:00.757452       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:00.685281    2404 command_runner.go:130] ! I0318 13:06:00.757460       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:00.685281    2404 command_runner.go:130] ! I0318 13:06:00.757853       1 main.go:223] Handling node with IPs: map[172.30.137.140:{}]
	I0318 13:11:00.685281    2404 command_runner.go:130] ! I0318 13:06:00.758194       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.3.0/24] 
	I0318 13:11:00.685281    2404 command_runner.go:130] ! I0318 13:06:10.766768       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:00.685281    2404 command_runner.go:130] ! I0318 13:06:10.766886       1 main.go:227] handling current node
	I0318 13:11:00.685281    2404 command_runner.go:130] ! I0318 13:06:10.766901       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:00.685281    2404 command_runner.go:130] ! I0318 13:06:10.766910       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:00.685281    2404 command_runner.go:130] ! I0318 13:06:10.767143       1 main.go:223] Handling node with IPs: map[172.30.137.140:{}]
	I0318 13:11:00.685281    2404 command_runner.go:130] ! I0318 13:06:10.767175       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.3.0/24] 
	I0318 13:11:00.685281    2404 command_runner.go:130] ! I0318 13:06:20.773530       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:00.685281    2404 command_runner.go:130] ! I0318 13:06:20.773656       1 main.go:227] handling current node
	I0318 13:11:00.685281    2404 command_runner.go:130] ! I0318 13:06:20.773729       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:00.685281    2404 command_runner.go:130] ! I0318 13:06:20.773741       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:00.685281    2404 command_runner.go:130] ! I0318 13:06:20.774155       1 main.go:223] Handling node with IPs: map[172.30.137.140:{}]
	I0318 13:11:00.685281    2404 command_runner.go:130] ! I0318 13:06:20.774478       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.3.0/24] 
	I0318 13:11:00.685281    2404 command_runner.go:130] ! I0318 13:06:30.792219       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:00.685281    2404 command_runner.go:130] ! I0318 13:06:30.792349       1 main.go:227] handling current node
	I0318 13:11:00.685281    2404 command_runner.go:130] ! I0318 13:06:30.792364       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:00.685281    2404 command_runner.go:130] ! I0318 13:06:30.792373       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:00.685281    2404 command_runner.go:130] ! I0318 13:06:30.792864       1 main.go:223] Handling node with IPs: map[172.30.137.140:{}]
	I0318 13:11:00.685281    2404 command_runner.go:130] ! I0318 13:06:30.792901       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.3.0/24] 
	I0318 13:11:00.685281    2404 command_runner.go:130] ! I0318 13:06:40.809219       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:00.685281    2404 command_runner.go:130] ! I0318 13:06:40.809451       1 main.go:227] handling current node
	I0318 13:11:00.685281    2404 command_runner.go:130] ! I0318 13:06:40.809484       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:00.685281    2404 command_runner.go:130] ! I0318 13:06:40.809508       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:00.685877    2404 command_runner.go:130] ! I0318 13:06:40.809841       1 main.go:223] Handling node with IPs: map[172.30.137.140:{}]
	I0318 13:11:00.685877    2404 command_runner.go:130] ! I0318 13:06:40.810075       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.3.0/24] 
	I0318 13:11:00.685877    2404 command_runner.go:130] ! I0318 13:06:50.822556       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:00.685977    2404 command_runner.go:130] ! I0318 13:06:50.822612       1 main.go:227] handling current node
	I0318 13:11:00.685977    2404 command_runner.go:130] ! I0318 13:06:50.822667       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:00.685977    2404 command_runner.go:130] ! I0318 13:06:50.822680       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:00.685977    2404 command_runner.go:130] ! I0318 13:06:50.822925       1 main.go:223] Handling node with IPs: map[172.30.137.140:{}]
	I0318 13:11:00.685977    2404 command_runner.go:130] ! I0318 13:06:50.823171       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.3.0/24] 
	I0318 13:11:00.685977    2404 command_runner.go:130] ! I0318 13:07:00.837923       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:00.685977    2404 command_runner.go:130] ! I0318 13:07:00.838008       1 main.go:227] handling current node
	I0318 13:11:00.685977    2404 command_runner.go:130] ! I0318 13:07:00.838022       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:00.685977    2404 command_runner.go:130] ! I0318 13:07:00.838030       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:00.685977    2404 command_runner.go:130] ! I0318 13:07:00.838429       1 main.go:223] Handling node with IPs: map[172.30.137.140:{}]
	I0318 13:11:00.685977    2404 command_runner.go:130] ! I0318 13:07:00.838666       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.3.0/24] 
	I0318 13:11:00.685977    2404 command_runner.go:130] ! I0318 13:07:10.854207       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:00.685977    2404 command_runner.go:130] ! I0318 13:07:10.854411       1 main.go:227] handling current node
	I0318 13:11:00.685977    2404 command_runner.go:130] ! I0318 13:07:10.854444       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:00.685977    2404 command_runner.go:130] ! I0318 13:07:10.854469       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:00.685977    2404 command_runner.go:130] ! I0318 13:07:10.854879       1 main.go:223] Handling node with IPs: map[172.30.137.140:{}]
	I0318 13:11:00.685977    2404 command_runner.go:130] ! I0318 13:07:10.855094       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.3.0/24] 
	I0318 13:11:00.685977    2404 command_runner.go:130] ! I0318 13:07:20.861534       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:00.685977    2404 command_runner.go:130] ! I0318 13:07:20.861671       1 main.go:227] handling current node
	I0318 13:11:00.685977    2404 command_runner.go:130] ! I0318 13:07:20.861685       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:00.685977    2404 command_runner.go:130] ! I0318 13:07:20.861692       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:00.685977    2404 command_runner.go:130] ! I0318 13:07:20.861818       1 main.go:223] Handling node with IPs: map[172.30.137.140:{}]
	I0318 13:11:00.685977    2404 command_runner.go:130] ! I0318 13:07:20.861845       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.3.0/24] 
	I0318 13:11:00.707088    2404 logs.go:123] Gathering logs for container status ...
	I0318 13:11:00.707088    2404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:11:00.800084    2404 command_runner.go:130] > CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	I0318 13:11:00.800257    2404 command_runner.go:130] > c5d2074be239f       8c811b4aec35f                                                                                         7 seconds ago        Running             busybox                   1                   e20878b8092c2       busybox-5b5d89c9d6-c2997
	I0318 13:11:00.800257    2404 command_runner.go:130] > 3c3bc988c74cd       ead0a4a53df89                                                                                         7 seconds ago        Running             coredns                   1                   97583cc14f115       coredns-5dd5756b68-456tm
	I0318 13:11:00.800257    2404 command_runner.go:130] > eadcf41dad509       6e38f40d628db                                                                                         25 seconds ago       Running             storage-provisioner       2                   41035eff3b7db       storage-provisioner
	I0318 13:11:00.800257    2404 command_runner.go:130] > c8e5ec25e910e       4950bb10b3f87                                                                                         About a minute ago   Running             kindnet-cni               1                   86d74dec812cf       kindnet-hhsxh
	I0318 13:11:00.800369    2404 command_runner.go:130] > 46c0cf90d385f       6e38f40d628db                                                                                         About a minute ago   Exited              storage-provisioner       1                   41035eff3b7db       storage-provisioner
	I0318 13:11:00.800369    2404 command_runner.go:130] > 163ccabc3882a       83f6cc407eed8                                                                                         About a minute ago   Running             kube-proxy                1                   a9f21749669fe       kube-proxy-mc5tv
	I0318 13:11:00.800369    2404 command_runner.go:130] > 5f0887d1e6913       73deb9a3f7025                                                                                         About a minute ago   Running             etcd                      0                   354f3c44a34fc       etcd-multinode-894400
	I0318 13:11:00.800369    2404 command_runner.go:130] > 66ee8be9fada7       e3db313c6dbc0                                                                                         About a minute ago   Running             kube-scheduler            1                   6fb3325d3c100       kube-scheduler-multinode-894400
	I0318 13:11:00.800369    2404 command_runner.go:130] > fc4430c7fa204       7fe0e6f37db33                                                                                         About a minute ago   Running             kube-apiserver            0                   bc7236a19957e       kube-apiserver-multinode-894400
	I0318 13:11:00.800369    2404 command_runner.go:130] > 4ad6784a187d6       d058aa5ab969c                                                                                         About a minute ago   Running             kube-controller-manager   1                   066206d4c52cb       kube-controller-manager-multinode-894400
	I0318 13:11:00.800637    2404 command_runner.go:130] > dd031b5cb1e85       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   19 minutes ago       Exited              busybox                   0                   a23c1189be7c3       busybox-5b5d89c9d6-c2997
	I0318 13:11:00.800637    2404 command_runner.go:130] > 693a64f7472fd       ead0a4a53df89                                                                                         23 minutes ago       Exited              coredns                   0                   d001e299e996b       coredns-5dd5756b68-456tm
	I0318 13:11:00.800637    2404 command_runner.go:130] > c4d7018ad23a7       kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988              23 minutes ago       Exited              kindnet-cni               0                   a47b1fb60692c       kindnet-hhsxh
	I0318 13:11:00.800720    2404 command_runner.go:130] > 9335855aab63d       83f6cc407eed8                                                                                         23 minutes ago       Exited              kube-proxy                0                   60e9cd749c8f6       kube-proxy-mc5tv
	I0318 13:11:00.800720    2404 command_runner.go:130] > e4d42739ce0e9       e3db313c6dbc0                                                                                         23 minutes ago       Exited              kube-scheduler            0                   82710777e700c       kube-scheduler-multinode-894400
	I0318 13:11:00.800720    2404 command_runner.go:130] > 7aa5cf4ec378e       d058aa5ab969c                                                                                         23 minutes ago       Exited              kube-controller-manager   0                   5485f509825d9       kube-controller-manager-multinode-894400
	I0318 13:11:00.803007    2404 logs.go:123] Gathering logs for dmesg ...
	I0318 13:11:00.803175    2404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:11:00.828169    2404 command_runner.go:130] > [Mar18 13:08] You have booted with nomodeset. This means your GPU drivers are DISABLED
	I0318 13:11:00.828169    2404 command_runner.go:130] > [  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	I0318 13:11:00.828169    2404 command_runner.go:130] > [  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	I0318 13:11:00.828169    2404 command_runner.go:130] > [  +0.127438] RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
	I0318 13:11:00.828169    2404 command_runner.go:130] > [  +0.022457] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	I0318 13:11:00.828169    2404 command_runner.go:130] > [  +0.000000] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	I0318 13:11:00.828169    2404 command_runner.go:130] > [  +0.000000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	I0318 13:11:00.828169    2404 command_runner.go:130] > [  +0.054196] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	I0318 13:11:00.828169    2404 command_runner.go:130] > [  +0.018424] * Found PM-Timer Bug on the chipset. Due to workarounds for a bug,
	I0318 13:11:00.828169    2404 command_runner.go:130] >               * this clock source is slow. Consider trying other clock sources
	I0318 13:11:00.828169    2404 command_runner.go:130] > [  +4.800453] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	I0318 13:11:00.828169    2404 command_runner.go:130] > [  +1.267636] psmouse serio1: trackpoint: failed to get extended button data, assuming 3 buttons
	I0318 13:11:00.829151    2404 command_runner.go:130] > [  +1.056053] systemd-fstab-generator[113]: Ignoring "noauto" option for root device
	I0318 13:11:00.829151    2404 command_runner.go:130] > [  +6.778211] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	I0318 13:11:00.829151    2404 command_runner.go:130] > [  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	I0318 13:11:00.829194    2404 command_runner.go:130] > [  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	I0318 13:11:00.829194    2404 command_runner.go:130] > [Mar18 13:09] systemd-fstab-generator[643]: Ignoring "noauto" option for root device
	I0318 13:11:00.829194    2404 command_runner.go:130] > [  +0.160643] systemd-fstab-generator[654]: Ignoring "noauto" option for root device
	I0318 13:11:00.829194    2404 command_runner.go:130] > [ +25.236158] systemd-fstab-generator[979]: Ignoring "noauto" option for root device
	I0318 13:11:00.829194    2404 command_runner.go:130] > [  +0.093711] kauditd_printk_skb: 73 callbacks suppressed
	I0318 13:11:00.829194    2404 command_runner.go:130] > [  +0.488652] systemd-fstab-generator[1018]: Ignoring "noauto" option for root device
	I0318 13:11:00.829194    2404 command_runner.go:130] > [  +0.198307] systemd-fstab-generator[1030]: Ignoring "noauto" option for root device
	I0318 13:11:00.829194    2404 command_runner.go:130] > [  +0.213157] systemd-fstab-generator[1044]: Ignoring "noauto" option for root device
	I0318 13:11:00.829194    2404 command_runner.go:130] > [  +2.866452] systemd-fstab-generator[1231]: Ignoring "noauto" option for root device
	I0318 13:11:00.829194    2404 command_runner.go:130] > [  +0.191537] systemd-fstab-generator[1243]: Ignoring "noauto" option for root device
	I0318 13:11:00.829194    2404 command_runner.go:130] > [  +0.163904] systemd-fstab-generator[1255]: Ignoring "noauto" option for root device
	I0318 13:11:00.829194    2404 command_runner.go:130] > [  +0.280650] systemd-fstab-generator[1270]: Ignoring "noauto" option for root device
	I0318 13:11:00.829194    2404 command_runner.go:130] > [  +0.822319] systemd-fstab-generator[1393]: Ignoring "noauto" option for root device
	I0318 13:11:00.829194    2404 command_runner.go:130] > [  +0.094744] kauditd_printk_skb: 205 callbacks suppressed
	I0318 13:11:00.829194    2404 command_runner.go:130] > [  +3.177820] systemd-fstab-generator[1525]: Ignoring "noauto" option for root device
	I0318 13:11:00.829194    2404 command_runner.go:130] > [  +1.898187] kauditd_printk_skb: 64 callbacks suppressed
	I0318 13:11:00.829194    2404 command_runner.go:130] > [  +5.227041] kauditd_printk_skb: 10 callbacks suppressed
	I0318 13:11:00.829194    2404 command_runner.go:130] > [  +4.065141] systemd-fstab-generator[3089]: Ignoring "noauto" option for root device
	I0318 13:11:00.829194    2404 command_runner.go:130] > [Mar18 13:10] kauditd_printk_skb: 70 callbacks suppressed
	I0318 13:11:00.830941    2404 logs.go:123] Gathering logs for kube-apiserver [fc4430c7fa20] ...
	I0318 13:11:00.830941    2404 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc4430c7fa20"
	I0318 13:11:00.857820    2404 command_runner.go:130] ! I0318 13:09:45.117348       1 options.go:220] external host was not specified, using 172.30.130.156
	I0318 13:11:00.858209    2404 command_runner.go:130] ! I0318 13:09:45.120803       1 server.go:148] Version: v1.28.4
	I0318 13:11:00.858209    2404 command_runner.go:130] ! I0318 13:09:45.120988       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 13:11:00.858209    2404 command_runner.go:130] ! I0318 13:09:45.770080       1 shared_informer.go:311] Waiting for caches to sync for node_authorizer
	I0318 13:11:00.858296    2404 command_runner.go:130] ! I0318 13:09:45.795010       1 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0318 13:11:00.858376    2404 command_runner.go:130] ! I0318 13:09:45.795318       1 plugins.go:161] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0318 13:11:00.858376    2404 command_runner.go:130] ! I0318 13:09:45.795878       1 instance.go:298] Using reconciler: lease
	I0318 13:11:00.858376    2404 command_runner.go:130] ! I0318 13:09:46.836486       1 handler.go:232] Adding GroupVersion apiextensions.k8s.io v1 to ResourceManager
	I0318 13:11:00.858451    2404 command_runner.go:130] ! W0318 13:09:46.836605       1 genericapiserver.go:744] Skipping API apiextensions.k8s.io/v1beta1 because it has no resources.
	I0318 13:11:00.858480    2404 command_runner.go:130] ! I0318 13:09:47.074638       1 handler.go:232] Adding GroupVersion  v1 to ResourceManager
	I0318 13:11:00.858510    2404 command_runner.go:130] ! I0318 13:09:47.074978       1 instance.go:709] API group "internal.apiserver.k8s.io" is not enabled, skipping.
	I0318 13:11:00.858573    2404 command_runner.go:130] ! I0318 13:09:47.452713       1 instance.go:709] API group "resource.k8s.io" is not enabled, skipping.
	I0318 13:11:00.858632    2404 command_runner.go:130] ! I0318 13:09:47.465860       1 handler.go:232] Adding GroupVersion authentication.k8s.io v1 to ResourceManager
	I0318 13:11:00.858657    2404 command_runner.go:130] ! W0318 13:09:47.465973       1 genericapiserver.go:744] Skipping API authentication.k8s.io/v1beta1 because it has no resources.
	I0318 13:11:00.858686    2404 command_runner.go:130] ! W0318 13:09:47.465981       1 genericapiserver.go:744] Skipping API authentication.k8s.io/v1alpha1 because it has no resources.
	I0318 13:11:00.858749    2404 command_runner.go:130] ! I0318 13:09:47.466706       1 handler.go:232] Adding GroupVersion authorization.k8s.io v1 to ResourceManager
	I0318 13:11:00.858749    2404 command_runner.go:130] ! W0318 13:09:47.466787       1 genericapiserver.go:744] Skipping API authorization.k8s.io/v1beta1 because it has no resources.
	I0318 13:11:00.858820    2404 command_runner.go:130] ! I0318 13:09:47.467862       1 handler.go:232] Adding GroupVersion autoscaling v2 to ResourceManager
	I0318 13:11:00.858820    2404 command_runner.go:130] ! I0318 13:09:47.468840       1 handler.go:232] Adding GroupVersion autoscaling v1 to ResourceManager
	I0318 13:11:00.858852    2404 command_runner.go:130] ! W0318 13:09:47.468926       1 genericapiserver.go:744] Skipping API autoscaling/v2beta1 because it has no resources.
	I0318 13:11:00.858880    2404 command_runner.go:130] ! W0318 13:09:47.468934       1 genericapiserver.go:744] Skipping API autoscaling/v2beta2 because it has no resources.
	I0318 13:11:00.858880    2404 command_runner.go:130] ! I0318 13:09:47.470928       1 handler.go:232] Adding GroupVersion batch v1 to ResourceManager
	I0318 13:11:00.858880    2404 command_runner.go:130] ! W0318 13:09:47.471074       1 genericapiserver.go:744] Skipping API batch/v1beta1 because it has no resources.
	I0318 13:11:00.858880    2404 command_runner.go:130] ! I0318 13:09:47.472121       1 handler.go:232] Adding GroupVersion certificates.k8s.io v1 to ResourceManager
	I0318 13:11:00.858880    2404 command_runner.go:130] ! W0318 13:09:47.472195       1 genericapiserver.go:744] Skipping API certificates.k8s.io/v1beta1 because it has no resources.
	I0318 13:11:00.858880    2404 command_runner.go:130] ! W0318 13:09:47.472202       1 genericapiserver.go:744] Skipping API certificates.k8s.io/v1alpha1 because it has no resources.
	I0318 13:11:00.858880    2404 command_runner.go:130] ! I0318 13:09:47.472773       1 handler.go:232] Adding GroupVersion coordination.k8s.io v1 to ResourceManager
	I0318 13:11:00.858880    2404 command_runner.go:130] ! W0318 13:09:47.472852       1 genericapiserver.go:744] Skipping API coordination.k8s.io/v1beta1 because it has no resources.
	I0318 13:11:00.858880    2404 command_runner.go:130] ! W0318 13:09:47.472898       1 genericapiserver.go:744] Skipping API discovery.k8s.io/v1beta1 because it has no resources.
	I0318 13:11:00.858880    2404 command_runner.go:130] ! I0318 13:09:47.473727       1 handler.go:232] Adding GroupVersion discovery.k8s.io v1 to ResourceManager
	I0318 13:11:00.858880    2404 command_runner.go:130] ! I0318 13:09:47.476475       1 handler.go:232] Adding GroupVersion networking.k8s.io v1 to ResourceManager
	I0318 13:11:00.858880    2404 command_runner.go:130] ! W0318 13:09:47.476612       1 genericapiserver.go:744] Skipping API networking.k8s.io/v1beta1 because it has no resources.
	I0318 13:11:00.858880    2404 command_runner.go:130] ! W0318 13:09:47.476620       1 genericapiserver.go:744] Skipping API networking.k8s.io/v1alpha1 because it has no resources.
	I0318 13:11:00.858880    2404 command_runner.go:130] ! I0318 13:09:47.477234       1 handler.go:232] Adding GroupVersion node.k8s.io v1 to ResourceManager
	I0318 13:11:00.858880    2404 command_runner.go:130] ! W0318 13:09:47.477314       1 genericapiserver.go:744] Skipping API node.k8s.io/v1beta1 because it has no resources.
	I0318 13:11:00.858880    2404 command_runner.go:130] ! W0318 13:09:47.477321       1 genericapiserver.go:744] Skipping API node.k8s.io/v1alpha1 because it has no resources.
	I0318 13:11:00.858880    2404 command_runner.go:130] ! I0318 13:09:47.478143       1 handler.go:232] Adding GroupVersion policy v1 to ResourceManager
	I0318 13:11:00.858880    2404 command_runner.go:130] ! W0318 13:09:47.478217       1 genericapiserver.go:744] Skipping API policy/v1beta1 because it has no resources.
	I0318 13:11:00.858880    2404 command_runner.go:130] ! I0318 13:09:47.480195       1 handler.go:232] Adding GroupVersion rbac.authorization.k8s.io v1 to ResourceManager
	I0318 13:11:00.858880    2404 command_runner.go:130] ! W0318 13:09:47.480271       1 genericapiserver.go:744] Skipping API rbac.authorization.k8s.io/v1beta1 because it has no resources.
	I0318 13:11:00.858880    2404 command_runner.go:130] ! W0318 13:09:47.480279       1 genericapiserver.go:744] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
	I0318 13:11:00.858880    2404 command_runner.go:130] ! I0318 13:09:47.480731       1 handler.go:232] Adding GroupVersion scheduling.k8s.io v1 to ResourceManager
	I0318 13:11:00.858880    2404 command_runner.go:130] ! W0318 13:09:47.480812       1 genericapiserver.go:744] Skipping API scheduling.k8s.io/v1beta1 because it has no resources.
	I0318 13:11:00.858880    2404 command_runner.go:130] ! W0318 13:09:47.480819       1 genericapiserver.go:744] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
	I0318 13:11:00.858880    2404 command_runner.go:130] ! I0318 13:09:47.493837       1 handler.go:232] Adding GroupVersion storage.k8s.io v1 to ResourceManager
	I0318 13:11:00.858880    2404 command_runner.go:130] ! W0318 13:09:47.494098       1 genericapiserver.go:744] Skipping API storage.k8s.io/v1beta1 because it has no resources.
	I0318 13:11:00.858880    2404 command_runner.go:130] ! W0318 13:09:47.494198       1 genericapiserver.go:744] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
	I0318 13:11:00.858880    2404 command_runner.go:130] ! I0318 13:09:47.499689       1 handler.go:232] Adding GroupVersion flowcontrol.apiserver.k8s.io v1beta3 to ResourceManager
	I0318 13:11:00.858880    2404 command_runner.go:130] ! I0318 13:09:47.506631       1 handler.go:232] Adding GroupVersion flowcontrol.apiserver.k8s.io v1beta2 to ResourceManager
	I0318 13:11:00.858880    2404 command_runner.go:130] ! W0318 13:09:47.506664       1 genericapiserver.go:744] Skipping API flowcontrol.apiserver.k8s.io/v1beta1 because it has no resources.
	I0318 13:11:00.858880    2404 command_runner.go:130] ! W0318 13:09:47.506671       1 genericapiserver.go:744] Skipping API flowcontrol.apiserver.k8s.io/v1alpha1 because it has no resources.
	I0318 13:11:00.858880    2404 command_runner.go:130] ! I0318 13:09:47.512288       1 handler.go:232] Adding GroupVersion apps v1 to ResourceManager
	I0318 13:11:00.858880    2404 command_runner.go:130] ! W0318 13:09:47.512371       1 genericapiserver.go:744] Skipping API apps/v1beta2 because it has no resources.
	I0318 13:11:00.858880    2404 command_runner.go:130] ! W0318 13:09:47.512378       1 genericapiserver.go:744] Skipping API apps/v1beta1 because it has no resources.
	I0318 13:11:00.858880    2404 command_runner.go:130] ! I0318 13:09:47.513443       1 handler.go:232] Adding GroupVersion admissionregistration.k8s.io v1 to ResourceManager
	I0318 13:11:00.858880    2404 command_runner.go:130] ! W0318 13:09:47.513547       1 genericapiserver.go:744] Skipping API admissionregistration.k8s.io/v1beta1 because it has no resources.
	I0318 13:11:00.858880    2404 command_runner.go:130] ! W0318 13:09:47.513557       1 genericapiserver.go:744] Skipping API admissionregistration.k8s.io/v1alpha1 because it has no resources.
	I0318 13:11:00.858880    2404 command_runner.go:130] ! I0318 13:09:47.514339       1 handler.go:232] Adding GroupVersion events.k8s.io v1 to ResourceManager
	I0318 13:11:00.858880    2404 command_runner.go:130] ! W0318 13:09:47.514435       1 genericapiserver.go:744] Skipping API events.k8s.io/v1beta1 because it has no resources.
	I0318 13:11:00.859506    2404 command_runner.go:130] ! I0318 13:09:47.536002       1 handler.go:232] Adding GroupVersion apiregistration.k8s.io v1 to ResourceManager
	I0318 13:11:00.859506    2404 command_runner.go:130] ! W0318 13:09:47.536061       1 genericapiserver.go:744] Skipping API apiregistration.k8s.io/v1beta1 because it has no resources.
	I0318 13:11:00.859506    2404 command_runner.go:130] ! I0318 13:09:48.221475       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0318 13:11:00.859506    2404 command_runner.go:130] ! I0318 13:09:48.221960       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0318 13:11:00.859506    2404 command_runner.go:130] ! I0318 13:09:48.222438       1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	I0318 13:11:00.859506    2404 command_runner.go:130] ! I0318 13:09:48.222942       1 secure_serving.go:213] Serving securely on [::]:8443
	I0318 13:11:00.859506    2404 command_runner.go:130] ! I0318 13:09:48.223022       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0318 13:11:00.859670    2404 command_runner.go:130] ! I0318 13:09:48.223440       1 controller.go:78] Starting OpenAPI AggregationController
	I0318 13:11:00.859670    2404 command_runner.go:130] ! I0318 13:09:48.224862       1 gc_controller.go:78] Starting apiserver lease garbage collector
	I0318 13:11:00.859670    2404 command_runner.go:130] ! I0318 13:09:48.225271       1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
	I0318 13:11:00.859670    2404 command_runner.go:130] ! I0318 13:09:48.225417       1 shared_informer.go:311] Waiting for caches to sync for cluster_authentication_trust_controller
	I0318 13:11:00.859670    2404 command_runner.go:130] ! I0318 13:09:48.225564       1 apf_controller.go:372] Starting API Priority and Fairness config controller
	I0318 13:11:00.859756    2404 command_runner.go:130] ! I0318 13:09:48.228940       1 gc_controller.go:78] Starting apiserver lease garbage collector
	I0318 13:11:00.859756    2404 command_runner.go:130] ! I0318 13:09:48.229462       1 controller.go:116] Starting legacy_token_tracking_controller
	I0318 13:11:00.859756    2404 command_runner.go:130] ! I0318 13:09:48.229644       1 shared_informer.go:311] Waiting for caches to sync for configmaps
	I0318 13:11:00.859756    2404 command_runner.go:130] ! I0318 13:09:48.230522       1 system_namespaces_controller.go:67] Starting system namespaces controller
	I0318 13:11:00.859756    2404 command_runner.go:130] ! I0318 13:09:48.230832       1 controller.go:80] Starting OpenAPI V3 AggregationController
	I0318 13:11:00.859836    2404 command_runner.go:130] ! I0318 13:09:48.231097       1 aggregator.go:164] waiting for initial CRD sync...
	I0318 13:11:00.859836    2404 command_runner.go:130] ! I0318 13:09:48.231395       1 customresource_discovery_controller.go:289] Starting DiscoveryController
	I0318 13:11:00.859836    2404 command_runner.go:130] ! I0318 13:09:48.231642       1 available_controller.go:423] Starting AvailableConditionController
	I0318 13:11:00.859902    2404 command_runner.go:130] ! I0318 13:09:48.231846       1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
	I0318 13:11:00.859902    2404 command_runner.go:130] ! I0318 13:09:48.232024       1 dynamic_serving_content.go:132] "Starting controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	I0318 13:11:00.859959    2404 command_runner.go:130] ! I0318 13:09:48.232223       1 apiservice_controller.go:97] Starting APIServiceRegistrationController
	I0318 13:11:00.859982    2404 command_runner.go:130] ! I0318 13:09:48.232638       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
	I0318 13:11:00.860068    2404 command_runner.go:130] ! I0318 13:09:48.233228       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0318 13:11:00.860068    2404 command_runner.go:130] ! I0318 13:09:48.233501       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0318 13:11:00.860131    2404 command_runner.go:130] ! I0318 13:09:48.242598       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0318 13:11:00.860154    2404 command_runner.go:130] ! I0318 13:09:48.242850       1 shared_informer.go:311] Waiting for caches to sync for crd-autoregister
	I0318 13:11:00.860181    2404 command_runner.go:130] ! I0318 13:09:48.243085       1 controller.go:134] Starting OpenAPI controller
	I0318 13:11:00.860237    2404 command_runner.go:130] ! I0318 13:09:48.243289       1 controller.go:85] Starting OpenAPI V3 controller
	I0318 13:11:00.860237    2404 command_runner.go:130] ! I0318 13:09:48.243558       1 naming_controller.go:291] Starting NamingConditionController
	I0318 13:11:00.860315    2404 command_runner.go:130] ! I0318 13:09:48.243852       1 establishing_controller.go:76] Starting EstablishingController
	I0318 13:11:00.860341    2404 command_runner.go:130] ! I0318 13:09:48.244899       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0318 13:11:00.860398    2404 command_runner.go:130] ! I0318 13:09:48.245178       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0318 13:11:00.860398    2404 command_runner.go:130] ! I0318 13:09:48.245796       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0318 13:11:00.860398    2404 command_runner.go:130] ! I0318 13:09:48.231958       1 handler_discovery.go:412] Starting ResourceDiscoveryManager
	I0318 13:11:00.860398    2404 command_runner.go:130] ! I0318 13:09:48.403749       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0318 13:11:00.860502    2404 command_runner.go:130] ! I0318 13:09:48.426183       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I0318 13:11:00.860502    2404 command_runner.go:130] ! I0318 13:09:48.426213       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I0318 13:11:00.860502    2404 command_runner.go:130] ! I0318 13:09:48.426382       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0318 13:11:00.860502    2404 command_runner.go:130] ! I0318 13:09:48.432175       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0318 13:11:00.860502    2404 command_runner.go:130] ! I0318 13:09:48.433073       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0318 13:11:00.860502    2404 command_runner.go:130] ! I0318 13:09:48.433297       1 shared_informer.go:318] Caches are synced for configmaps
	I0318 13:11:00.860502    2404 command_runner.go:130] ! I0318 13:09:48.444484       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0318 13:11:00.860502    2404 command_runner.go:130] ! I0318 13:09:48.444708       1 aggregator.go:166] initial CRD sync complete...
	I0318 13:11:00.860502    2404 command_runner.go:130] ! I0318 13:09:48.444961       1 autoregister_controller.go:141] Starting autoregister controller
	I0318 13:11:00.860502    2404 command_runner.go:130] ! I0318 13:09:48.445263       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0318 13:11:00.860502    2404 command_runner.go:130] ! I0318 13:09:48.446443       1 cache.go:39] Caches are synced for autoregister controller
	I0318 13:11:00.860502    2404 command_runner.go:130] ! I0318 13:09:48.471536       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0318 13:11:00.860502    2404 command_runner.go:130] ! I0318 13:09:49.257477       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0318 13:11:00.860502    2404 command_runner.go:130] ! W0318 13:09:49.806994       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [172.30.130.156]
	I0318 13:11:00.860502    2404 command_runner.go:130] ! I0318 13:09:49.809655       1 controller.go:624] quota admission added evaluator for: endpoints
	I0318 13:11:00.860502    2404 command_runner.go:130] ! I0318 13:09:49.821460       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0318 13:11:00.860502    2404 command_runner.go:130] ! I0318 13:09:51.622752       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0318 13:11:00.860502    2404 command_runner.go:130] ! I0318 13:09:51.799195       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0318 13:11:00.860502    2404 command_runner.go:130] ! I0318 13:09:51.812022       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0318 13:11:00.860502    2404 command_runner.go:130] ! I0318 13:09:51.930541       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0318 13:11:00.860502    2404 command_runner.go:130] ! I0318 13:09:51.942099       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0318 13:11:00.868261    2404 logs.go:123] Gathering logs for kube-scheduler [e4d42739ce0e] ...
	I0318 13:11:00.868261    2404 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4d42739ce0e"
	I0318 13:11:00.894583    2404 command_runner.go:130] ! I0318 12:47:23.427784       1 serving.go:348] Generated self-signed cert in-memory
	I0318 13:11:00.894583    2404 command_runner.go:130] ! W0318 12:47:24.381993       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	I0318 13:11:00.895787    2404 command_runner.go:130] ! W0318 12:47:24.382186       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	I0318 13:11:00.895866    2404 command_runner.go:130] ! W0318 12:47:24.382237       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	I0318 13:11:00.895905    2404 command_runner.go:130] ! W0318 12:47:24.382251       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0318 13:11:00.895905    2404 command_runner.go:130] ! I0318 12:47:24.461225       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.4"
	I0318 13:11:00.895905    2404 command_runner.go:130] ! I0318 12:47:24.461511       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 13:11:00.895954    2404 command_runner.go:130] ! I0318 12:47:24.465946       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0318 13:11:00.895994    2404 command_runner.go:130] ! I0318 12:47:24.466246       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0318 13:11:00.896313    2404 command_runner.go:130] ! I0318 12:47:24.466280       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0318 13:11:00.896536    2404 command_runner.go:130] ! I0318 12:47:24.473793       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0318 13:11:00.897482    2404 command_runner.go:130] ! W0318 12:47:24.487135       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0318 13:11:00.897511    2404 command_runner.go:130] ! E0318 12:47:24.487240       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0318 13:11:00.897511    2404 command_runner.go:130] ! W0318 12:47:24.519325       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0318 13:11:00.897511    2404 command_runner.go:130] ! E0318 12:47:24.519853       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0318 13:11:00.897511    2404 command_runner.go:130] ! W0318 12:47:24.520361       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0318 13:11:00.897511    2404 command_runner.go:130] ! E0318 12:47:24.520484       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0318 13:11:00.898059    2404 command_runner.go:130] ! W0318 12:47:24.520711       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0318 13:11:00.898059    2404 command_runner.go:130] ! E0318 12:47:24.522735       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0318 13:11:00.898107    2404 command_runner.go:130] ! W0318 12:47:24.523312       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0318 13:11:00.898107    2404 command_runner.go:130] ! E0318 12:47:24.523462       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0318 13:11:00.898107    2404 command_runner.go:130] ! W0318 12:47:24.523710       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0318 13:11:00.898107    2404 command_runner.go:130] ! E0318 12:47:24.523900       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0318 13:11:00.898107    2404 command_runner.go:130] ! W0318 12:47:24.524226       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0318 13:11:00.898107    2404 command_runner.go:130] ! E0318 12:47:24.524422       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0318 13:11:00.898107    2404 command_runner.go:130] ! W0318 12:47:24.524710       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0318 13:11:00.898107    2404 command_runner.go:130] ! E0318 12:47:24.525125       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0318 13:11:00.898107    2404 command_runner.go:130] ! W0318 12:47:24.525523       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0318 13:11:00.898107    2404 command_runner.go:130] ! E0318 12:47:24.525746       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0318 13:11:00.898107    2404 command_runner.go:130] ! W0318 12:47:24.526240       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0318 13:11:00.898107    2404 command_runner.go:130] ! E0318 12:47:24.526443       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0318 13:11:00.898107    2404 command_runner.go:130] ! W0318 12:47:24.526703       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0318 13:11:00.898685    2404 command_runner.go:130] ! E0318 12:47:24.526852       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0318 13:11:00.898797    2404 command_runner.go:130] ! W0318 12:47:24.527382       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0318 13:11:00.898919    2404 command_runner.go:130] ! E0318 12:47:24.527873       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0318 13:11:00.898919    2404 command_runner.go:130] ! W0318 12:47:24.528117       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0318 13:11:00.898919    2404 command_runner.go:130] ! E0318 12:47:24.528748       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0318 13:11:00.898978    2404 command_runner.go:130] ! W0318 12:47:24.529179       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0318 13:11:00.899120    2404 command_runner.go:130] ! E0318 12:47:24.529832       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0318 13:11:00.899120    2404 command_runner.go:130] ! W0318 12:47:24.530406       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0318 13:11:00.899187    2404 command_runner.go:130] ! E0318 12:47:24.532696       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0318 13:11:00.899242    2404 command_runner.go:130] ! W0318 12:47:25.371082       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0318 13:11:00.899262    2404 command_runner.go:130] ! E0318 12:47:25.371130       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0318 13:11:00.899262    2404 command_runner.go:130] ! W0318 12:47:25.374605       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0318 13:11:00.899345    2404 command_runner.go:130] ! E0318 12:47:25.374678       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0318 13:11:00.899426    2404 command_runner.go:130] ! W0318 12:47:25.400777       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0318 13:11:00.899469    2404 command_runner.go:130] ! E0318 12:47:25.400820       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0318 13:11:00.899519    2404 command_runner.go:130] ! W0318 12:47:25.434442       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0318 13:11:00.899519    2404 command_runner.go:130] ! E0318 12:47:25.434526       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0318 13:11:00.899519    2404 command_runner.go:130] ! W0318 12:47:25.456878       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0318 13:11:00.899519    2404 command_runner.go:130] ! E0318 12:47:25.457121       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0318 13:11:00.899519    2404 command_runner.go:130] ! W0318 12:47:25.744652       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0318 13:11:00.899519    2404 command_runner.go:130] ! E0318 12:47:25.744733       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0318 13:11:00.899519    2404 command_runner.go:130] ! W0318 12:47:25.777073       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0318 13:11:00.899519    2404 command_runner.go:130] ! E0318 12:47:25.777145       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0318 13:11:00.899519    2404 command_runner.go:130] ! W0318 12:47:25.850949       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0318 13:11:00.899519    2404 command_runner.go:130] ! E0318 12:47:25.850985       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0318 13:11:00.899519    2404 command_runner.go:130] ! W0318 12:47:25.876908       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0318 13:11:00.899519    2404 command_runner.go:130] ! E0318 12:47:25.877170       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0318 13:11:00.899519    2404 command_runner.go:130] ! W0318 12:47:25.892072       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0318 13:11:00.899519    2404 command_runner.go:130] ! E0318 12:47:25.892099       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0318 13:11:00.899519    2404 command_runner.go:130] ! W0318 12:47:25.988864       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0318 13:11:00.899519    2404 command_runner.go:130] ! E0318 12:47:25.988912       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0318 13:11:00.900051    2404 command_runner.go:130] ! W0318 12:47:26.044749       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0318 13:11:00.900051    2404 command_runner.go:130] ! E0318 12:47:26.044834       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0318 13:11:00.900127    2404 command_runner.go:130] ! W0318 12:47:26.067659       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0318 13:11:00.900127    2404 command_runner.go:130] ! E0318 12:47:26.068250       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0318 13:11:00.900127    2404 command_runner.go:130] ! I0318 12:47:28.178584       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0318 13:11:00.900127    2404 command_runner.go:130] ! I0318 13:07:24.107367       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0318 13:11:00.900215    2404 command_runner.go:130] ! I0318 13:07:24.107975       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0318 13:11:00.900239    2404 command_runner.go:130] ! E0318 13:07:24.108193       1 run.go:74] "command failed" err="finished without leader elect"
	I0318 13:11:00.910075    2404 logs.go:123] Gathering logs for kube-proxy [163ccabc3882] ...
	I0318 13:11:00.910075    2404 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 163ccabc3882"
	I0318 13:11:00.938523    2404 command_runner.go:130] ! I0318 13:09:50.786718       1 server_others.go:69] "Using iptables proxy"
	I0318 13:11:00.938570    2404 command_runner.go:130] ! I0318 13:09:50.833991       1 node.go:141] Successfully retrieved node IP: 172.30.130.156
	I0318 13:11:00.938570    2404 command_runner.go:130] ! I0318 13:09:50.913665       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0318 13:11:00.938570    2404 command_runner.go:130] ! I0318 13:09:50.913704       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0318 13:11:00.938570    2404 command_runner.go:130] ! I0318 13:09:50.924640       1 server_others.go:152] "Using iptables Proxier"
	I0318 13:11:00.938747    2404 command_runner.go:130] ! I0318 13:09:50.925588       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0318 13:11:00.939527    2404 command_runner.go:130] ! I0318 13:09:50.926722       1 server.go:846] "Version info" version="v1.28.4"
	I0318 13:11:00.939527    2404 command_runner.go:130] ! I0318 13:09:50.926981       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 13:11:00.939588    2404 command_runner.go:130] ! I0318 13:09:50.938764       1 config.go:188] "Starting service config controller"
	I0318 13:11:00.939588    2404 command_runner.go:130] ! I0318 13:09:50.949206       1 config.go:97] "Starting endpoint slice config controller"
	I0318 13:11:00.939588    2404 command_runner.go:130] ! I0318 13:09:50.949220       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0318 13:11:00.939647    2404 command_runner.go:130] ! I0318 13:09:50.953299       1 config.go:315] "Starting node config controller"
	I0318 13:11:00.939647    2404 command_runner.go:130] ! I0318 13:09:50.979020       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0318 13:11:00.939647    2404 command_runner.go:130] ! I0318 13:09:50.990249       1 shared_informer.go:318] Caches are synced for node config
	I0318 13:11:00.939647    2404 command_runner.go:130] ! I0318 13:09:50.958488       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0318 13:11:00.939707    2404 command_runner.go:130] ! I0318 13:09:50.996356       1 shared_informer.go:318] Caches are synced for service config
	I0318 13:11:00.939749    2404 command_runner.go:130] ! I0318 13:09:51.051947       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0318 13:11:00.942342    2404 logs.go:123] Gathering logs for kube-controller-manager [4ad6784a187d] ...
	I0318 13:11:00.942421    2404 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ad6784a187d"
	I0318 13:11:00.967283    2404 command_runner.go:130] ! I0318 13:09:46.053304       1 serving.go:348] Generated self-signed cert in-memory
	I0318 13:11:00.967591    2404 command_runner.go:130] ! I0318 13:09:46.598188       1 controllermanager.go:189] "Starting" version="v1.28.4"
	I0318 13:11:00.967591    2404 command_runner.go:130] ! I0318 13:09:46.598275       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 13:11:00.967591    2404 command_runner.go:130] ! I0318 13:09:46.600550       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0318 13:11:00.967591    2404 command_runner.go:130] ! I0318 13:09:46.600856       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0318 13:11:00.967758    2404 command_runner.go:130] ! I0318 13:09:46.601228       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0318 13:11:00.967758    2404 command_runner.go:130] ! I0318 13:09:46.601416       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0318 13:11:00.967758    2404 command_runner.go:130] ! I0318 13:09:50.365580       1 shared_informer.go:311] Waiting for caches to sync for tokens
	I0318 13:11:00.967758    2404 command_runner.go:130] ! I0318 13:09:50.380467       1 controllermanager.go:642] "Started controller" controller="certificatesigningrequest-approving-controller"
	I0318 13:11:00.967860    2404 command_runner.go:130] ! I0318 13:09:50.380609       1 certificate_controller.go:115] "Starting certificate controller" name="csrapproving"
	I0318 13:11:00.967860    2404 command_runner.go:130] ! I0318 13:09:50.380622       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrapproving
	I0318 13:11:00.967860    2404 command_runner.go:130] ! I0318 13:09:50.396606       1 controllermanager.go:642] "Started controller" controller="certificatesigningrequest-cleaner-controller"
	I0318 13:11:00.967976    2404 command_runner.go:130] ! I0318 13:09:50.396766       1 cleaner.go:83] "Starting CSR cleaner controller"
	I0318 13:11:00.967976    2404 command_runner.go:130] ! I0318 13:09:50.466364       1 shared_informer.go:318] Caches are synced for tokens
	I0318 13:11:00.968058    2404 command_runner.go:130] ! I0318 13:10:00.425018       1 range_allocator.go:111] "No Secondary Service CIDR provided. Skipping filtering out secondary service addresses"
	I0318 13:11:00.968058    2404 command_runner.go:130] ! I0318 13:10:00.425185       1 controllermanager.go:642] "Started controller" controller="node-ipam-controller"
	I0318 13:11:00.968058    2404 command_runner.go:130] ! I0318 13:10:00.425608       1 node_ipam_controller.go:162] "Starting ipam controller"
	I0318 13:11:00.968136    2404 command_runner.go:130] ! I0318 13:10:00.425649       1 shared_informer.go:311] Waiting for caches to sync for node
	I0318 13:11:00.968136    2404 command_runner.go:130] ! I0318 13:10:00.429368       1 controllermanager.go:642] "Started controller" controller="root-ca-certificate-publisher-controller"
	I0318 13:11:00.968214    2404 command_runner.go:130] ! I0318 13:10:00.429570       1 publisher.go:102] "Starting root CA cert publisher controller"
	I0318 13:11:00.968260    2404 command_runner.go:130] ! I0318 13:10:00.429653       1 shared_informer.go:311] Waiting for caches to sync for crt configmap
	I0318 13:11:00.968313    2404 command_runner.go:130] ! I0318 13:10:00.432615       1 controllermanager.go:642] "Started controller" controller="endpointslice-mirroring-controller"
	I0318 13:11:00.968313    2404 command_runner.go:130] ! I0318 13:10:00.435149       1 endpointslicemirroring_controller.go:223] "Starting EndpointSliceMirroring controller"
	I0318 13:11:00.968360    2404 command_runner.go:130] ! I0318 13:10:00.435476       1 shared_informer.go:311] Waiting for caches to sync for endpoint_slice_mirroring
	I0318 13:11:00.968400    2404 command_runner.go:130] ! I0318 13:10:00.435957       1 controllermanager.go:642] "Started controller" controller="ttl-controller"
	I0318 13:11:00.968400    2404 command_runner.go:130] ! I0318 13:10:00.436324       1 ttl_controller.go:124] "Starting TTL controller"
	I0318 13:11:00.968449    2404 command_runner.go:130] ! I0318 13:10:00.436534       1 shared_informer.go:311] Waiting for caches to sync for TTL
	I0318 13:11:00.968449    2404 command_runner.go:130] ! E0318 13:10:00.440226       1 core.go:92] "Failed to start service controller" err="WARNING: no cloud provider provided, services of type LoadBalancer will fail"
	I0318 13:11:00.968449    2404 command_runner.go:130] ! I0318 13:10:00.440586       1 controllermanager.go:620] "Warning: skipping controller" controller="service-lb-controller"
	I0318 13:11:00.968539    2404 command_runner.go:130] ! E0318 13:10:00.443615       1 core.go:213] "Failed to start cloud node lifecycle controller" err="no cloud provider provided"
	I0318 13:11:00.968539    2404 command_runner.go:130] ! I0318 13:10:00.443912       1 controllermanager.go:620] "Warning: skipping controller" controller="cloud-node-lifecycle-controller"
	I0318 13:11:00.968618    2404 command_runner.go:130] ! I0318 13:10:00.446716       1 controllermanager.go:642] "Started controller" controller="ttl-after-finished-controller"
	I0318 13:11:00.968695    2404 command_runner.go:130] ! I0318 13:10:00.446764       1 ttlafterfinished_controller.go:109] "Starting TTL after finished controller"
	I0318 13:11:00.968695    2404 command_runner.go:130] ! I0318 13:10:00.447388       1 shared_informer.go:311] Waiting for caches to sync for TTL after finished
	I0318 13:11:00.968695    2404 command_runner.go:130] ! I0318 13:10:00.450136       1 controllermanager.go:642] "Started controller" controller="replicationcontroller-controller"
	I0318 13:11:00.968782    2404 command_runner.go:130] ! I0318 13:10:00.450514       1 replica_set.go:214] "Starting controller" name="replicationcontroller"
	I0318 13:11:00.968782    2404 command_runner.go:130] ! I0318 13:10:00.450816       1 shared_informer.go:311] Waiting for caches to sync for ReplicationController
	I0318 13:11:00.968861    2404 command_runner.go:130] ! I0318 13:10:00.482128       1 controllermanager.go:642] "Started controller" controller="namespace-controller"
	I0318 13:11:00.968938    2404 command_runner.go:130] ! I0318 13:10:00.482431       1 namespace_controller.go:197] "Starting namespace controller"
	I0318 13:11:00.968938    2404 command_runner.go:130] ! I0318 13:10:00.482564       1 shared_informer.go:311] Waiting for caches to sync for namespace
	I0318 13:11:00.969017    2404 command_runner.go:130] ! I0318 13:10:00.485138       1 controllermanager.go:642] "Started controller" controller="token-cleaner-controller"
	I0318 13:11:00.969017    2404 command_runner.go:130] ! I0318 13:10:00.485477       1 tokencleaner.go:112] "Starting token cleaner controller"
	I0318 13:11:00.969017    2404 command_runner.go:130] ! I0318 13:10:00.485637       1 shared_informer.go:311] Waiting for caches to sync for token_cleaner
	I0318 13:11:00.969096    2404 command_runner.go:130] ! I0318 13:10:00.485765       1 shared_informer.go:318] Caches are synced for token_cleaner
	I0318 13:11:00.969096    2404 command_runner.go:130] ! I0318 13:10:00.487736       1 controllermanager.go:642] "Started controller" controller="pod-garbage-collector-controller"
	I0318 13:11:00.969173    2404 command_runner.go:130] ! I0318 13:10:00.488836       1 gc_controller.go:101] "Starting GC controller"
	I0318 13:11:00.969173    2404 command_runner.go:130] ! I0318 13:10:00.489018       1 shared_informer.go:311] Waiting for caches to sync for GC
	I0318 13:11:00.969173    2404 command_runner.go:130] ! I0318 13:10:00.490586       1 controllermanager.go:642] "Started controller" controller="cronjob-controller"
	I0318 13:11:00.969251    2404 command_runner.go:130] ! I0318 13:10:00.491164       1 cronjob_controllerv2.go:139] "Starting cronjob controller v2"
	I0318 13:11:00.969251    2404 command_runner.go:130] ! I0318 13:10:00.491311       1 shared_informer.go:311] Waiting for caches to sync for cronjob
	I0318 13:11:00.969329    2404 command_runner.go:130] ! I0318 13:10:00.494562       1 controllermanager.go:642] "Started controller" controller="persistentvolume-binder-controller"
	I0318 13:11:00.969329    2404 command_runner.go:130] ! I0318 13:10:00.495002       1 pv_controller_base.go:319] "Starting persistent volume controller"
	I0318 13:11:00.969329    2404 command_runner.go:130] ! I0318 13:10:00.495133       1 shared_informer.go:311] Waiting for caches to sync for persistent volume
	I0318 13:11:00.969406    2404 command_runner.go:130] ! I0318 13:10:00.497694       1 controllermanager.go:642] "Started controller" controller="persistentvolume-expander-controller"
	I0318 13:11:00.969406    2404 command_runner.go:130] ! I0318 13:10:00.497986       1 expand_controller.go:328] "Starting expand controller"
	I0318 13:11:00.969497    2404 command_runner.go:130] ! I0318 13:10:00.498025       1 shared_informer.go:311] Waiting for caches to sync for expand
	I0318 13:11:00.969527    2404 command_runner.go:130] ! I0318 13:10:00.500933       1 controllermanager.go:642] "Started controller" controller="clusterrole-aggregation-controller"
	I0318 13:11:00.969527    2404 command_runner.go:130] ! I0318 13:10:00.502880       1 clusterroleaggregation_controller.go:189] "Starting ClusterRoleAggregator controller"
	I0318 13:11:00.969605    2404 command_runner.go:130] ! I0318 13:10:00.503102       1 shared_informer.go:311] Waiting for caches to sync for ClusterRoleAggregator
	I0318 13:11:00.969605    2404 command_runner.go:130] ! I0318 13:10:00.506760       1 controllermanager.go:642] "Started controller" controller="disruption-controller"
	I0318 13:11:00.969683    2404 command_runner.go:130] ! I0318 13:10:00.507227       1 disruption.go:433] "Sending events to api server."
	I0318 13:11:00.969683    2404 command_runner.go:130] ! I0318 13:10:00.507302       1 disruption.go:444] "Starting disruption controller"
	I0318 13:11:00.969683    2404 command_runner.go:130] ! I0318 13:10:00.507366       1 shared_informer.go:311] Waiting for caches to sync for disruption
	I0318 13:11:00.969761    2404 command_runner.go:130] ! I0318 13:10:00.509815       1 controllermanager.go:642] "Started controller" controller="deployment-controller"
	I0318 13:11:00.969761    2404 command_runner.go:130] ! I0318 13:10:00.510402       1 deployment_controller.go:168] "Starting controller" controller="deployment"
	I0318 13:11:00.969838    2404 command_runner.go:130] ! I0318 13:10:00.510478       1 shared_informer.go:311] Waiting for caches to sync for deployment
	I0318 13:11:00.969838    2404 command_runner.go:130] ! I0318 13:10:00.514582       1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-kubelet-serving"
	I0318 13:11:00.969915    2404 command_runner.go:130] ! I0318 13:10:00.514842       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I0318 13:11:00.969993    2404 command_runner.go:130] ! I0318 13:10:00.514832       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0318 13:11:00.969993    2404 command_runner.go:130] ! I0318 13:10:00.517859       1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-kubelet-client"
	I0318 13:11:00.970072    2404 command_runner.go:130] ! I0318 13:10:00.518134       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I0318 13:11:00.970072    2404 command_runner.go:130] ! I0318 13:10:00.518434       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0318 13:11:00.970149    2404 command_runner.go:130] ! I0318 13:10:00.519400       1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-kube-apiserver-client"
	I0318 13:11:00.970149    2404 command_runner.go:130] ! I0318 13:10:00.519576       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I0318 13:11:00.970227    2404 command_runner.go:130] ! I0318 13:10:00.519729       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0318 13:11:00.970338    2404 command_runner.go:130] ! I0318 13:10:00.519883       1 controllermanager.go:642] "Started controller" controller="certificatesigningrequest-signing-controller"
	I0318 13:11:00.970338    2404 command_runner.go:130] ! I0318 13:10:00.519902       1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-legacy-unknown"
	I0318 13:11:00.970338    2404 command_runner.go:130] ! I0318 13:10:00.520909       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0318 13:11:00.970422    2404 command_runner.go:130] ! I0318 13:10:00.519914       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0318 13:11:00.970422    2404 command_runner.go:130] ! I0318 13:10:00.524690       1 controllermanager.go:642] "Started controller" controller="persistentvolume-attach-detach-controller"
	I0318 13:11:00.970422    2404 command_runner.go:130] ! I0318 13:10:00.524967       1 attach_detach_controller.go:337] "Starting attach detach controller"
	I0318 13:11:00.970501    2404 command_runner.go:130] ! I0318 13:10:00.525267       1 shared_informer.go:311] Waiting for caches to sync for attach detach
	I0318 13:11:00.970521    2404 command_runner.go:130] ! I0318 13:10:00.528248       1 controllermanager.go:642] "Started controller" controller="serviceaccount-controller"
	I0318 13:11:00.970521    2404 command_runner.go:130] ! I0318 13:10:00.528509       1 serviceaccounts_controller.go:111] "Starting service account controller"
	I0318 13:11:00.970521    2404 command_runner.go:130] ! I0318 13:10:00.528721       1 shared_informer.go:311] Waiting for caches to sync for service account
	I0318 13:11:00.970574    2404 command_runner.go:130] ! I0318 13:10:00.532254       1 controllermanager.go:642] "Started controller" controller="daemonset-controller"
	I0318 13:11:00.970592    2404 command_runner.go:130] ! I0318 13:10:00.532687       1 daemon_controller.go:291] "Starting daemon sets controller"
	I0318 13:11:00.970592    2404 command_runner.go:130] ! I0318 13:10:00.532717       1 shared_informer.go:311] Waiting for caches to sync for daemon sets
	I0318 13:11:00.970592    2404 command_runner.go:130] ! I0318 13:10:00.544900       1 controllermanager.go:642] "Started controller" controller="horizontal-pod-autoscaler-controller"
	I0318 13:11:00.970647    2404 command_runner.go:130] ! I0318 13:10:00.545135       1 horizontal.go:200] "Starting HPA controller"
	I0318 13:11:00.970647    2404 command_runner.go:130] ! I0318 13:10:00.545195       1 shared_informer.go:311] Waiting for caches to sync for HPA
	I0318 13:11:00.970671    2404 command_runner.go:130] ! I0318 13:10:00.547641       1 controllermanager.go:642] "Started controller" controller="bootstrap-signer-controller"
	I0318 13:11:00.970671    2404 command_runner.go:130] ! I0318 13:10:00.548078       1 shared_informer.go:311] Waiting for caches to sync for bootstrap_signer
	I0318 13:11:00.970713    2404 command_runner.go:130] ! I0318 13:10:00.550784       1 node_lifecycle_controller.go:431] "Controller will reconcile labels"
	I0318 13:11:00.970713    2404 command_runner.go:130] ! I0318 13:10:00.551368       1 controllermanager.go:642] "Started controller" controller="node-lifecycle-controller"
	I0318 13:11:00.970713    2404 command_runner.go:130] ! I0318 13:10:00.551557       1 core.go:228] "Warning: configure-cloud-routes is set, but no cloud provider specified. Will not configure cloud provider routes."
	I0318 13:11:00.970713    2404 command_runner.go:130] ! I0318 13:10:00.551931       1 controllermanager.go:620] "Warning: skipping controller" controller="node-route-controller"
	I0318 13:11:00.970713    2404 command_runner.go:130] ! I0318 13:10:00.551452       1 node_lifecycle_controller.go:465] "Sending events to api server"
	I0318 13:11:00.970713    2404 command_runner.go:130] ! I0318 13:10:00.553190       1 node_lifecycle_controller.go:476] "Starting node controller"
	I0318 13:11:00.970713    2404 command_runner.go:130] ! I0318 13:10:00.553856       1 shared_informer.go:311] Waiting for caches to sync for taint
	I0318 13:11:00.970713    2404 command_runner.go:130] ! I0318 13:10:00.554970       1 controllermanager.go:642] "Started controller" controller="persistentvolumeclaim-protection-controller"
	I0318 13:11:00.970713    2404 command_runner.go:130] ! I0318 13:10:00.555558       1 pvc_protection_controller.go:102] "Starting PVC protection controller"
	I0318 13:11:00.970713    2404 command_runner.go:130] ! I0318 13:10:00.555718       1 shared_informer.go:311] Waiting for caches to sync for PVC protection
	I0318 13:11:00.970713    2404 command_runner.go:130] ! I0318 13:10:00.558545       1 controllermanager.go:642] "Started controller" controller="endpointslice-controller"
	I0318 13:11:00.970713    2404 command_runner.go:130] ! I0318 13:10:00.558805       1 endpointslice_controller.go:264] "Starting endpoint slice controller"
	I0318 13:11:00.970713    2404 command_runner.go:130] ! I0318 13:10:00.558956       1 shared_informer.go:311] Waiting for caches to sync for endpoint_slice
	I0318 13:11:00.970713    2404 command_runner.go:130] ! W0318 13:10:00.765746       1 shared_informer.go:593] resyncPeriod 13h51m37.636447347s is smaller than resyncCheckPeriod 22h45m34.293132222s and the informer has already started. Changing it to 22h45m34.293132222s
	I0318 13:11:00.970713    2404 command_runner.go:130] ! I0318 13:10:00.765905       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="serviceaccounts"
	I0318 13:11:00.970713    2404 command_runner.go:130] ! I0318 13:10:00.766015       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="cronjobs.batch"
	I0318 13:11:00.970713    2404 command_runner.go:130] ! I0318 13:10:00.766141       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="daemonsets.apps"
	I0318 13:11:00.970713    2404 command_runner.go:130] ! I0318 13:10:00.766231       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="jobs.batch"
	I0318 13:11:00.970713    2404 command_runner.go:130] ! I0318 13:10:00.767946       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="networkpolicies.networking.k8s.io"
	I0318 13:11:00.970713    2404 command_runner.go:130] ! I0318 13:10:00.768138       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="poddisruptionbudgets.policy"
	I0318 13:11:00.970713    2404 command_runner.go:130] ! I0318 13:10:00.768175       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="csistoragecapacities.storage.k8s.io"
	I0318 13:11:00.970713    2404 command_runner.go:130] ! I0318 13:10:00.768271       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="limitranges"
	I0318 13:11:00.970713    2404 command_runner.go:130] ! I0318 13:10:00.768411       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="controllerrevisions.apps"
	I0318 13:11:00.970713    2404 command_runner.go:130] ! I0318 13:10:00.768529       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="horizontalpodautoscalers.autoscaling"
	I0318 13:11:00.970713    2404 command_runner.go:130] ! I0318 13:10:00.768565       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="ingresses.networking.k8s.io"
	I0318 13:11:00.970713    2404 command_runner.go:130] ! I0318 13:10:00.768633       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="endpointslices.discovery.k8s.io"
	I0318 13:11:00.970713    2404 command_runner.go:130] ! W0318 13:10:00.768841       1 shared_informer.go:593] resyncPeriod 17h39m7.901162259s is smaller than resyncCheckPeriod 22h45m34.293132222s and the informer has already started. Changing it to 22h45m34.293132222s
	I0318 13:11:00.970713    2404 command_runner.go:130] ! I0318 13:10:00.769020       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="podtemplates"
	I0318 13:11:00.970713    2404 command_runner.go:130] ! I0318 13:10:00.769077       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="replicasets.apps"
	I0318 13:11:00.970713    2404 command_runner.go:130] ! I0318 13:10:00.769115       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="deployments.apps"
	I0318 13:11:00.970713    2404 command_runner.go:130] ! I0318 13:10:00.769206       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="rolebindings.rbac.authorization.k8s.io"
	I0318 13:11:00.970713    2404 command_runner.go:130] ! I0318 13:10:00.769280       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="leases.coordination.k8s.io"
	I0318 13:11:00.971235    2404 command_runner.go:130] ! I0318 13:10:00.769427       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="endpoints"
	I0318 13:11:00.971235    2404 command_runner.go:130] ! I0318 13:10:00.769509       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="statefulsets.apps"
	I0318 13:11:00.971235    2404 command_runner.go:130] ! I0318 13:10:00.769668       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="roles.rbac.authorization.k8s.io"
	I0318 13:11:00.971235    2404 command_runner.go:130] ! I0318 13:10:00.769816       1 resource_quota_controller.go:294] "Starting resource quota controller"
	I0318 13:11:00.971235    2404 command_runner.go:130] ! I0318 13:10:00.769832       1 shared_informer.go:311] Waiting for caches to sync for resource quota
	I0318 13:11:00.971235    2404 command_runner.go:130] ! I0318 13:10:00.769855       1 resource_quota_monitor.go:305] "QuotaMonitor running"
	I0318 13:11:00.971235    2404 command_runner.go:130] ! I0318 13:10:00.769714       1 controllermanager.go:642] "Started controller" controller="resourcequota-controller"
	I0318 13:11:00.971235    2404 command_runner.go:130] ! I0318 13:10:00.906184       1 controllermanager.go:642] "Started controller" controller="garbage-collector-controller"
	I0318 13:11:00.971382    2404 command_runner.go:130] ! I0318 13:10:00.906404       1 garbagecollector.go:155] "Starting controller" controller="garbagecollector"
	I0318 13:11:00.971382    2404 command_runner.go:130] ! I0318 13:10:00.906702       1 shared_informer.go:311] Waiting for caches to sync for garbage collector
	I0318 13:11:00.971382    2404 command_runner.go:130] ! I0318 13:10:00.906740       1 graph_builder.go:294] "Running" component="GraphBuilder"
	I0318 13:11:00.971382    2404 command_runner.go:130] ! I0318 13:10:00.956245       1 controllermanager.go:642] "Started controller" controller="job-controller"
	I0318 13:11:00.971382    2404 command_runner.go:130] ! I0318 13:10:00.956457       1 job_controller.go:226] "Starting job controller"
	I0318 13:11:00.971461    2404 command_runner.go:130] ! I0318 13:10:00.956765       1 shared_informer.go:311] Waiting for caches to sync for job
	I0318 13:11:00.971485    2404 command_runner.go:130] ! I0318 13:10:01.056144       1 controllermanager.go:642] "Started controller" controller="persistentvolume-protection-controller"
	I0318 13:11:00.971512    2404 command_runner.go:130] ! I0318 13:10:01.056251       1 pv_protection_controller.go:78] "Starting PV protection controller"
	I0318 13:11:00.971607    2404 command_runner.go:130] ! I0318 13:10:01.056576       1 shared_informer.go:311] Waiting for caches to sync for PV protection
	I0318 13:11:00.971642    2404 command_runner.go:130] ! I0318 13:10:01.156303       1 controllermanager.go:642] "Started controller" controller="ephemeral-volume-controller"
	I0318 13:11:00.971642    2404 command_runner.go:130] ! I0318 13:10:01.156762       1 controller.go:169] "Starting ephemeral volume controller"
	I0318 13:11:00.971642    2404 command_runner.go:130] ! I0318 13:10:01.156852       1 shared_informer.go:311] Waiting for caches to sync for ephemeral
	I0318 13:11:00.971718    2404 command_runner.go:130] ! I0318 13:10:01.205282       1 controllermanager.go:642] "Started controller" controller="endpoints-controller"
	I0318 13:11:00.971718    2404 command_runner.go:130] ! I0318 13:10:01.205353       1 endpoints_controller.go:174] "Starting endpoint controller"
	I0318 13:11:00.971747    2404 command_runner.go:130] ! I0318 13:10:01.205368       1 shared_informer.go:311] Waiting for caches to sync for endpoint
	I0318 13:11:00.971747    2404 command_runner.go:130] ! I0318 13:10:01.256513       1 controllermanager.go:642] "Started controller" controller="statefulset-controller"
	I0318 13:11:00.971747    2404 command_runner.go:130] ! I0318 13:10:01.256828       1 stateful_set.go:161] "Starting stateful set controller"
	I0318 13:11:00.971747    2404 command_runner.go:130] ! I0318 13:10:01.256867       1 shared_informer.go:311] Waiting for caches to sync for stateful set
	I0318 13:11:00.971747    2404 command_runner.go:130] ! I0318 13:10:01.306581       1 controllermanager.go:642] "Started controller" controller="replicaset-controller"
	I0318 13:11:00.971817    2404 command_runner.go:130] ! I0318 13:10:01.306969       1 replica_set.go:214] "Starting controller" name="replicaset"
	I0318 13:11:00.971866    2404 command_runner.go:130] ! I0318 13:10:01.307156       1 shared_informer.go:311] Waiting for caches to sync for ReplicaSet
	I0318 13:11:00.971866    2404 command_runner.go:130] ! I0318 13:10:01.317298       1 shared_informer.go:311] Waiting for caches to sync for resource quota
	I0318 13:11:00.971866    2404 command_runner.go:130] ! I0318 13:10:01.349149       1 shared_informer.go:318] Caches are synced for TTL after finished
	I0318 13:11:00.971866    2404 command_runner.go:130] ! I0318 13:10:01.369957       1 shared_informer.go:311] Waiting for caches to sync for garbage collector
	I0318 13:11:00.971866    2404 command_runner.go:130] ! I0318 13:10:01.371629       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-894400\" does not exist"
	I0318 13:11:00.971866    2404 command_runner.go:130] ! I0318 13:10:01.371840       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-894400-m02\" does not exist"
	I0318 13:11:00.971866    2404 command_runner.go:130] ! I0318 13:10:01.372556       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-894400-m02"
	I0318 13:11:00.971866    2404 command_runner.go:130] ! I0318 13:10:01.372879       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-894400-m03\" does not exist"
	I0318 13:11:00.971866    2404 command_runner.go:130] ! I0318 13:10:01.373004       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-894400-m02"
	I0318 13:11:00.971866    2404 command_runner.go:130] ! I0318 13:10:01.380690       1 shared_informer.go:318] Caches are synced for certificate-csrapproving
	I0318 13:11:00.971866    2404 command_runner.go:130] ! I0318 13:10:01.383858       1 shared_informer.go:318] Caches are synced for namespace
	I0318 13:11:00.971866    2404 command_runner.go:130] ! I0318 13:10:01.390400       1 shared_informer.go:318] Caches are synced for GC
	I0318 13:11:00.971866    2404 command_runner.go:130] ! I0318 13:10:01.391669       1 shared_informer.go:318] Caches are synced for cronjob
	I0318 13:11:00.971866    2404 command_runner.go:130] ! I0318 13:10:01.398208       1 shared_informer.go:318] Caches are synced for expand
	I0318 13:11:00.971866    2404 command_runner.go:130] ! I0318 13:10:01.403691       1 shared_informer.go:318] Caches are synced for ClusterRoleAggregator
	I0318 13:11:00.971866    2404 command_runner.go:130] ! I0318 13:10:01.406154       1 shared_informer.go:318] Caches are synced for endpoint
	I0318 13:11:00.971866    2404 command_runner.go:130] ! I0318 13:10:01.407387       1 shared_informer.go:318] Caches are synced for ReplicaSet
	I0318 13:11:00.971866    2404 command_runner.go:130] ! I0318 13:10:01.407463       1 shared_informer.go:318] Caches are synced for disruption
	I0318 13:11:00.971866    2404 command_runner.go:130] ! I0318 13:10:01.411470       1 shared_informer.go:318] Caches are synced for deployment
	I0318 13:11:00.971866    2404 command_runner.go:130] ! I0318 13:10:01.415591       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-serving
	I0318 13:11:00.971866    2404 command_runner.go:130] ! I0318 13:10:01.419985       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0318 13:11:00.971866    2404 command_runner.go:130] ! I0318 13:10:01.420028       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-client
	I0318 13:11:00.971866    2404 command_runner.go:130] ! I0318 13:10:01.422567       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-legacy-unknown
	I0318 13:11:00.971866    2404 command_runner.go:130] ! I0318 13:10:01.426386       1 shared_informer.go:318] Caches are synced for node
	I0318 13:11:00.971866    2404 command_runner.go:130] ! I0318 13:10:01.426502       1 range_allocator.go:174] "Sending events to api server"
	I0318 13:11:00.971866    2404 command_runner.go:130] ! I0318 13:10:01.426637       1 range_allocator.go:178] "Starting range CIDR allocator"
	I0318 13:11:00.971866    2404 command_runner.go:130] ! I0318 13:10:01.426705       1 shared_informer.go:311] Waiting for caches to sync for cidrallocator
	I0318 13:11:00.971866    2404 command_runner.go:130] ! I0318 13:10:01.426892       1 shared_informer.go:318] Caches are synced for cidrallocator
	I0318 13:11:00.971866    2404 command_runner.go:130] ! I0318 13:10:01.426546       1 shared_informer.go:318] Caches are synced for attach detach
	I0318 13:11:00.971866    2404 command_runner.go:130] ! I0318 13:10:01.429986       1 shared_informer.go:318] Caches are synced for service account
	I0318 13:11:00.971866    2404 command_runner.go:130] ! I0318 13:10:01.430014       1 shared_informer.go:318] Caches are synced for crt configmap
	I0318 13:11:00.971866    2404 command_runner.go:130] ! I0318 13:10:01.433506       1 shared_informer.go:318] Caches are synced for daemon sets
	I0318 13:11:00.971866    2404 command_runner.go:130] ! I0318 13:10:01.437710       1 shared_informer.go:318] Caches are synced for TTL
	I0318 13:11:00.971866    2404 command_runner.go:130] ! I0318 13:10:01.445429       1 shared_informer.go:318] Caches are synced for HPA
	I0318 13:11:00.971866    2404 command_runner.go:130] ! I0318 13:10:01.448863       1 shared_informer.go:318] Caches are synced for bootstrap_signer
	I0318 13:11:00.972427    2404 command_runner.go:130] ! I0318 13:10:01.451599       1 shared_informer.go:318] Caches are synced for ReplicationController
	I0318 13:11:00.972427    2404 command_runner.go:130] ! I0318 13:10:01.454157       1 shared_informer.go:318] Caches are synced for taint
	I0318 13:11:00.972427    2404 command_runner.go:130] ! I0318 13:10:01.454304       1 node_lifecycle_controller.go:1225] "Initializing eviction metric for zone" zone=""
	I0318 13:11:00.972427    2404 command_runner.go:130] ! I0318 13:10:01.454496       1 taint_manager.go:205] "Starting NoExecuteTaintManager"
	I0318 13:11:00.972427    2404 command_runner.go:130] ! I0318 13:10:01.454532       1 taint_manager.go:210] "Sending events to api server"
	I0318 13:11:00.972427    2404 command_runner.go:130] ! I0318 13:10:01.455374       1 event.go:307] "Event occurred" object="multinode-894400" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-894400 event: Registered Node multinode-894400 in Controller"
	I0318 13:11:00.972427    2404 command_runner.go:130] ! I0318 13:10:01.455390       1 event.go:307] "Event occurred" object="multinode-894400-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-894400-m02 event: Registered Node multinode-894400-m02 in Controller"
	I0318 13:11:00.972427    2404 command_runner.go:130] ! I0318 13:10:01.455400       1 event.go:307] "Event occurred" object="multinode-894400-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-894400-m03 event: Registered Node multinode-894400-m03 in Controller"
	I0318 13:11:00.972576    2404 command_runner.go:130] ! I0318 13:10:01.456700       1 shared_informer.go:318] Caches are synced for PV protection
	I0318 13:11:00.972576    2404 command_runner.go:130] ! I0318 13:10:01.456719       1 shared_informer.go:318] Caches are synced for PVC protection
	I0318 13:11:00.972576    2404 command_runner.go:130] ! I0318 13:10:01.457835       1 shared_informer.go:318] Caches are synced for stateful set
	I0318 13:11:00.972576    2404 command_runner.go:130] ! I0318 13:10:01.457861       1 shared_informer.go:318] Caches are synced for job
	I0318 13:11:00.972576    2404 command_runner.go:130] ! I0318 13:10:01.458132       1 shared_informer.go:318] Caches are synced for ephemeral
	I0318 13:11:00.972576    2404 command_runner.go:130] ! I0318 13:10:01.499926       1 shared_informer.go:318] Caches are synced for persistent volume
	I0318 13:11:00.972701    2404 command_runner.go:130] ! I0318 13:10:01.502022       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-894400"
	I0318 13:11:00.972701    2404 command_runner.go:130] ! I0318 13:10:01.502582       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-894400-m02"
	I0318 13:11:00.972701    2404 command_runner.go:130] ! I0318 13:10:01.502665       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-894400-m03"
	I0318 13:11:00.972701    2404 command_runner.go:130] ! I0318 13:10:01.505439       1 node_lifecycle_controller.go:1071] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I0318 13:11:00.972701    2404 command_runner.go:130] ! I0318 13:10:01.518153       1 shared_informer.go:318] Caches are synced for resource quota
	I0318 13:11:00.972782    2404 command_runner.go:130] ! I0318 13:10:01.524442       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="116.887006ms"
	I0318 13:11:00.972782    2404 command_runner.go:130] ! I0318 13:10:01.526447       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="62.302µs"
	I0318 13:11:00.972782    2404 command_runner.go:130] ! I0318 13:10:01.532190       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="124.57225ms"
	I0318 13:11:00.972782    2404 command_runner.go:130] ! I0318 13:10:01.532535       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="61.501µs"
	I0318 13:11:00.972858    2404 command_runner.go:130] ! I0318 13:10:01.536870       1 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring
	I0318 13:11:00.972858    2404 command_runner.go:130] ! I0318 13:10:01.559571       1 shared_informer.go:318] Caches are synced for endpoint_slice
	I0318 13:11:00.972858    2404 command_runner.go:130] ! I0318 13:10:01.576497       1 shared_informer.go:318] Caches are synced for resource quota
	I0318 13:11:00.972858    2404 command_runner.go:130] ! I0318 13:10:01.970420       1 shared_informer.go:318] Caches are synced for garbage collector
	I0318 13:11:00.972923    2404 command_runner.go:130] ! I0318 13:10:02.008120       1 shared_informer.go:318] Caches are synced for garbage collector
	I0318 13:11:00.972923    2404 command_runner.go:130] ! I0318 13:10:02.008146       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0318 13:11:00.972923    2404 command_runner.go:130] ! I0318 13:10:23.798396       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-894400-m02"
	I0318 13:11:00.972979    2404 command_runner.go:130] ! I0318 13:10:26.538088       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68-456tm" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod kube-system/coredns-5dd5756b68-456tm"
	I0318 13:11:00.973018    2404 command_runner.go:130] ! I0318 13:10:26.538124       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6-c2997" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-5b5d89c9d6-c2997"
	I0318 13:11:00.973018    2404 command_runner.go:130] ! I0318 13:10:26.538134       1 event.go:307] "Event occurred" object="kube-system/storage-provisioner" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod kube-system/storage-provisioner"
	I0318 13:11:00.973018    2404 command_runner.go:130] ! I0318 13:10:41.556645       1 event.go:307] "Event occurred" object="multinode-894400-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-894400-m02 status is now: NodeNotReady"
	I0318 13:11:00.973018    2404 command_runner.go:130] ! I0318 13:10:41.569274       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6-8btgf" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0318 13:11:00.973123    2404 command_runner.go:130] ! I0318 13:10:41.592766       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="22.447202ms"
	I0318 13:11:00.973123    2404 command_runner.go:130] ! I0318 13:10:41.593427       1 event.go:307] "Event occurred" object="kube-system/kindnet-k5lpg" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0318 13:11:00.973123    2404 command_runner.go:130] ! I0318 13:10:41.595199       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="39.101µs"
	I0318 13:11:00.973123    2404 command_runner.go:130] ! I0318 13:10:41.617007       1 event.go:307] "Event occurred" object="kube-system/kube-proxy-8bdmn" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0318 13:11:00.973123    2404 command_runner.go:130] ! I0318 13:10:54.102255       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="18.438427ms"
	I0318 13:11:00.973227    2404 command_runner.go:130] ! I0318 13:10:54.102713       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="266.302µs"
	I0318 13:11:00.973227    2404 command_runner.go:130] ! I0318 13:10:54.115993       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="210.701µs"
	I0318 13:11:00.973227    2404 command_runner.go:130] ! I0318 13:10:55.131550       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="19.807636ms"
	I0318 13:11:00.973227    2404 command_runner.go:130] ! I0318 13:10:55.131763       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="44.301µs"
	I0318 13:11:00.985831    2404 logs.go:123] Gathering logs for kubelet ...
	I0318 13:11:00.985831    2404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:11:01.012456    2404 command_runner.go:130] > Mar 18 13:09:39 multinode-894400 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0318 13:11:01.012456    2404 command_runner.go:130] > Mar 18 13:09:39 multinode-894400 kubelet[1399]: I0318 13:09:39.912330    1399 server.go:467] "Kubelet version" kubeletVersion="v1.28.4"
	I0318 13:11:01.012456    2404 command_runner.go:130] > Mar 18 13:09:39 multinode-894400 kubelet[1399]: I0318 13:09:39.913472    1399 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 13:11:01.012456    2404 command_runner.go:130] > Mar 18 13:09:39 multinode-894400 kubelet[1399]: I0318 13:09:39.914280    1399 server.go:895] "Client rotation is on, will bootstrap in background"
	I0318 13:11:01.012568    2404 command_runner.go:130] > Mar 18 13:09:39 multinode-894400 kubelet[1399]: E0318 13:09:39.914469    1399 run.go:74] "command failed" err="failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory"
	I0318 13:11:01.012568    2404 command_runner.go:130] > Mar 18 13:09:39 multinode-894400 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	I0318 13:11:01.012568    2404 command_runner.go:130] > Mar 18 13:09:39 multinode-894400 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	I0318 13:11:01.012747    2404 command_runner.go:130] > Mar 18 13:09:40 multinode-894400 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
	I0318 13:11:01.012747    2404 command_runner.go:130] > Mar 18 13:09:40 multinode-894400 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I0318 13:11:01.012747    2404 command_runner.go:130] > Mar 18 13:09:40 multinode-894400 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0318 13:11:01.012813    2404 command_runner.go:130] > Mar 18 13:09:40 multinode-894400 kubelet[1455]: I0318 13:09:40.661100    1455 server.go:467] "Kubelet version" kubeletVersion="v1.28.4"
	I0318 13:11:01.012813    2404 command_runner.go:130] > Mar 18 13:09:40 multinode-894400 kubelet[1455]: I0318 13:09:40.661586    1455 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 13:11:01.012813    2404 command_runner.go:130] > Mar 18 13:09:40 multinode-894400 kubelet[1455]: I0318 13:09:40.662255    1455 server.go:895] "Client rotation is on, will bootstrap in background"
	I0318 13:11:01.012882    2404 command_runner.go:130] > Mar 18 13:09:40 multinode-894400 kubelet[1455]: E0318 13:09:40.662383    1455 run.go:74] "command failed" err="failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory"
	I0318 13:11:01.012882    2404 command_runner.go:130] > Mar 18 13:09:40 multinode-894400 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	I0318 13:11:01.012957    2404 command_runner.go:130] > Mar 18 13:09:40 multinode-894400 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	I0318 13:11:01.012957    2404 command_runner.go:130] > Mar 18 13:09:40 multinode-894400 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I0318 13:11:01.012957    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0318 13:11:01.013029    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: I0318 13:09:42.774439    1532 server.go:467] "Kubelet version" kubeletVersion="v1.28.4"
	I0318 13:11:01.013029    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: I0318 13:09:42.775083    1532 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 13:11:01.013029    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: I0318 13:09:42.775946    1532 server.go:895] "Client rotation is on, will bootstrap in background"
	I0318 13:11:01.013104    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: I0318 13:09:42.785429    1532 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	I0318 13:11:01.013104    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: I0318 13:09:42.801370    1532 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0318 13:11:01.013104    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: I0318 13:09:42.849790    1532 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
	I0318 13:11:01.013104    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: I0318 13:09:42.851652    1532 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
	I0318 13:11:01.013215    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: I0318 13:09:42.851916    1532 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
	I0318 13:11:01.013215    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: I0318 13:09:42.851957    1532 topology_manager.go:138] "Creating topology manager with none policy"
	I0318 13:11:01.013276    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: I0318 13:09:42.851967    1532 container_manager_linux.go:301] "Creating device plugin manager"
	I0318 13:11:01.013276    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: I0318 13:09:42.853347    1532 state_mem.go:36] "Initialized new in-memory state store"
	I0318 13:11:01.013276    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: I0318 13:09:42.855331    1532 kubelet.go:393] "Attempting to sync node with API server"
	I0318 13:11:01.013328    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: I0318 13:09:42.855456    1532 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests"
	I0318 13:11:01.013328    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: I0318 13:09:42.856520    1532 kubelet.go:309] "Adding apiserver pod source"
	I0318 13:11:01.013328    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: I0318 13:09:42.856554    1532 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
	I0318 13:11:01.013387    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: W0318 13:09:42.859153    1532 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)multinode-894400&limit=500&resourceVersion=0": dial tcp 172.30.130.156:8443: connect: connection refused
	I0318 13:11:01.013387    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: E0318 13:09:42.859647    1532 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)multinode-894400&limit=500&resourceVersion=0": dial tcp 172.30.130.156:8443: connect: connection refused
	I0318 13:11:01.013439    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: W0318 13:09:42.860993    1532 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.30.130.156:8443: connect: connection refused
	I0318 13:11:01.013497    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: E0318 13:09:42.861168    1532 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.30.130.156:8443: connect: connection refused
	I0318 13:11:01.013497    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: I0318 13:09:42.872782    1532 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="docker" version="25.0.4" apiVersion="v1"
	I0318 13:11:01.013547    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: W0318 13:09:42.875640    1532 probe.go:268] Flexvolume plugin directory at /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
	I0318 13:11:01.013547    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: I0318 13:09:42.876823    1532 server.go:1232] "Started kubelet"
	I0318 13:11:01.013547    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: I0318 13:09:42.878282    1532 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
	I0318 13:11:01.013619    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: I0318 13:09:42.879215    1532 server.go:462] "Adding debug handlers to kubelet server"
	I0318 13:11:01.013619    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: I0318 13:09:42.882881    1532 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10
	I0318 13:11:01.013668    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: I0318 13:09:42.883660    1532 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
	I0318 13:11:01.013668    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: I0318 13:09:42.878365    1532 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
	I0318 13:11:01.013774    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: E0318 13:09:42.886734    1532 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"multinode-894400.17bddddee5b23bca", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"multinode-894400", UID:"multinode-894400", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"multinode-894400"}, FirstTimestamp:time.Date(2024, time.March, 18, 13, 9, 42, 876797898, time.Local), LastTimestamp:time.Date(2024, time.March, 18, 13, 9, 42, 876797898, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"multinode-894400"}': 'Post "https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events": dial tcp 172.30.130.156:8443: connect: connection refused'(may retry after sleeping)
	I0318 13:11:01.013774    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: I0318 13:09:42.886969    1532 volume_manager.go:291] "Starting Kubelet Volume Manager"
	I0318 13:11:01.013829    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: I0318 13:09:42.887086    1532 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
	I0318 13:11:01.013829    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: W0318 13:09:42.907405    1532 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.30.130.156:8443: connect: connection refused
	I0318 13:11:01.013878    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: E0318 13:09:42.907883    1532 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.30.130.156:8443: connect: connection refused
	I0318 13:11:01.013878    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: E0318 13:09:42.910785    1532 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-894400?timeout=10s\": dial tcp 172.30.130.156:8443: connect: connection refused" interval="200ms"
	I0318 13:11:01.014059    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: I0318 13:09:42.959085    1532 reconciler_new.go:29] "Reconciler: start to sync state"
	I0318 13:11:01.014109    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: I0318 13:09:42.981490    1532 cpu_manager.go:214] "Starting CPU manager" policy="none"
	I0318 13:11:01.014165    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: I0318 13:09:42.981531    1532 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
	I0318 13:11:01.014165    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: I0318 13:09:42.981561    1532 state_mem.go:36] "Initialized new in-memory state store"
	I0318 13:11:01.014165    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: I0318 13:09:42.982644    1532 state_mem.go:88] "Updated default CPUSet" cpuSet=""
	I0318 13:11:01.014235    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: I0318 13:09:42.982700    1532 state_mem.go:96] "Updated CPUSet assignments" assignments={}
	I0318 13:11:01.014235    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: I0318 13:09:42.982728    1532 policy_none.go:49] "None policy: Start"
	I0318 13:11:01.014235    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: I0318 13:09:42.989705    1532 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
	I0318 13:11:01.014235    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: I0318 13:09:43.002857    1532 memory_manager.go:169] "Starting memorymanager" policy="None"
	I0318 13:11:01.014306    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: I0318 13:09:43.003620    1532 state_mem.go:35] "Initializing new in-memory state store"
	I0318 13:11:01.014306    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: I0318 13:09:43.004623    1532 state_mem.go:75] "Updated machine memory state"
	I0318 13:11:01.014306    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: I0318 13:09:43.006120    1532 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
	I0318 13:11:01.014358    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: I0318 13:09:43.007397    1532 status_manager.go:217] "Starting to sync pod status with apiserver"
	I0318 13:11:01.014358    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: I0318 13:09:43.008604    1532 kubelet.go:2303] "Starting kubelet main sync loop"
	I0318 13:11:01.014358    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: E0318 13:09:43.008971    1532 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
	I0318 13:11:01.014420    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: I0318 13:09:43.016115    1532 kubelet_node_status.go:70] "Attempting to register node" node="multinode-894400"
	I0318 13:11:01.014420    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: E0318 13:09:43.018685    1532 iptables.go:575] "Could not set up iptables canary" err=<
	I0318 13:11:01.014420    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	I0318 13:11:01.014483    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	I0318 13:11:01.014483    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]:         Perhaps ip6tables or your kernel needs to be upgraded.
	I0318 13:11:01.014483    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	I0318 13:11:01.014544    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: I0318 13:09:43.021241    1532 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
	I0318 13:11:01.014544    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: W0318 13:09:43.022840    1532 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.30.130.156:8443: connect: connection refused
	I0318 13:11:01.014607    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: E0318 13:09:43.022916    1532 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.30.130.156:8443: connect: connection refused
	I0318 13:11:01.014607    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: E0318 13:09:43.022979    1532 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.30.130.156:8443: connect: connection refused" node="multinode-894400"
	I0318 13:11:01.014674    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: I0318 13:09:43.023116    1532 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
	I0318 13:11:01.014674    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: E0318 13:09:43.041923    1532 eviction_manager.go:258] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"multinode-894400\" not found"
	I0318 13:11:01.014727    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: E0318 13:09:43.112352    1532 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-894400?timeout=10s\": dial tcp 172.30.130.156:8443: connect: connection refused" interval="400ms"
	I0318 13:11:01.014727    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: I0318 13:09:43.113553    1532 topology_manager.go:215] "Topology Admit Handler" podUID="1c745e9b917877b1ff3c90ed02e9a79a" podNamespace="kube-system" podName="kube-scheduler-multinode-894400"
	I0318 13:11:01.014787    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: I0318 13:09:43.126661    1532 topology_manager.go:215] "Topology Admit Handler" podUID="6096c2227c4230453f65f86ebdcd0d95" podNamespace="kube-system" podName="kube-apiserver-multinode-894400"
	I0318 13:11:01.014856    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: I0318 13:09:43.137838    1532 topology_manager.go:215] "Topology Admit Handler" podUID="d340aced56ba169ecac1e3ac58ad57fe" podNamespace="kube-system" podName="kube-controller-manager-multinode-894400"
	I0318 13:11:01.014856    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: I0318 13:09:43.154701    1532 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5485f509825d9272a84959cbcfbb4f0187be886867ba7bac76fa00a35e34bdd1"
	I0318 13:11:01.014930    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: I0318 13:09:43.154826    1532 topology_manager.go:215] "Topology Admit Handler" podUID="743a549b698f93b8586a236f83c90556" podNamespace="kube-system" podName="etcd-multinode-894400"
	I0318 13:11:01.014930    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: I0318 13:09:43.171660    1532 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d001e299e996b74f090c73f4e98605ef0d323a96826d23424e724ca8e7fe466a"
	I0318 13:11:01.014982    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: I0318 13:09:43.171681    1532 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="60e9cd749c8f67d0bc24596b26b654cf85a82055f89e14c4a14a4e9342f5fc9f"
	I0318 13:11:01.014982    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: I0318 13:09:43.171704    1532 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="acffce2e73842c3e46177a77ddd5a8d308b51daf062cac439cc487cc863c4226"
	I0318 13:11:01.015041    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: I0318 13:09:43.171714    1532 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="265b39e386cfa82eef9715aba314fbf8a9292776816cf86ed4099004698cb320"
	I0318 13:11:01.015041    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: I0318 13:09:43.171723    1532 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="220884cbf1f5b852987c5a28277a4914502f0623413c284054afa92791494c50"
	I0318 13:11:01.015095    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: I0318 13:09:43.171731    1532 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a47b1fb60692cee0c4ed89ecc511fa046c0873051f7daf026f1c5c6a3dfd7352"
	I0318 13:11:01.015095    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: I0318 13:09:43.172283    1532 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="82710777e700c4f2e71da911834959efc480f8ba2a526049f0f6c238947c5146"
	I0318 13:11:01.015154    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: I0318 13:09:43.186382    1532 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a23c1189be7c39f22c8a59ba41b198519894c9783ecad8dcda79fb8a76f05254"
	I0318 13:11:01.015154    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: I0318 13:09:43.231617    1532 kubelet_node_status.go:70] "Attempting to register node" node="multinode-894400"
	I0318 13:11:01.015207    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: E0318 13:09:43.233479    1532 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.30.130.156:8443: connect: connection refused" node="multinode-894400"
	I0318 13:11:01.015207    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: I0318 13:09:43.267903    1532 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1c745e9b917877b1ff3c90ed02e9a79a-kubeconfig\") pod \"kube-scheduler-multinode-894400\" (UID: \"1c745e9b917877b1ff3c90ed02e9a79a\") " pod="kube-system/kube-scheduler-multinode-894400"
	I0318 13:11:01.015280    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: I0318 13:09:43.268106    1532 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6096c2227c4230453f65f86ebdcd0d95-ca-certs\") pod \"kube-apiserver-multinode-894400\" (UID: \"6096c2227c4230453f65f86ebdcd0d95\") " pod="kube-system/kube-apiserver-multinode-894400"
	I0318 13:11:01.015329    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: I0318 13:09:43.268214    1532 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d340aced56ba169ecac1e3ac58ad57fe-ca-certs\") pod \"kube-controller-manager-multinode-894400\" (UID: \"d340aced56ba169ecac1e3ac58ad57fe\") " pod="kube-system/kube-controller-manager-multinode-894400"
	I0318 13:11:01.015386    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: I0318 13:09:43.268242    1532 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d340aced56ba169ecac1e3ac58ad57fe-kubeconfig\") pod \"kube-controller-manager-multinode-894400\" (UID: \"d340aced56ba169ecac1e3ac58ad57fe\") " pod="kube-system/kube-controller-manager-multinode-894400"
	I0318 13:11:01.015444    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: I0318 13:09:43.268269    1532 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d340aced56ba169ecac1e3ac58ad57fe-usr-share-ca-certificates\") pod \"kube-controller-manager-multinode-894400\" (UID: \"d340aced56ba169ecac1e3ac58ad57fe\") " pod="kube-system/kube-controller-manager-multinode-894400"
	I0318 13:11:01.015444    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: I0318 13:09:43.268295    1532 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/743a549b698f93b8586a236f83c90556-etcd-certs\") pod \"etcd-multinode-894400\" (UID: \"743a549b698f93b8586a236f83c90556\") " pod="kube-system/etcd-multinode-894400"
	I0318 13:11:01.015500    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: I0318 13:09:43.268330    1532 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/743a549b698f93b8586a236f83c90556-etcd-data\") pod \"etcd-multinode-894400\" (UID: \"743a549b698f93b8586a236f83c90556\") " pod="kube-system/etcd-multinode-894400"
	I0318 13:11:01.015551    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: I0318 13:09:43.268361    1532 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6096c2227c4230453f65f86ebdcd0d95-k8s-certs\") pod \"kube-apiserver-multinode-894400\" (UID: \"6096c2227c4230453f65f86ebdcd0d95\") " pod="kube-system/kube-apiserver-multinode-894400"
	I0318 13:11:01.015609    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: I0318 13:09:43.268423    1532 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6096c2227c4230453f65f86ebdcd0d95-usr-share-ca-certificates\") pod \"kube-apiserver-multinode-894400\" (UID: \"6096c2227c4230453f65f86ebdcd0d95\") " pod="kube-system/kube-apiserver-multinode-894400"
	I0318 13:11:01.015609    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: I0318 13:09:43.268445    1532 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d340aced56ba169ecac1e3ac58ad57fe-flexvolume-dir\") pod \"kube-controller-manager-multinode-894400\" (UID: \"d340aced56ba169ecac1e3ac58ad57fe\") " pod="kube-system/kube-controller-manager-multinode-894400"
	I0318 13:11:01.015668    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: I0318 13:09:43.268537    1532 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d340aced56ba169ecac1e3ac58ad57fe-k8s-certs\") pod \"kube-controller-manager-multinode-894400\" (UID: \"d340aced56ba169ecac1e3ac58ad57fe\") " pod="kube-system/kube-controller-manager-multinode-894400"
	I0318 13:11:01.015726    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: E0318 13:09:43.513563    1532 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-894400?timeout=10s\": dial tcp 172.30.130.156:8443: connect: connection refused" interval="800ms"
	I0318 13:11:01.015726    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: I0318 13:09:43.656950    1532 kubelet_node_status.go:70] "Attempting to register node" node="multinode-894400"
	I0318 13:11:01.015777    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: E0318 13:09:43.658595    1532 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.30.130.156:8443: connect: connection refused" node="multinode-894400"
	I0318 13:11:01.015777    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: W0318 13:09:43.917173    1532 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.30.130.156:8443: connect: connection refused
	I0318 13:11:01.015834    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: E0318 13:09:43.917511    1532 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.30.130.156:8443: connect: connection refused
	I0318 13:11:01.015834    2404 command_runner.go:130] > Mar 18 13:09:44 multinode-894400 kubelet[1532]: W0318 13:09:44.022640    1532 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.30.130.156:8443: connect: connection refused
	I0318 13:11:01.015892    2404 command_runner.go:130] > Mar 18 13:09:44 multinode-894400 kubelet[1532]: E0318 13:09:44.022973    1532 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.30.130.156:8443: connect: connection refused
	I0318 13:11:01.015947    2404 command_runner.go:130] > Mar 18 13:09:44 multinode-894400 kubelet[1532]: W0318 13:09:44.114653    1532 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)multinode-894400&limit=500&resourceVersion=0": dial tcp 172.30.130.156:8443: connect: connection refused
	I0318 13:11:01.015998    2404 command_runner.go:130] > Mar 18 13:09:44 multinode-894400 kubelet[1532]: E0318 13:09:44.114784    1532 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)multinode-894400&limit=500&resourceVersion=0": dial tcp 172.30.130.156:8443: connect: connection refused
	I0318 13:11:01.015998    2404 command_runner.go:130] > Mar 18 13:09:44 multinode-894400 kubelet[1532]: I0318 13:09:44.229821    1532 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="354f3c44a34fcbd9ca7826066f2b4c40b3a6551aa632389815c5d24e85e0472b"
	I0318 13:11:01.016054    2404 command_runner.go:130] > Mar 18 13:09:44 multinode-894400 kubelet[1532]: E0318 13:09:44.315351    1532 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-894400?timeout=10s\": dial tcp 172.30.130.156:8443: connect: connection refused" interval="1.6s"
	I0318 13:11:01.016104    2404 command_runner.go:130] > Mar 18 13:09:44 multinode-894400 kubelet[1532]: W0318 13:09:44.368370    1532 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.30.130.156:8443: connect: connection refused
	I0318 13:11:01.016104    2404 command_runner.go:130] > Mar 18 13:09:44 multinode-894400 kubelet[1532]: E0318 13:09:44.368575    1532 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.30.130.156:8443: connect: connection refused
	I0318 13:11:01.016161    2404 command_runner.go:130] > Mar 18 13:09:44 multinode-894400 kubelet[1532]: I0318 13:09:44.495686    1532 kubelet_node_status.go:70] "Attempting to register node" node="multinode-894400"
	I0318 13:11:01.016161    2404 command_runner.go:130] > Mar 18 13:09:44 multinode-894400 kubelet[1532]: E0318 13:09:44.496847    1532 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.30.130.156:8443: connect: connection refused" node="multinode-894400"
	I0318 13:11:01.016211    2404 command_runner.go:130] > Mar 18 13:09:46 multinode-894400 kubelet[1532]: I0318 13:09:46.112867    1532 kubelet_node_status.go:70] "Attempting to register node" node="multinode-894400"
	I0318 13:11:01.016211    2404 command_runner.go:130] > Mar 18 13:09:48 multinode-894400 kubelet[1532]: I0318 13:09:48.454296    1532 kubelet_node_status.go:108] "Node was previously registered" node="multinode-894400"
	I0318 13:11:01.016211    2404 command_runner.go:130] > Mar 18 13:09:48 multinode-894400 kubelet[1532]: I0318 13:09:48.454504    1532 kubelet_node_status.go:73] "Successfully registered node" node="multinode-894400"
	I0318 13:11:01.016267    2404 command_runner.go:130] > Mar 18 13:09:48 multinode-894400 kubelet[1532]: I0318 13:09:48.466215    1532 kuberuntime_manager.go:1528] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	I0318 13:11:01.016267    2404 command_runner.go:130] > Mar 18 13:09:48 multinode-894400 kubelet[1532]: I0318 13:09:48.467399    1532 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	I0318 13:11:01.016321    2404 command_runner.go:130] > Mar 18 13:09:48 multinode-894400 kubelet[1532]: I0318 13:09:48.481710    1532 setters.go:552] "Node became not ready" node="multinode-894400" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-03-18T13:09:48Z","lastTransitionTime":"2024-03-18T13:09:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"}
	I0318 13:11:01.016321    2404 command_runner.go:130] > Mar 18 13:09:48 multinode-894400 kubelet[1532]: I0318 13:09:48.865400    1532 apiserver.go:52] "Watching apiserver"
	I0318 13:11:01.016377    2404 command_runner.go:130] > Mar 18 13:09:48 multinode-894400 kubelet[1532]: I0318 13:09:48.872433    1532 topology_manager.go:215] "Topology Admit Handler" podUID="0afe25f8-cbd6-412b-8698-7b547d1d49ca" podNamespace="kube-system" podName="kube-proxy-mc5tv"
	I0318 13:11:01.016377    2404 command_runner.go:130] > Mar 18 13:09:48 multinode-894400 kubelet[1532]: I0318 13:09:48.872584    1532 topology_manager.go:215] "Topology Admit Handler" podUID="0161d239-2d85-4246-b2fa-6c7374f2ecd6" podNamespace="kube-system" podName="kindnet-hhsxh"
	I0318 13:11:01.016429    2404 command_runner.go:130] > Mar 18 13:09:48 multinode-894400 kubelet[1532]: I0318 13:09:48.872794    1532 topology_manager.go:215] "Topology Admit Handler" podUID="1a018c55-846b-4dc2-992c-dc8fd82a6c67" podNamespace="kube-system" podName="coredns-5dd5756b68-456tm"
	I0318 13:11:01.016429    2404 command_runner.go:130] > Mar 18 13:09:48 multinode-894400 kubelet[1532]: I0318 13:09:48.872862    1532 topology_manager.go:215] "Topology Admit Handler" podUID="219bafbc-d807-44cf-9927-e4957f36ad70" podNamespace="kube-system" podName="storage-provisioner"
	I0318 13:11:01.016485    2404 command_runner.go:130] > Mar 18 13:09:48 multinode-894400 kubelet[1532]: I0318 13:09:48.872944    1532 topology_manager.go:215] "Topology Admit Handler" podUID="171cbfa4-4415-4169-b25d-ff5905fd513f" podNamespace="default" podName="busybox-5b5d89c9d6-c2997"
	I0318 13:11:01.016485    2404 command_runner.go:130] > Mar 18 13:09:48 multinode-894400 kubelet[1532]: E0318 13:09:48.873248    1532 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-c2997" podUID="171cbfa4-4415-4169-b25d-ff5905fd513f"
	I0318 13:11:01.016536    2404 command_runner.go:130] > Mar 18 13:09:48 multinode-894400 kubelet[1532]: I0318 13:09:48.873593    1532 kubelet.go:1872] "Trying to delete pod" pod="kube-system/kube-apiserver-multinode-894400" podUID="62aca0ea-36b0-4841-9616-61448f45e04a"
	I0318 13:11:01.016592    2404 command_runner.go:130] > Mar 18 13:09:48 multinode-894400 kubelet[1532]: I0318 13:09:48.873861    1532 kubelet.go:1872] "Trying to delete pod" pod="kube-system/etcd-multinode-894400" podUID="672a85d9-7526-4870-a33a-eac509ef3c3f"
	I0318 13:11:01.016592    2404 command_runner.go:130] > Mar 18 13:09:48 multinode-894400 kubelet[1532]: E0318 13:09:48.876751    1532 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-456tm" podUID="1a018c55-846b-4dc2-992c-dc8fd82a6c67"
	I0318 13:11:01.016697    2404 command_runner.go:130] > Mar 18 13:09:48 multinode-894400 kubelet[1532]: I0318 13:09:48.889248    1532 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
	I0318 13:11:01.016697    2404 command_runner.go:130] > Mar 18 13:09:48 multinode-894400 kubelet[1532]: I0318 13:09:48.964782    1532 kubelet.go:1877] "Deleted mirror pod because it is outdated" pod="kube-system/kube-apiserver-multinode-894400"
	I0318 13:11:01.016747    2404 command_runner.go:130] > Mar 18 13:09:48 multinode-894400 kubelet[1532]: I0318 13:09:48.965861    1532 kubelet.go:1877] "Deleted mirror pod because it is outdated" pod="kube-system/etcd-multinode-894400"
	I0318 13:11:01.016747    2404 command_runner.go:130] > Mar 18 13:09:48 multinode-894400 kubelet[1532]: I0318 13:09:48.966709    1532 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0afe25f8-cbd6-412b-8698-7b547d1d49ca-lib-modules\") pod \"kube-proxy-mc5tv\" (UID: \"0afe25f8-cbd6-412b-8698-7b547d1d49ca\") " pod="kube-system/kube-proxy-mc5tv"
	I0318 13:11:01.016804    2404 command_runner.go:130] > Mar 18 13:09:48 multinode-894400 kubelet[1532]: I0318 13:09:48.966761    1532 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/219bafbc-d807-44cf-9927-e4957f36ad70-tmp\") pod \"storage-provisioner\" (UID: \"219bafbc-d807-44cf-9927-e4957f36ad70\") " pod="kube-system/storage-provisioner"
	I0318 13:11:01.016804    2404 command_runner.go:130] > Mar 18 13:09:48 multinode-894400 kubelet[1532]: I0318 13:09:48.966802    1532 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/0161d239-2d85-4246-b2fa-6c7374f2ecd6-cni-cfg\") pod \"kindnet-hhsxh\" (UID: \"0161d239-2d85-4246-b2fa-6c7374f2ecd6\") " pod="kube-system/kindnet-hhsxh"
	I0318 13:11:01.016856    2404 command_runner.go:130] > Mar 18 13:09:48 multinode-894400 kubelet[1532]: I0318 13:09:48.966847    1532 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0afe25f8-cbd6-412b-8698-7b547d1d49ca-xtables-lock\") pod \"kube-proxy-mc5tv\" (UID: \"0afe25f8-cbd6-412b-8698-7b547d1d49ca\") " pod="kube-system/kube-proxy-mc5tv"
	I0318 13:11:01.016912    2404 command_runner.go:130] > Mar 18 13:09:48 multinode-894400 kubelet[1532]: I0318 13:09:48.966908    1532 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0161d239-2d85-4246-b2fa-6c7374f2ecd6-xtables-lock\") pod \"kindnet-hhsxh\" (UID: \"0161d239-2d85-4246-b2fa-6c7374f2ecd6\") " pod="kube-system/kindnet-hhsxh"
	I0318 13:11:01.016912    2404 command_runner.go:130] > Mar 18 13:09:48 multinode-894400 kubelet[1532]: I0318 13:09:48.966943    1532 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0161d239-2d85-4246-b2fa-6c7374f2ecd6-lib-modules\") pod \"kindnet-hhsxh\" (UID: \"0161d239-2d85-4246-b2fa-6c7374f2ecd6\") " pod="kube-system/kindnet-hhsxh"
	I0318 13:11:01.016985    2404 command_runner.go:130] > Mar 18 13:09:48 multinode-894400 kubelet[1532]: E0318 13:09:48.968339    1532 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0318 13:11:01.017042    2404 command_runner.go:130] > Mar 18 13:09:48 multinode-894400 kubelet[1532]: E0318 13:09:48.968477    1532 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1a018c55-846b-4dc2-992c-dc8fd82a6c67-config-volume podName:1a018c55-846b-4dc2-992c-dc8fd82a6c67 nodeName:}" failed. No retries permitted until 2024-03-18 13:09:49.468437755 +0000 UTC m=+6.779274091 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/1a018c55-846b-4dc2-992c-dc8fd82a6c67-config-volume") pod "coredns-5dd5756b68-456tm" (UID: "1a018c55-846b-4dc2-992c-dc8fd82a6c67") : object "kube-system"/"coredns" not registered
	I0318 13:11:01.017042    2404 command_runner.go:130] > Mar 18 13:09:49 multinode-894400 kubelet[1532]: E0318 13:09:49.000742    1532 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0318 13:11:01.017094    2404 command_runner.go:130] > Mar 18 13:09:49 multinode-894400 kubelet[1532]: E0318 13:09:49.000961    1532 projected.go:198] Error preparing data for projected volume kube-api-access-jwqqg for pod default/busybox-5b5d89c9d6-c2997: object "default"/"kube-root-ca.crt" not registered
	I0318 13:11:01.017094    2404 command_runner.go:130] > Mar 18 13:09:49 multinode-894400 kubelet[1532]: E0318 13:09:49.001575    1532 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/171cbfa4-4415-4169-b25d-ff5905fd513f-kube-api-access-jwqqg podName:171cbfa4-4415-4169-b25d-ff5905fd513f nodeName:}" failed. No retries permitted until 2024-03-18 13:09:49.501554367 +0000 UTC m=+6.812390603 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-jwqqg" (UniqueName: "kubernetes.io/projected/171cbfa4-4415-4169-b25d-ff5905fd513f-kube-api-access-jwqqg") pod "busybox-5b5d89c9d6-c2997" (UID: "171cbfa4-4415-4169-b25d-ff5905fd513f") : object "default"/"kube-root-ca.crt" not registered
	I0318 13:11:01.017094    2404 command_runner.go:130] > Mar 18 13:09:49 multinode-894400 kubelet[1532]: I0318 13:09:49.048369    1532 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="c396fd459c503d2e9464c73cc841d3d8" path="/var/lib/kubelet/pods/c396fd459c503d2e9464c73cc841d3d8/volumes"
	I0318 13:11:01.017094    2404 command_runner.go:130] > Mar 18 13:09:49 multinode-894400 kubelet[1532]: I0318 13:09:49.051334    1532 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="decc1d942b4d81359bb79c0349ffe9bb" path="/var/lib/kubelet/pods/decc1d942b4d81359bb79c0349ffe9bb/volumes"
	I0318 13:11:01.017238    2404 command_runner.go:130] > Mar 18 13:09:49 multinode-894400 kubelet[1532]: I0318 13:09:49.248524    1532 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-multinode-894400" podStartSLOduration=0.2483832 podCreationTimestamp="2024-03-18 13:09:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-03-18 13:09:49.21292898 +0000 UTC m=+6.523765316" watchObservedRunningTime="2024-03-18 13:09:49.2483832 +0000 UTC m=+6.559219436"
	I0318 13:11:01.017317    2404 command_runner.go:130] > Mar 18 13:09:49 multinode-894400 kubelet[1532]: I0318 13:09:49.285710    1532 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/etcd-multinode-894400" podStartSLOduration=0.285684326 podCreationTimestamp="2024-03-18 13:09:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-03-18 13:09:49.252285313 +0000 UTC m=+6.563121649" watchObservedRunningTime="2024-03-18 13:09:49.285684326 +0000 UTC m=+6.596520662"
	I0318 13:11:01.017317    2404 command_runner.go:130] > Mar 18 13:09:49 multinode-894400 kubelet[1532]: E0318 13:09:49.471617    1532 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0318 13:11:01.017376    2404 command_runner.go:130] > Mar 18 13:09:49 multinode-894400 kubelet[1532]: E0318 13:09:49.472236    1532 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1a018c55-846b-4dc2-992c-dc8fd82a6c67-config-volume podName:1a018c55-846b-4dc2-992c-dc8fd82a6c67 nodeName:}" failed. No retries permitted until 2024-03-18 13:09:50.471713653 +0000 UTC m=+7.782549889 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/1a018c55-846b-4dc2-992c-dc8fd82a6c67-config-volume") pod "coredns-5dd5756b68-456tm" (UID: "1a018c55-846b-4dc2-992c-dc8fd82a6c67") : object "kube-system"/"coredns" not registered
	I0318 13:11:01.017417    2404 command_runner.go:130] > Mar 18 13:09:49 multinode-894400 kubelet[1532]: E0318 13:09:49.573240    1532 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0318 13:11:01.017493    2404 command_runner.go:130] > Mar 18 13:09:49 multinode-894400 kubelet[1532]: E0318 13:09:49.573347    1532 projected.go:198] Error preparing data for projected volume kube-api-access-jwqqg for pod default/busybox-5b5d89c9d6-c2997: object "default"/"kube-root-ca.crt" not registered
	I0318 13:11:01.017562    2404 command_runner.go:130] > Mar 18 13:09:49 multinode-894400 kubelet[1532]: E0318 13:09:49.573459    1532 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/171cbfa4-4415-4169-b25d-ff5905fd513f-kube-api-access-jwqqg podName:171cbfa4-4415-4169-b25d-ff5905fd513f nodeName:}" failed. No retries permitted until 2024-03-18 13:09:50.573441997 +0000 UTC m=+7.884278233 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-jwqqg" (UniqueName: "kubernetes.io/projected/171cbfa4-4415-4169-b25d-ff5905fd513f-kube-api-access-jwqqg") pod "busybox-5b5d89c9d6-c2997" (UID: "171cbfa4-4415-4169-b25d-ff5905fd513f") : object "default"/"kube-root-ca.crt" not registered
	I0318 13:11:01.017594    2404 command_runner.go:130] > Mar 18 13:09:49 multinode-894400 kubelet[1532]: I0318 13:09:49.813611    1532 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="41035eff3b7db97e56ba25be141bf644acea4ddcb9d92ba3b421e6fa855a09af"
	I0318 13:11:01.017625    2404 command_runner.go:130] > Mar 18 13:09:50 multinode-894400 kubelet[1532]: I0318 13:09:50.142572    1532 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="86d74dec812cfd4842de7473d45ff320bfb2283d09d2d05eee29e45cbde9f5e9"
	I0318 13:11:01.017646    2404 command_runner.go:130] > Mar 18 13:09:50 multinode-894400 kubelet[1532]: I0318 13:09:50.219092    1532 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a9f21749669fe37a2d5bee9e7f45bf84eca6ae55870de8ac18bc774db7f81643"
	I0318 13:11:01.017684    2404 command_runner.go:130] > Mar 18 13:09:50 multinode-894400 kubelet[1532]: E0318 13:09:50.481085    1532 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0318 13:11:01.017721    2404 command_runner.go:130] > Mar 18 13:09:50 multinode-894400 kubelet[1532]: E0318 13:09:50.481271    1532 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1a018c55-846b-4dc2-992c-dc8fd82a6c67-config-volume podName:1a018c55-846b-4dc2-992c-dc8fd82a6c67 nodeName:}" failed. No retries permitted until 2024-03-18 13:09:52.48125246 +0000 UTC m=+9.792088696 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/1a018c55-846b-4dc2-992c-dc8fd82a6c67-config-volume") pod "coredns-5dd5756b68-456tm" (UID: "1a018c55-846b-4dc2-992c-dc8fd82a6c67") : object "kube-system"/"coredns" not registered
	I0318 13:11:01.017765    2404 command_runner.go:130] > Mar 18 13:09:50 multinode-894400 kubelet[1532]: E0318 13:09:50.581790    1532 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0318 13:11:01.017765    2404 command_runner.go:130] > Mar 18 13:09:50 multinode-894400 kubelet[1532]: E0318 13:09:50.581835    1532 projected.go:198] Error preparing data for projected volume kube-api-access-jwqqg for pod default/busybox-5b5d89c9d6-c2997: object "default"/"kube-root-ca.crt" not registered
	I0318 13:11:01.017833    2404 command_runner.go:130] > Mar 18 13:09:50 multinode-894400 kubelet[1532]: E0318 13:09:50.581885    1532 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/171cbfa4-4415-4169-b25d-ff5905fd513f-kube-api-access-jwqqg podName:171cbfa4-4415-4169-b25d-ff5905fd513f nodeName:}" failed. No retries permitted until 2024-03-18 13:09:52.5818703 +0000 UTC m=+9.892706536 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-jwqqg" (UniqueName: "kubernetes.io/projected/171cbfa4-4415-4169-b25d-ff5905fd513f-kube-api-access-jwqqg") pod "busybox-5b5d89c9d6-c2997" (UID: "171cbfa4-4415-4169-b25d-ff5905fd513f") : object "default"/"kube-root-ca.crt" not registered
	I0318 13:11:01.017833    2404 command_runner.go:130] > Mar 18 13:09:51 multinode-894400 kubelet[1532]: E0318 13:09:51.011273    1532 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-456tm" podUID="1a018c55-846b-4dc2-992c-dc8fd82a6c67"
	I0318 13:11:01.017833    2404 command_runner.go:130] > Mar 18 13:09:51 multinode-894400 kubelet[1532]: E0318 13:09:51.012015    1532 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-c2997" podUID="171cbfa4-4415-4169-b25d-ff5905fd513f"
	I0318 13:11:01.017833    2404 command_runner.go:130] > Mar 18 13:09:52 multinode-894400 kubelet[1532]: E0318 13:09:52.499973    1532 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0318 13:11:01.017833    2404 command_runner.go:130] > Mar 18 13:09:52 multinode-894400 kubelet[1532]: E0318 13:09:52.500149    1532 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1a018c55-846b-4dc2-992c-dc8fd82a6c67-config-volume podName:1a018c55-846b-4dc2-992c-dc8fd82a6c67 nodeName:}" failed. No retries permitted until 2024-03-18 13:09:56.500131973 +0000 UTC m=+13.810968209 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/1a018c55-846b-4dc2-992c-dc8fd82a6c67-config-volume") pod "coredns-5dd5756b68-456tm" (UID: "1a018c55-846b-4dc2-992c-dc8fd82a6c67") : object "kube-system"/"coredns" not registered
	I0318 13:11:01.017833    2404 command_runner.go:130] > Mar 18 13:09:52 multinode-894400 kubelet[1532]: E0318 13:09:52.601982    1532 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0318 13:11:01.017833    2404 command_runner.go:130] > Mar 18 13:09:52 multinode-894400 kubelet[1532]: E0318 13:09:52.602006    1532 projected.go:198] Error preparing data for projected volume kube-api-access-jwqqg for pod default/busybox-5b5d89c9d6-c2997: object "default"/"kube-root-ca.crt" not registered
	I0318 13:11:01.017833    2404 command_runner.go:130] > Mar 18 13:09:52 multinode-894400 kubelet[1532]: E0318 13:09:52.602087    1532 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/171cbfa4-4415-4169-b25d-ff5905fd513f-kube-api-access-jwqqg podName:171cbfa4-4415-4169-b25d-ff5905fd513f nodeName:}" failed. No retries permitted until 2024-03-18 13:09:56.602073317 +0000 UTC m=+13.912909553 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-jwqqg" (UniqueName: "kubernetes.io/projected/171cbfa4-4415-4169-b25d-ff5905fd513f-kube-api-access-jwqqg") pod "busybox-5b5d89c9d6-c2997" (UID: "171cbfa4-4415-4169-b25d-ff5905fd513f") : object "default"/"kube-root-ca.crt" not registered
	I0318 13:11:01.017833    2404 command_runner.go:130] > Mar 18 13:09:53 multinode-894400 kubelet[1532]: E0318 13:09:53.009672    1532 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-456tm" podUID="1a018c55-846b-4dc2-992c-dc8fd82a6c67"
	I0318 13:11:01.017833    2404 command_runner.go:130] > Mar 18 13:09:53 multinode-894400 kubelet[1532]: E0318 13:09:53.010317    1532 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-c2997" podUID="171cbfa4-4415-4169-b25d-ff5905fd513f"
	I0318 13:11:01.017833    2404 command_runner.go:130] > Mar 18 13:09:55 multinode-894400 kubelet[1532]: E0318 13:09:55.010917    1532 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-456tm" podUID="1a018c55-846b-4dc2-992c-dc8fd82a6c67"
	I0318 13:11:01.017833    2404 command_runner.go:130] > Mar 18 13:09:55 multinode-894400 kubelet[1532]: E0318 13:09:55.011786    1532 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-c2997" podUID="171cbfa4-4415-4169-b25d-ff5905fd513f"
	I0318 13:11:01.017833    2404 command_runner.go:130] > Mar 18 13:09:56 multinode-894400 kubelet[1532]: E0318 13:09:56.539408    1532 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0318 13:11:01.018375    2404 command_runner.go:130] > Mar 18 13:09:56 multinode-894400 kubelet[1532]: E0318 13:09:56.539534    1532 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1a018c55-846b-4dc2-992c-dc8fd82a6c67-config-volume podName:1a018c55-846b-4dc2-992c-dc8fd82a6c67 nodeName:}" failed. No retries permitted until 2024-03-18 13:10:04.539515204 +0000 UTC m=+21.850351440 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/1a018c55-846b-4dc2-992c-dc8fd82a6c67-config-volume") pod "coredns-5dd5756b68-456tm" (UID: "1a018c55-846b-4dc2-992c-dc8fd82a6c67") : object "kube-system"/"coredns" not registered
	I0318 13:11:01.018447    2404 command_runner.go:130] > Mar 18 13:09:56 multinode-894400 kubelet[1532]: E0318 13:09:56.639919    1532 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0318 13:11:01.018447    2404 command_runner.go:130] > Mar 18 13:09:56 multinode-894400 kubelet[1532]: E0318 13:09:56.639948    1532 projected.go:198] Error preparing data for projected volume kube-api-access-jwqqg for pod default/busybox-5b5d89c9d6-c2997: object "default"/"kube-root-ca.crt" not registered
	I0318 13:11:01.018531    2404 command_runner.go:130] > Mar 18 13:09:56 multinode-894400 kubelet[1532]: E0318 13:09:56.639998    1532 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/171cbfa4-4415-4169-b25d-ff5905fd513f-kube-api-access-jwqqg podName:171cbfa4-4415-4169-b25d-ff5905fd513f nodeName:}" failed. No retries permitted until 2024-03-18 13:10:04.639981843 +0000 UTC m=+21.950818079 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-jwqqg" (UniqueName: "kubernetes.io/projected/171cbfa4-4415-4169-b25d-ff5905fd513f-kube-api-access-jwqqg") pod "busybox-5b5d89c9d6-c2997" (UID: "171cbfa4-4415-4169-b25d-ff5905fd513f") : object "default"/"kube-root-ca.crt" not registered
	I0318 13:11:01.018585    2404 command_runner.go:130] > Mar 18 13:09:57 multinode-894400 kubelet[1532]: E0318 13:09:57.009521    1532 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-456tm" podUID="1a018c55-846b-4dc2-992c-dc8fd82a6c67"
	I0318 13:11:01.018640    2404 command_runner.go:130] > Mar 18 13:09:57 multinode-894400 kubelet[1532]: E0318 13:09:57.010257    1532 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-c2997" podUID="171cbfa4-4415-4169-b25d-ff5905fd513f"
	I0318 13:11:01.018640    2404 command_runner.go:130] > Mar 18 13:09:59 multinode-894400 kubelet[1532]: E0318 13:09:59.011021    1532 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-456tm" podUID="1a018c55-846b-4dc2-992c-dc8fd82a6c67"
	I0318 13:11:01.018698    2404 command_runner.go:130] > Mar 18 13:09:59 multinode-894400 kubelet[1532]: E0318 13:09:59.011361    1532 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-c2997" podUID="171cbfa4-4415-4169-b25d-ff5905fd513f"
	I0318 13:11:01.018698    2404 command_runner.go:130] > Mar 18 13:10:01 multinode-894400 kubelet[1532]: E0318 13:10:01.009167    1532 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-456tm" podUID="1a018c55-846b-4dc2-992c-dc8fd82a6c67"
	I0318 13:11:01.018748    2404 command_runner.go:130] > Mar 18 13:10:01 multinode-894400 kubelet[1532]: E0318 13:10:01.009678    1532 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-c2997" podUID="171cbfa4-4415-4169-b25d-ff5905fd513f"
	I0318 13:11:01.018804    2404 command_runner.go:130] > Mar 18 13:10:03 multinode-894400 kubelet[1532]: E0318 13:10:03.010168    1532 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-456tm" podUID="1a018c55-846b-4dc2-992c-dc8fd82a6c67"
	I0318 13:11:01.018854    2404 command_runner.go:130] > Mar 18 13:10:03 multinode-894400 kubelet[1532]: E0318 13:10:03.011736    1532 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-c2997" podUID="171cbfa4-4415-4169-b25d-ff5905fd513f"
	I0318 13:11:01.018854    2404 command_runner.go:130] > Mar 18 13:10:04 multinode-894400 kubelet[1532]: E0318 13:10:04.603257    1532 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0318 13:11:01.018909    2404 command_runner.go:130] > Mar 18 13:10:04 multinode-894400 kubelet[1532]: E0318 13:10:04.603387    1532 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1a018c55-846b-4dc2-992c-dc8fd82a6c67-config-volume podName:1a018c55-846b-4dc2-992c-dc8fd82a6c67 nodeName:}" failed. No retries permitted until 2024-03-18 13:10:20.60337037 +0000 UTC m=+37.914206606 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/1a018c55-846b-4dc2-992c-dc8fd82a6c67-config-volume") pod "coredns-5dd5756b68-456tm" (UID: "1a018c55-846b-4dc2-992c-dc8fd82a6c67") : object "kube-system"/"coredns" not registered
	I0318 13:11:01.018960    2404 command_runner.go:130] > Mar 18 13:10:04 multinode-894400 kubelet[1532]: E0318 13:10:04.704132    1532 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0318 13:11:01.018960    2404 command_runner.go:130] > Mar 18 13:10:04 multinode-894400 kubelet[1532]: E0318 13:10:04.704169    1532 projected.go:198] Error preparing data for projected volume kube-api-access-jwqqg for pod default/busybox-5b5d89c9d6-c2997: object "default"/"kube-root-ca.crt" not registered
	I0318 13:11:01.019034    2404 command_runner.go:130] > Mar 18 13:10:04 multinode-894400 kubelet[1532]: E0318 13:10:04.704219    1532 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/171cbfa4-4415-4169-b25d-ff5905fd513f-kube-api-access-jwqqg podName:171cbfa4-4415-4169-b25d-ff5905fd513f nodeName:}" failed. No retries permitted until 2024-03-18 13:10:20.704204798 +0000 UTC m=+38.015041034 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-jwqqg" (UniqueName: "kubernetes.io/projected/171cbfa4-4415-4169-b25d-ff5905fd513f-kube-api-access-jwqqg") pod "busybox-5b5d89c9d6-c2997" (UID: "171cbfa4-4415-4169-b25d-ff5905fd513f") : object "default"/"kube-root-ca.crt" not registered
	I0318 13:11:01.019082    2404 command_runner.go:130] > Mar 18 13:10:05 multinode-894400 kubelet[1532]: E0318 13:10:05.009461    1532 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-456tm" podUID="1a018c55-846b-4dc2-992c-dc8fd82a6c67"
	I0318 13:11:01.019127    2404 command_runner.go:130] > Mar 18 13:10:05 multinode-894400 kubelet[1532]: E0318 13:10:05.010204    1532 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-c2997" podUID="171cbfa4-4415-4169-b25d-ff5905fd513f"
	I0318 13:11:01.019154    2404 command_runner.go:130] > Mar 18 13:10:07 multinode-894400 kubelet[1532]: E0318 13:10:07.009925    1532 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-456tm" podUID="1a018c55-846b-4dc2-992c-dc8fd82a6c67"
	I0318 13:11:01.019193    2404 command_runner.go:130] > Mar 18 13:10:07 multinode-894400 kubelet[1532]: E0318 13:10:07.010942    1532 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-c2997" podUID="171cbfa4-4415-4169-b25d-ff5905fd513f"
	I0318 13:11:01.019263    2404 command_runner.go:130] > Mar 18 13:10:09 multinode-894400 kubelet[1532]: E0318 13:10:09.010506    1532 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-456tm" podUID="1a018c55-846b-4dc2-992c-dc8fd82a6c67"
	I0318 13:11:01.019312    2404 command_runner.go:130] > Mar 18 13:10:09 multinode-894400 kubelet[1532]: E0318 13:10:09.011883    1532 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-c2997" podUID="171cbfa4-4415-4169-b25d-ff5905fd513f"
	I0318 13:11:01.019312    2404 command_runner.go:130] > Mar 18 13:10:11 multinode-894400 kubelet[1532]: E0318 13:10:11.009145    1532 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-c2997" podUID="171cbfa4-4415-4169-b25d-ff5905fd513f"
	I0318 13:11:01.019364    2404 command_runner.go:130] > Mar 18 13:10:11 multinode-894400 kubelet[1532]: E0318 13:10:11.011730    1532 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-456tm" podUID="1a018c55-846b-4dc2-992c-dc8fd82a6c67"
	I0318 13:11:01.019424    2404 command_runner.go:130] > Mar 18 13:10:13 multinode-894400 kubelet[1532]: E0318 13:10:13.010103    1532 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-456tm" podUID="1a018c55-846b-4dc2-992c-dc8fd82a6c67"
	I0318 13:11:01.019424    2404 command_runner.go:130] > Mar 18 13:10:13 multinode-894400 kubelet[1532]: E0318 13:10:13.010921    1532 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-c2997" podUID="171cbfa4-4415-4169-b25d-ff5905fd513f"
	I0318 13:11:01.019480    2404 command_runner.go:130] > Mar 18 13:10:15 multinode-894400 kubelet[1532]: E0318 13:10:15.009361    1532 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-456tm" podUID="1a018c55-846b-4dc2-992c-dc8fd82a6c67"
	I0318 13:11:01.019480    2404 command_runner.go:130] > Mar 18 13:10:15 multinode-894400 kubelet[1532]: E0318 13:10:15.010565    1532 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-c2997" podUID="171cbfa4-4415-4169-b25d-ff5905fd513f"
	I0318 13:11:01.019480    2404 command_runner.go:130] > Mar 18 13:10:17 multinode-894400 kubelet[1532]: E0318 13:10:17.009688    1532 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-456tm" podUID="1a018c55-846b-4dc2-992c-dc8fd82a6c67"
	I0318 13:11:01.019480    2404 command_runner.go:130] > Mar 18 13:10:17 multinode-894400 kubelet[1532]: E0318 13:10:17.010200    1532 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-c2997" podUID="171cbfa4-4415-4169-b25d-ff5905fd513f"
	I0318 13:11:01.019480    2404 command_runner.go:130] > Mar 18 13:10:19 multinode-894400 kubelet[1532]: E0318 13:10:19.010187    1532 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-c2997" podUID="171cbfa4-4415-4169-b25d-ff5905fd513f"
	I0318 13:11:01.019480    2404 command_runner.go:130] > Mar 18 13:10:19 multinode-894400 kubelet[1532]: E0318 13:10:19.010730    1532 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-456tm" podUID="1a018c55-846b-4dc2-992c-dc8fd82a6c67"
	I0318 13:11:01.019480    2404 command_runner.go:130] > Mar 18 13:10:20 multinode-894400 kubelet[1532]: E0318 13:10:20.639546    1532 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0318 13:11:01.019480    2404 command_runner.go:130] > Mar 18 13:10:20 multinode-894400 kubelet[1532]: E0318 13:10:20.639747    1532 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1a018c55-846b-4dc2-992c-dc8fd82a6c67-config-volume podName:1a018c55-846b-4dc2-992c-dc8fd82a6c67 nodeName:}" failed. No retries permitted until 2024-03-18 13:10:52.639723825 +0000 UTC m=+69.950560161 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/1a018c55-846b-4dc2-992c-dc8fd82a6c67-config-volume") pod "coredns-5dd5756b68-456tm" (UID: "1a018c55-846b-4dc2-992c-dc8fd82a6c67") : object "kube-system"/"coredns" not registered
	I0318 13:11:01.019480    2404 command_runner.go:130] > Mar 18 13:10:20 multinode-894400 kubelet[1532]: E0318 13:10:20.740353    1532 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0318 13:11:01.019480    2404 command_runner.go:130] > Mar 18 13:10:20 multinode-894400 kubelet[1532]: E0318 13:10:20.740517    1532 projected.go:198] Error preparing data for projected volume kube-api-access-jwqqg for pod default/busybox-5b5d89c9d6-c2997: object "default"/"kube-root-ca.crt" not registered
	I0318 13:11:01.019480    2404 command_runner.go:130] > Mar 18 13:10:20 multinode-894400 kubelet[1532]: E0318 13:10:20.740585    1532 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/171cbfa4-4415-4169-b25d-ff5905fd513f-kube-api-access-jwqqg podName:171cbfa4-4415-4169-b25d-ff5905fd513f nodeName:}" failed. No retries permitted until 2024-03-18 13:10:52.740566824 +0000 UTC m=+70.051403160 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-jwqqg" (UniqueName: "kubernetes.io/projected/171cbfa4-4415-4169-b25d-ff5905fd513f-kube-api-access-jwqqg") pod "busybox-5b5d89c9d6-c2997" (UID: "171cbfa4-4415-4169-b25d-ff5905fd513f") : object "default"/"kube-root-ca.crt" not registered
	I0318 13:11:01.019480    2404 command_runner.go:130] > Mar 18 13:10:21 multinode-894400 kubelet[1532]: E0318 13:10:21.010015    1532 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-456tm" podUID="1a018c55-846b-4dc2-992c-dc8fd82a6c67"
	I0318 13:11:01.019480    2404 command_runner.go:130] > Mar 18 13:10:21 multinode-894400 kubelet[1532]: E0318 13:10:21.011108    1532 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-c2997" podUID="171cbfa4-4415-4169-b25d-ff5905fd513f"
	I0318 13:11:01.019480    2404 command_runner.go:130] > Mar 18 13:10:21 multinode-894400 kubelet[1532]: I0318 13:10:21.647969    1532 scope.go:117] "RemoveContainer" containerID="a2c499223090cc38a7b425469621fb6c8dbc443ab7eb0d5841f1fdcea2922366"
	I0318 13:11:01.019480    2404 command_runner.go:130] > Mar 18 13:10:21 multinode-894400 kubelet[1532]: I0318 13:10:21.651387    1532 scope.go:117] "RemoveContainer" containerID="46c0cf90d385f0a29de6e795bc03f2d1837ea1819ef2389992a26aaf77e63264"
	I0318 13:11:01.019480    2404 command_runner.go:130] > Mar 18 13:10:21 multinode-894400 kubelet[1532]: E0318 13:10:21.652104    1532 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(219bafbc-d807-44cf-9927-e4957f36ad70)\"" pod="kube-system/storage-provisioner" podUID="219bafbc-d807-44cf-9927-e4957f36ad70"
	I0318 13:11:01.020133    2404 command_runner.go:130] > Mar 18 13:10:23 multinode-894400 kubelet[1532]: E0318 13:10:23.010116    1532 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-456tm" podUID="1a018c55-846b-4dc2-992c-dc8fd82a6c67"
	I0318 13:11:01.020237    2404 command_runner.go:130] > Mar 18 13:10:23 multinode-894400 kubelet[1532]: E0318 13:10:23.010816    1532 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-c2997" podUID="171cbfa4-4415-4169-b25d-ff5905fd513f"
	I0318 13:11:01.020237    2404 command_runner.go:130] > Mar 18 13:10:23 multinode-894400 kubelet[1532]: I0318 13:10:23.777913    1532 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	I0318 13:11:01.020237    2404 command_runner.go:130] > Mar 18 13:10:35 multinode-894400 kubelet[1532]: I0318 13:10:35.009532    1532 scope.go:117] "RemoveContainer" containerID="46c0cf90d385f0a29de6e795bc03f2d1837ea1819ef2389992a26aaf77e63264"
	I0318 13:11:01.020237    2404 command_runner.go:130] > Mar 18 13:10:43 multinode-894400 kubelet[1532]: I0318 13:10:43.012571    1532 scope.go:117] "RemoveContainer" containerID="56d1819beb10ed198593d8a369f601faf82bf81ff1aecdbffe7114cd1265351b"
	I0318 13:11:01.020237    2404 command_runner.go:130] > Mar 18 13:10:43 multinode-894400 kubelet[1532]: E0318 13:10:43.030354    1532 iptables.go:575] "Could not set up iptables canary" err=<
	I0318 13:11:01.020237    2404 command_runner.go:130] > Mar 18 13:10:43 multinode-894400 kubelet[1532]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	I0318 13:11:01.020237    2404 command_runner.go:130] > Mar 18 13:10:43 multinode-894400 kubelet[1532]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	I0318 13:11:01.020237    2404 command_runner.go:130] > Mar 18 13:10:43 multinode-894400 kubelet[1532]:         Perhaps ip6tables or your kernel needs to be upgraded.
	I0318 13:11:01.020237    2404 command_runner.go:130] > Mar 18 13:10:43 multinode-894400 kubelet[1532]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	I0318 13:11:01.020237    2404 command_runner.go:130] > Mar 18 13:10:43 multinode-894400 kubelet[1532]: I0318 13:10:43.056417    1532 scope.go:117] "RemoveContainer" containerID="c51f768a2f642fdffc6de67f101be5abd8bbaec83ef13011b47efab5aad27134"
	I0318 13:11:01.059892    2404 logs.go:123] Gathering logs for etcd [5f0887d1e691] ...
	I0318 13:11:01.059892    2404 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f0887d1e691"
	I0318 13:11:01.087769    2404 command_runner.go:130] ! {"level":"warn","ts":"2024-03-18T13:09:44.778754Z","caller":"embed/config.go:673","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I0318 13:11:01.088679    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:44.779618Z","caller":"etcdmain/etcd.go:73","msg":"Running: ","args":["etcd","--advertise-client-urls=https://172.30.130.156:2379","--cert-file=/var/lib/minikube/certs/etcd/server.crt","--client-cert-auth=true","--data-dir=/var/lib/minikube/etcd","--experimental-initial-corrupt-check=true","--experimental-watch-progress-notify-interval=5s","--initial-advertise-peer-urls=https://172.30.130.156:2380","--initial-cluster=multinode-894400=https://172.30.130.156:2380","--key-file=/var/lib/minikube/certs/etcd/server.key","--listen-client-urls=https://127.0.0.1:2379,https://172.30.130.156:2379","--listen-metrics-urls=http://127.0.0.1:2381","--listen-peer-urls=https://172.30.130.156:2380","--name=multinode-894400","--peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt","--peer-client-cert-auth=true","--peer-key-file=/var/lib/minikube/certs/etcd/peer.key","--peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--proxy-refresh-interval=70000","--snapshot-count=10000","--trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt"]}
	I0318 13:11:01.088679    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:44.780287Z","caller":"etcdmain/etcd.go:116","msg":"server has been already initialized","data-dir":"/var/lib/minikube/etcd","dir-type":"member"}
	I0318 13:11:01.088679    2404 command_runner.go:130] ! {"level":"warn","ts":"2024-03-18T13:09:44.780316Z","caller":"embed/config.go:673","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I0318 13:11:01.088679    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:44.780326Z","caller":"embed/etcd.go:127","msg":"configuring peer listeners","listen-peer-urls":["https://172.30.130.156:2380"]}
	I0318 13:11:01.088679    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:44.780518Z","caller":"embed/etcd.go:495","msg":"starting with peer TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I0318 13:11:01.088679    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:44.782775Z","caller":"embed/etcd.go:135","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://172.30.130.156:2379"]}
	I0318 13:11:01.088679    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:44.785511Z","caller":"embed/etcd.go:309","msg":"starting an etcd server","etcd-version":"3.5.9","git-sha":"bdbbde998","go-version":"go1.19.9","go-os":"linux","go-arch":"amd64","max-cpu-set":2,"max-cpu-available":2,"member-initialized":true,"name":"multinode-894400","data-dir":"/var/lib/minikube/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/minikube/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://172.30.130.156:2380"],"listen-peer-urls":["https://172.30.130.156:2380"],"advertise-client-urls":["https://172.30.130.156:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.30.130.156:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"cors":["*"],"host-whitelist":["*"],"initial-cluster":"","initial-cluster-state":"new","initial-cluster-token":"","quota-backend-bytes":2147483648,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"initial-corrupt-check":true,"corrupt-check-time-interval":"0s","compact-check-time-enabled":false,"compact-check-time-interval":"1m0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","downgrade-check-interval":"5s"}
	I0318 13:11:01.088679    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:44.809621Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"22.951578ms"}
	I0318 13:11:01.089225    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:44.849189Z","caller":"etcdserver/server.go:530","msg":"No snapshot found. Recovering WAL from scratch!"}
	I0318 13:11:01.089225    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:44.872854Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"2db881e830cc2153","local-member-id":"c2557cd98fa8d31a","commit-index":1981}
	I0318 13:11:01.089225    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:44.87358Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c2557cd98fa8d31a switched to configuration voters=()"}
	I0318 13:11:01.089225    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:44.873736Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c2557cd98fa8d31a became follower at term 2"}
	I0318 13:11:01.089225    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:44.873929Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft c2557cd98fa8d31a [peers: [], term: 2, commit: 1981, applied: 0, lastindex: 1981, lastterm: 2]"}
	I0318 13:11:01.089367    2404 command_runner.go:130] ! {"level":"warn","ts":"2024-03-18T13:09:44.887865Z","caller":"auth/store.go:1238","msg":"simple token is not cryptographically signed"}
	I0318 13:11:01.089367    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:44.892732Z","caller":"mvcc/kvstore.go:323","msg":"restored last compact revision","meta-bucket-name":"meta","meta-bucket-name-key":"finishedCompactRev","restored-compact-revision":1376}
	I0318 13:11:01.089367    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:44.89955Z","caller":"mvcc/kvstore.go:393","msg":"kvstore restored","current-rev":1715}
	I0318 13:11:01.089367    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:44.914592Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	I0318 13:11:01.089466    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:44.926835Z","caller":"etcdserver/corrupt.go:95","msg":"starting initial corruption check","local-member-id":"c2557cd98fa8d31a","timeout":"7s"}
	I0318 13:11:01.089466    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:44.928545Z","caller":"etcdserver/corrupt.go:165","msg":"initial corruption checking passed; no corruption","local-member-id":"c2557cd98fa8d31a"}
	I0318 13:11:01.089466    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:44.930225Z","caller":"etcdserver/server.go:854","msg":"starting etcd server","local-member-id":"c2557cd98fa8d31a","local-server-version":"3.5.9","cluster-version":"to_be_decided"}
	I0318 13:11:01.089466    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:44.930859Z","caller":"etcdserver/server.go:754","msg":"starting initial election tick advance","election-ticks":10}
	I0318 13:11:01.089556    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:44.931762Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c2557cd98fa8d31a switched to configuration voters=(14003235890238378778)"}
	I0318 13:11:01.089556    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:44.932128Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"2db881e830cc2153","local-member-id":"c2557cd98fa8d31a","added-peer-id":"c2557cd98fa8d31a","added-peer-peer-urls":["https://172.30.129.141:2380"]}
	I0318 13:11:01.089633    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:44.933388Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"2db881e830cc2153","local-member-id":"c2557cd98fa8d31a","cluster-version":"3.5"}
	I0318 13:11:01.089633    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:44.933717Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	I0318 13:11:01.089633    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:44.946226Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	I0318 13:11:01.089736    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:44.947818Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	I0318 13:11:01.089736    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:44.948803Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	I0318 13:11:01.089826    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:44.954567Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I0318 13:11:01.089826    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:44.954988Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"c2557cd98fa8d31a","initial-advertise-peer-urls":["https://172.30.130.156:2380"],"listen-peer-urls":["https://172.30.130.156:2380"],"advertise-client-urls":["https://172.30.130.156:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.30.130.156:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	I0318 13:11:01.089912    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:44.955173Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	I0318 13:11:01.089912    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:44.954599Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"172.30.130.156:2380"}
	I0318 13:11:01.089912    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:44.956126Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"172.30.130.156:2380"}
	I0318 13:11:01.089912    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:46.775466Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c2557cd98fa8d31a is starting a new election at term 2"}
	I0318 13:11:01.090001    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:46.775581Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c2557cd98fa8d31a became pre-candidate at term 2"}
	I0318 13:11:01.090001    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:46.775704Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c2557cd98fa8d31a received MsgPreVoteResp from c2557cd98fa8d31a at term 2"}
	I0318 13:11:01.090001    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:46.775731Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c2557cd98fa8d31a became candidate at term 3"}
	I0318 13:11:01.090001    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:46.77574Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c2557cd98fa8d31a received MsgVoteResp from c2557cd98fa8d31a at term 3"}
	I0318 13:11:01.090001    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:46.775752Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c2557cd98fa8d31a became leader at term 3"}
	I0318 13:11:01.090001    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:46.775764Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: c2557cd98fa8d31a elected leader c2557cd98fa8d31a at term 3"}
	I0318 13:11:01.090105    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:46.782683Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"c2557cd98fa8d31a","local-member-attributes":"{Name:multinode-894400 ClientURLs:[https://172.30.130.156:2379]}","request-path":"/0/members/c2557cd98fa8d31a/attributes","cluster-id":"2db881e830cc2153","publish-timeout":"7s"}
	I0318 13:11:01.090105    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:46.78269Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	I0318 13:11:01.090105    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:46.782706Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	I0318 13:11:01.090105    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:46.783976Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.30.130.156:2379"}
	I0318 13:11:01.090205    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:46.783993Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	I0318 13:11:01.090205    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:46.788664Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	I0318 13:11:01.090205    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:46.788817Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	I0318 13:11:01.097528    2404 logs.go:123] Gathering logs for kube-scheduler [66ee8be9fada] ...
	I0318 13:11:01.097588    2404 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66ee8be9fada"
	I0318 13:11:01.122600    2404 command_runner.go:130] ! I0318 13:09:45.699415       1 serving.go:348] Generated self-signed cert in-memory
	I0318 13:11:01.122600    2404 command_runner.go:130] ! W0318 13:09:48.342100       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	I0318 13:11:01.122914    2404 command_runner.go:130] ! W0318 13:09:48.342243       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	I0318 13:11:01.122914    2404 command_runner.go:130] ! W0318 13:09:48.342324       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	I0318 13:11:01.122987    2404 command_runner.go:130] ! W0318 13:09:48.342374       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0318 13:11:01.122987    2404 command_runner.go:130] ! I0318 13:09:48.402495       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.4"
	I0318 13:11:01.122987    2404 command_runner.go:130] ! I0318 13:09:48.402540       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 13:11:01.122987    2404 command_runner.go:130] ! I0318 13:09:48.407228       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0318 13:11:01.122987    2404 command_runner.go:130] ! I0318 13:09:48.409117       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0318 13:11:01.122987    2404 command_runner.go:130] ! I0318 13:09:48.410197       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0318 13:11:01.123058    2404 command_runner.go:130] ! I0318 13:09:48.410738       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0318 13:11:01.123058    2404 command_runner.go:130] ! I0318 13:09:48.510577       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0318 13:11:01.125506    2404 logs.go:123] Gathering logs for kube-proxy [9335855aab63] ...
	I0318 13:11:01.125577    2404 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9335855aab63"
	I0318 13:11:01.158945    2404 command_runner.go:130] ! I0318 12:47:42.888603       1 server_others.go:69] "Using iptables proxy"
	I0318 13:11:01.159256    2404 command_runner.go:130] ! I0318 12:47:42.909658       1 node.go:141] Successfully retrieved node IP: 172.30.129.141
	I0318 13:11:01.159256    2404 command_runner.go:130] ! I0318 12:47:42.965774       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0318 13:11:01.159256    2404 command_runner.go:130] ! I0318 12:47:42.965824       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0318 13:11:01.159256    2404 command_runner.go:130] ! I0318 12:47:42.983172       1 server_others.go:152] "Using iptables Proxier"
	I0318 13:11:01.159256    2404 command_runner.go:130] ! I0318 12:47:42.983221       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0318 13:11:01.159256    2404 command_runner.go:130] ! I0318 12:47:42.983471       1 server.go:846] "Version info" version="v1.28.4"
	I0318 13:11:01.159256    2404 command_runner.go:130] ! I0318 12:47:42.983484       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 13:11:01.159256    2404 command_runner.go:130] ! I0318 12:47:42.987719       1 config.go:188] "Starting service config controller"
	I0318 13:11:01.159256    2404 command_runner.go:130] ! I0318 12:47:42.987733       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0318 13:11:01.159256    2404 command_runner.go:130] ! I0318 12:47:42.987775       1 config.go:97] "Starting endpoint slice config controller"
	I0318 13:11:01.159256    2404 command_runner.go:130] ! I0318 12:47:42.987781       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0318 13:11:01.159833    2404 command_runner.go:130] ! I0318 12:47:42.988298       1 config.go:315] "Starting node config controller"
	I0318 13:11:01.159833    2404 command_runner.go:130] ! I0318 12:47:42.988306       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0318 13:11:01.159833    2404 command_runner.go:130] ! I0318 12:47:43.088485       1 shared_informer.go:318] Caches are synced for service config
	I0318 13:11:01.159833    2404 command_runner.go:130] ! I0318 12:47:43.088594       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0318 13:11:01.159833    2404 command_runner.go:130] ! I0318 12:47:43.088517       1 shared_informer.go:318] Caches are synced for node config
	I0318 13:11:01.161255    2404 logs.go:123] Gathering logs for coredns [3c3bc988c74c] ...
	I0318 13:11:01.161255    2404 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c3bc988c74c"
	I0318 13:11:01.188865    2404 command_runner.go:130] > .:53
	I0318 13:11:01.188865    2404 command_runner.go:130] > [INFO] plugin/reload: Running configuration SHA512 = fa2b120b3007aba3ef70c07d73473d14986404bfc5683cceeb89e8950d5ace894afca7d43365324975f99d1b2da3d6473c722fe82795d39a4ee8b4b969b7aa71
	I0318 13:11:01.188865    2404 command_runner.go:130] > CoreDNS-1.10.1
	I0318 13:11:01.188865    2404 command_runner.go:130] > linux/amd64, go1.20, 055b2c3
	I0318 13:11:01.188865    2404 command_runner.go:130] > [INFO] 127.0.0.1:47251 - 801 "HINFO IN 2968659138506762197.6766024496084331989. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.051583557s
	I0318 13:11:01.188865    2404 logs.go:123] Gathering logs for kube-controller-manager [7aa5cf4ec378] ...
	I0318 13:11:01.188865    2404 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7aa5cf4ec378"
	I0318 13:11:01.215116    2404 command_runner.go:130] ! I0318 12:47:22.447675       1 serving.go:348] Generated self-signed cert in-memory
	I0318 13:11:01.216007    2404 command_runner.go:130] ! I0318 12:47:22.964394       1 controllermanager.go:189] "Starting" version="v1.28.4"
	I0318 13:11:01.216007    2404 command_runner.go:130] ! I0318 12:47:22.964509       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 13:11:01.216007    2404 command_runner.go:130] ! I0318 12:47:22.966671       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0318 13:11:01.216007    2404 command_runner.go:130] ! I0318 12:47:22.967091       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0318 13:11:01.216007    2404 command_runner.go:130] ! I0318 12:47:22.968348       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0318 13:11:01.216132    2404 command_runner.go:130] ! I0318 12:47:22.969286       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0318 13:11:01.216132    2404 command_runner.go:130] ! I0318 12:47:27.391471       1 shared_informer.go:311] Waiting for caches to sync for tokens
	I0318 13:11:01.216159    2404 command_runner.go:130] ! I0318 12:47:27.423488       1 controllermanager.go:642] "Started controller" controller="garbage-collector-controller"
	I0318 13:11:01.216159    2404 command_runner.go:130] ! I0318 12:47:27.424256       1 garbagecollector.go:155] "Starting controller" controller="garbagecollector"
	I0318 13:11:01.216159    2404 command_runner.go:130] ! I0318 12:47:27.424289       1 shared_informer.go:311] Waiting for caches to sync for garbage collector
	I0318 13:11:01.216159    2404 command_runner.go:130] ! I0318 12:47:27.424374       1 graph_builder.go:294] "Running" component="GraphBuilder"
	I0318 13:11:01.216256    2404 command_runner.go:130] ! I0318 12:47:27.451725       1 controllermanager.go:642] "Started controller" controller="ephemeral-volume-controller"
	I0318 13:11:01.216256    2404 command_runner.go:130] ! I0318 12:47:27.451967       1 controller.go:169] "Starting ephemeral volume controller"
	I0318 13:11:01.216256    2404 command_runner.go:130] ! I0318 12:47:27.452425       1 shared_informer.go:311] Waiting for caches to sync for ephemeral
	I0318 13:11:01.216256    2404 command_runner.go:130] ! I0318 12:47:27.464873       1 controllermanager.go:642] "Started controller" controller="serviceaccount-controller"
	I0318 13:11:01.216256    2404 command_runner.go:130] ! I0318 12:47:27.465150       1 serviceaccounts_controller.go:111] "Starting service account controller"
	I0318 13:11:01.216318    2404 command_runner.go:130] ! I0318 12:47:27.465172       1 shared_informer.go:311] Waiting for caches to sync for service account
	I0318 13:11:01.216458    2404 command_runner.go:130] ! I0318 12:47:27.491949       1 shared_informer.go:318] Caches are synced for tokens
	I0318 13:11:01.216458    2404 command_runner.go:130] ! I0318 12:47:37.491900       1 range_allocator.go:111] "No Secondary Service CIDR provided. Skipping filtering out secondary service addresses"
	I0318 13:11:01.216458    2404 command_runner.go:130] ! I0318 12:47:37.492009       1 controllermanager.go:642] "Started controller" controller="node-ipam-controller"
	I0318 13:11:01.216458    2404 command_runner.go:130] ! I0318 12:47:37.492602       1 node_ipam_controller.go:162] "Starting ipam controller"
	I0318 13:11:01.216458    2404 command_runner.go:130] ! I0318 12:47:37.492659       1 shared_informer.go:311] Waiting for caches to sync for node
	I0318 13:11:01.216458    2404 command_runner.go:130] ! E0318 12:47:37.494780       1 core.go:213] "Failed to start cloud node lifecycle controller" err="no cloud provider provided"
	I0318 13:11:01.216558    2404 command_runner.go:130] ! I0318 12:47:37.494859       1 controllermanager.go:620] "Warning: skipping controller" controller="cloud-node-lifecycle-controller"
	I0318 13:11:01.216558    2404 command_runner.go:130] ! I0318 12:47:37.511992       1 controllermanager.go:642] "Started controller" controller="persistentvolume-binder-controller"
	I0318 13:11:01.216558    2404 command_runner.go:130] ! I0318 12:47:37.512162       1 pv_controller_base.go:319] "Starting persistent volume controller"
	I0318 13:11:01.216558    2404 command_runner.go:130] ! I0318 12:47:37.512576       1 shared_informer.go:311] Waiting for caches to sync for persistent volume
	I0318 13:11:01.216746    2404 command_runner.go:130] ! I0318 12:47:37.525022       1 controllermanager.go:642] "Started controller" controller="persistentvolume-protection-controller"
	I0318 13:11:01.216746    2404 command_runner.go:130] ! I0318 12:47:37.525273       1 pv_protection_controller.go:78] "Starting PV protection controller"
	I0318 13:11:01.216746    2404 command_runner.go:130] ! I0318 12:47:37.525287       1 shared_informer.go:311] Waiting for caches to sync for PV protection
	I0318 13:11:01.216746    2404 command_runner.go:130] ! I0318 12:47:37.540701       1 controllermanager.go:642] "Started controller" controller="endpointslice-controller"
	I0318 13:11:01.216746    2404 command_runner.go:130] ! I0318 12:47:37.540905       1 endpointslice_controller.go:264] "Starting endpoint slice controller"
	I0318 13:11:01.216746    2404 command_runner.go:130] ! I0318 12:47:37.540914       1 shared_informer.go:311] Waiting for caches to sync for endpoint_slice
	I0318 13:11:01.217297    2404 command_runner.go:130] ! I0318 12:47:37.562000       1 controllermanager.go:642] "Started controller" controller="replicaset-controller"
	I0318 13:11:01.217526    2404 command_runner.go:130] ! I0318 12:47:37.562256       1 replica_set.go:214] "Starting controller" name="replicaset"
	I0318 13:11:01.217526    2404 command_runner.go:130] ! I0318 12:47:37.562286       1 shared_informer.go:311] Waiting for caches to sync for ReplicaSet
	I0318 13:11:01.217611    2404 command_runner.go:130] ! I0318 12:47:37.574397       1 controllermanager.go:642] "Started controller" controller="persistentvolume-expander-controller"
	I0318 13:11:01.217815    2404 command_runner.go:130] ! I0318 12:47:37.574869       1 expand_controller.go:328] "Starting expand controller"
	I0318 13:11:01.218636    2404 command_runner.go:130] ! I0318 12:47:37.574937       1 shared_informer.go:311] Waiting for caches to sync for expand
	I0318 13:11:01.218737    2404 command_runner.go:130] ! I0318 12:47:37.587914       1 controllermanager.go:642] "Started controller" controller="clusterrole-aggregation-controller"
	I0318 13:11:01.218737    2404 command_runner.go:130] ! I0318 12:47:37.588166       1 clusterroleaggregation_controller.go:189] "Starting ClusterRoleAggregator controller"
	I0318 13:11:01.218737    2404 command_runner.go:130] ! I0318 12:47:37.588199       1 shared_informer.go:311] Waiting for caches to sync for ClusterRoleAggregator
	I0318 13:11:01.218737    2404 command_runner.go:130] ! I0318 12:47:37.609721       1 controllermanager.go:642] "Started controller" controller="daemonset-controller"
	I0318 13:11:01.218799    2404 command_runner.go:130] ! I0318 12:47:37.615354       1 daemon_controller.go:291] "Starting daemon sets controller"
	I0318 13:11:01.218799    2404 command_runner.go:130] ! I0318 12:47:37.615371       1 shared_informer.go:311] Waiting for caches to sync for daemon sets
	I0318 13:11:01.218824    2404 command_runner.go:130] ! I0318 12:47:37.624660       1 controllermanager.go:642] "Started controller" controller="ttl-controller"
	I0318 13:11:01.218824    2404 command_runner.go:130] ! I0318 12:47:37.624898       1 ttl_controller.go:124] "Starting TTL controller"
	I0318 13:11:01.218852    2404 command_runner.go:130] ! I0318 12:47:37.625063       1 shared_informer.go:311] Waiting for caches to sync for TTL
	I0318 13:11:01.218852    2404 command_runner.go:130] ! I0318 12:47:37.637461       1 controllermanager.go:642] "Started controller" controller="ttl-after-finished-controller"
	I0318 13:11:01.218852    2404 command_runner.go:130] ! I0318 12:47:37.637588       1 ttlafterfinished_controller.go:109] "Starting TTL after finished controller"
	I0318 13:11:01.218852    2404 command_runner.go:130] ! I0318 12:47:37.637699       1 shared_informer.go:311] Waiting for caches to sync for TTL after finished
	I0318 13:11:01.218852    2404 command_runner.go:130] ! I0318 12:47:37.649314       1 controllermanager.go:642] "Started controller" controller="deployment-controller"
	I0318 13:11:01.218852    2404 command_runner.go:130] ! I0318 12:47:37.650380       1 deployment_controller.go:168] "Starting controller" controller="deployment"
	I0318 13:11:01.218852    2404 command_runner.go:130] ! I0318 12:47:37.650462       1 shared_informer.go:311] Waiting for caches to sync for deployment
	I0318 13:11:01.218852    2404 command_runner.go:130] ! I0318 12:47:37.830447       1 controllermanager.go:642] "Started controller" controller="disruption-controller"
	I0318 13:11:01.218852    2404 command_runner.go:130] ! I0318 12:47:37.830565       1 disruption.go:433] "Sending events to api server."
	I0318 13:11:01.218852    2404 command_runner.go:130] ! I0318 12:47:37.830686       1 disruption.go:444] "Starting disruption controller"
	I0318 13:11:01.218852    2404 command_runner.go:130] ! I0318 12:47:37.830725       1 shared_informer.go:311] Waiting for caches to sync for disruption
	I0318 13:11:01.218852    2404 command_runner.go:130] ! I0318 12:47:37.985254       1 controllermanager.go:642] "Started controller" controller="endpoints-controller"
	I0318 13:11:01.218852    2404 command_runner.go:130] ! I0318 12:47:37.985453       1 endpoints_controller.go:174] "Starting endpoint controller"
	I0318 13:11:01.218852    2404 command_runner.go:130] ! I0318 12:47:37.985784       1 shared_informer.go:311] Waiting for caches to sync for endpoint
	I0318 13:11:01.218852    2404 command_runner.go:130] ! I0318 12:47:38.288543       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="endpoints"
	I0318 13:11:01.218852    2404 command_runner.go:130] ! I0318 12:47:38.289132       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="poddisruptionbudgets.policy"
	I0318 13:11:01.218852    2404 command_runner.go:130] ! I0318 12:47:38.289248       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="roles.rbac.authorization.k8s.io"
	I0318 13:11:01.218852    2404 command_runner.go:130] ! I0318 12:47:38.289520       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="statefulsets.apps"
	I0318 13:11:01.218852    2404 command_runner.go:130] ! I0318 12:47:38.289722       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="leases.coordination.k8s.io"
	I0318 13:11:01.218852    2404 command_runner.go:130] ! I0318 12:47:38.289927       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="limitranges"
	I0318 13:11:01.218852    2404 command_runner.go:130] ! I0318 12:47:38.290240       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="serviceaccounts"
	I0318 13:11:01.218852    2404 command_runner.go:130] ! I0318 12:47:38.290340       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="daemonsets.apps"
	I0318 13:11:01.218852    2404 command_runner.go:130] ! I0318 12:47:38.290418       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="jobs.batch"
	I0318 13:11:01.218852    2404 command_runner.go:130] ! I0318 12:47:38.290502       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="ingresses.networking.k8s.io"
	I0318 13:11:01.218852    2404 command_runner.go:130] ! I0318 12:47:38.290550       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="rolebindings.rbac.authorization.k8s.io"
	I0318 13:11:01.218852    2404 command_runner.go:130] ! I0318 12:47:38.290591       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="endpointslices.discovery.k8s.io"
	I0318 13:11:01.218852    2404 command_runner.go:130] ! I0318 12:47:38.290851       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="podtemplates"
	I0318 13:11:01.218852    2404 command_runner.go:130] ! I0318 12:47:38.291026       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="controllerrevisions.apps"
	I0318 13:11:01.218852    2404 command_runner.go:130] ! I0318 12:47:38.291117       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="replicasets.apps"
	I0318 13:11:01.218852    2404 command_runner.go:130] ! I0318 12:47:38.291149       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="cronjobs.batch"
	I0318 13:11:01.218852    2404 command_runner.go:130] ! I0318 12:47:38.291277       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="horizontalpodautoscalers.autoscaling"
	I0318 13:11:01.218852    2404 command_runner.go:130] ! I0318 12:47:38.291315       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="deployments.apps"
	I0318 13:11:01.218852    2404 command_runner.go:130] ! I0318 12:47:38.291392       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="csistoragecapacities.storage.k8s.io"
	I0318 13:11:01.218852    2404 command_runner.go:130] ! I0318 12:47:38.291423       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="networkpolicies.networking.k8s.io"
	I0318 13:11:01.219501    2404 command_runner.go:130] ! I0318 12:47:38.291465       1 controllermanager.go:642] "Started controller" controller="resourcequota-controller"
	I0318 13:11:01.219728    2404 command_runner.go:130] ! I0318 12:47:38.291591       1 resource_quota_controller.go:294] "Starting resource quota controller"
	I0318 13:11:01.219728    2404 command_runner.go:130] ! I0318 12:47:38.291607       1 shared_informer.go:311] Waiting for caches to sync for resource quota
	I0318 13:11:01.219728    2404 command_runner.go:130] ! I0318 12:47:38.291720       1 resource_quota_monitor.go:305] "QuotaMonitor running"
	I0318 13:11:01.219793    2404 command_runner.go:130] ! I0318 12:47:38.436018       1 controllermanager.go:642] "Started controller" controller="job-controller"
	I0318 13:11:01.219793    2404 command_runner.go:130] ! I0318 12:47:38.436093       1 job_controller.go:226] "Starting job controller"
	I0318 13:11:01.219793    2404 command_runner.go:130] ! I0318 12:47:38.436112       1 shared_informer.go:311] Waiting for caches to sync for job
	I0318 13:11:01.219793    2404 command_runner.go:130] ! I0318 12:47:38.731490       1 controllermanager.go:642] "Started controller" controller="horizontal-pod-autoscaler-controller"
	I0318 13:11:01.219912    2404 command_runner.go:130] ! I0318 12:47:38.731606       1 horizontal.go:200] "Starting HPA controller"
	I0318 13:11:01.219954    2404 command_runner.go:130] ! I0318 12:47:38.731671       1 shared_informer.go:311] Waiting for caches to sync for HPA
	I0318 13:11:01.219954    2404 command_runner.go:130] ! I0318 12:47:38.886224       1 controllermanager.go:642] "Started controller" controller="certificatesigningrequest-approving-controller"
	I0318 13:11:01.219954    2404 command_runner.go:130] ! I0318 12:47:38.886401       1 certificate_controller.go:115] "Starting certificate controller" name="csrapproving"
	I0318 13:11:01.220022    2404 command_runner.go:130] ! I0318 12:47:38.886705       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrapproving
	I0318 13:11:01.220022    2404 command_runner.go:130] ! I0318 12:47:38.930325       1 controllermanager.go:642] "Started controller" controller="certificatesigningrequest-cleaner-controller"
	I0318 13:11:01.220022    2404 command_runner.go:130] ! I0318 12:47:38.930354       1 core.go:228] "Warning: configure-cloud-routes is set, but no cloud provider specified. Will not configure cloud provider routes."
	I0318 13:11:01.220085    2404 command_runner.go:130] ! I0318 12:47:38.930362       1 controllermanager.go:620] "Warning: skipping controller" controller="node-route-controller"
	I0318 13:11:01.220085    2404 command_runner.go:130] ! I0318 12:47:38.930398       1 cleaner.go:83] "Starting CSR cleaner controller"
	I0318 13:11:01.220085    2404 command_runner.go:130] ! I0318 12:47:39.085782       1 controllermanager.go:642] "Started controller" controller="persistentvolume-attach-detach-controller"
	I0318 13:11:01.220085    2404 command_runner.go:130] ! I0318 12:47:39.085905       1 attach_detach_controller.go:337] "Starting attach detach controller"
	I0318 13:11:01.220144    2404 command_runner.go:130] ! I0318 12:47:39.085920       1 shared_informer.go:311] Waiting for caches to sync for attach detach
	I0318 13:11:01.220144    2404 command_runner.go:130] ! I0318 12:47:39.236755       1 controllermanager.go:642] "Started controller" controller="endpointslice-mirroring-controller"
	I0318 13:11:01.220144    2404 command_runner.go:130] ! I0318 12:47:39.237434       1 endpointslicemirroring_controller.go:223] "Starting EndpointSliceMirroring controller"
	I0318 13:11:01.220144    2404 command_runner.go:130] ! I0318 12:47:39.237522       1 shared_informer.go:311] Waiting for caches to sync for endpoint_slice_mirroring
	I0318 13:11:01.220207    2404 command_runner.go:130] ! I0318 12:47:39.390953       1 controllermanager.go:642] "Started controller" controller="replicationcontroller-controller"
	I0318 13:11:01.220207    2404 command_runner.go:130] ! I0318 12:47:39.391480       1 replica_set.go:214] "Starting controller" name="replicationcontroller"
	I0318 13:11:01.220207    2404 command_runner.go:130] ! I0318 12:47:39.391646       1 shared_informer.go:311] Waiting for caches to sync for ReplicationController
	I0318 13:11:01.220207    2404 command_runner.go:130] ! I0318 12:47:39.535570       1 controllermanager.go:642] "Started controller" controller="cronjob-controller"
	I0318 13:11:01.220265    2404 command_runner.go:130] ! I0318 12:47:39.536071       1 cronjob_controllerv2.go:139] "Starting cronjob controller v2"
	I0318 13:11:01.220265    2404 command_runner.go:130] ! I0318 12:47:39.536172       1 shared_informer.go:311] Waiting for caches to sync for cronjob
	I0318 13:11:01.220265    2404 command_runner.go:130] ! I0318 12:47:39.582776       1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-kubelet-serving"
	I0318 13:11:01.220265    2404 command_runner.go:130] ! I0318 12:47:39.582876       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I0318 13:11:01.220341    2404 command_runner.go:130] ! I0318 12:47:39.582912       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0318 13:11:01.220341    2404 command_runner.go:130] ! I0318 12:47:39.584602       1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-kubelet-client"
	I0318 13:11:01.220341    2404 command_runner.go:130] ! I0318 12:47:39.584677       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I0318 13:11:01.220341    2404 command_runner.go:130] ! I0318 12:47:39.584724       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0318 13:11:01.220396    2404 command_runner.go:130] ! I0318 12:47:39.585974       1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-kube-apiserver-client"
	I0318 13:11:01.220396    2404 command_runner.go:130] ! I0318 12:47:39.585990       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I0318 13:11:01.220396    2404 command_runner.go:130] ! I0318 12:47:39.586012       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0318 13:11:01.220454    2404 command_runner.go:130] ! I0318 12:47:39.586910       1 controllermanager.go:642] "Started controller" controller="certificatesigningrequest-signing-controller"
	I0318 13:11:01.220454    2404 command_runner.go:130] ! I0318 12:47:39.586968       1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-legacy-unknown"
	I0318 13:11:01.220454    2404 command_runner.go:130] ! I0318 12:47:39.586975       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0318 13:11:01.220454    2404 command_runner.go:130] ! I0318 12:47:39.587044       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0318 13:11:01.220507    2404 command_runner.go:130] ! I0318 12:47:39.735265       1 controllermanager.go:642] "Started controller" controller="token-cleaner-controller"
	I0318 13:11:01.220507    2404 command_runner.go:130] ! I0318 12:47:39.735467       1 tokencleaner.go:112] "Starting token cleaner controller"
	I0318 13:11:01.220507    2404 command_runner.go:130] ! I0318 12:47:39.735494       1 shared_informer.go:311] Waiting for caches to sync for token_cleaner
	I0318 13:11:01.220507    2404 command_runner.go:130] ! I0318 12:47:39.735502       1 shared_informer.go:318] Caches are synced for token_cleaner
	I0318 13:11:01.220507    2404 command_runner.go:130] ! I0318 12:47:39.783594       1 node_lifecycle_controller.go:431] "Controller will reconcile labels"
	I0318 13:11:01.220567    2404 command_runner.go:130] ! I0318 12:47:39.783722       1 controllermanager.go:642] "Started controller" controller="node-lifecycle-controller"
	I0318 13:11:01.220567    2404 command_runner.go:130] ! I0318 12:47:39.783841       1 node_lifecycle_controller.go:465] "Sending events to api server"
	I0318 13:11:01.220567    2404 command_runner.go:130] ! I0318 12:47:39.783860       1 node_lifecycle_controller.go:476] "Starting node controller"
	I0318 13:11:01.220567    2404 command_runner.go:130] ! I0318 12:47:39.784031       1 shared_informer.go:311] Waiting for caches to sync for taint
	I0318 13:11:01.220622    2404 command_runner.go:130] ! E0318 12:47:39.937206       1 core.go:92] "Failed to start service controller" err="WARNING: no cloud provider provided, services of type LoadBalancer will fail"
	I0318 13:11:01.220622    2404 command_runner.go:130] ! I0318 12:47:39.937229       1 controllermanager.go:620] "Warning: skipping controller" controller="service-lb-controller"
	I0318 13:11:01.220622    2404 command_runner.go:130] ! I0318 12:47:40.089508       1 controllermanager.go:642] "Started controller" controller="persistentvolumeclaim-protection-controller"
	I0318 13:11:01.220622    2404 command_runner.go:130] ! I0318 12:47:40.089701       1 pvc_protection_controller.go:102] "Starting PVC protection controller"
	I0318 13:11:01.220697    2404 command_runner.go:130] ! I0318 12:47:40.089793       1 shared_informer.go:311] Waiting for caches to sync for PVC protection
	I0318 13:11:01.220697    2404 command_runner.go:130] ! I0318 12:47:40.235860       1 controllermanager.go:642] "Started controller" controller="root-ca-certificate-publisher-controller"
	I0318 13:11:01.220697    2404 command_runner.go:130] ! I0318 12:47:40.235977       1 publisher.go:102] "Starting root CA cert publisher controller"
	I0318 13:11:01.220697    2404 command_runner.go:130] ! I0318 12:47:40.236063       1 shared_informer.go:311] Waiting for caches to sync for crt configmap
	I0318 13:11:01.220697    2404 command_runner.go:130] ! I0318 12:47:40.386545       1 controllermanager.go:642] "Started controller" controller="pod-garbage-collector-controller"
	I0318 13:11:01.220749    2404 command_runner.go:130] ! I0318 12:47:40.386692       1 gc_controller.go:101] "Starting GC controller"
	I0318 13:11:01.220749    2404 command_runner.go:130] ! I0318 12:47:40.386704       1 shared_informer.go:311] Waiting for caches to sync for GC
	I0318 13:11:01.220749    2404 command_runner.go:130] ! I0318 12:47:40.644175       1 controllermanager.go:642] "Started controller" controller="namespace-controller"
	I0318 13:11:01.220801    2404 command_runner.go:130] ! I0318 12:47:40.644284       1 namespace_controller.go:197] "Starting namespace controller"
	I0318 13:11:01.220801    2404 command_runner.go:130] ! I0318 12:47:40.644293       1 shared_informer.go:311] Waiting for caches to sync for namespace
	I0318 13:11:01.220801    2404 command_runner.go:130] ! I0318 12:47:40.784991       1 controllermanager.go:642] "Started controller" controller="statefulset-controller"
	I0318 13:11:01.220855    2404 command_runner.go:130] ! I0318 12:47:40.785464       1 stateful_set.go:161] "Starting stateful set controller"
	I0318 13:11:01.220855    2404 command_runner.go:130] ! I0318 12:47:40.785492       1 shared_informer.go:311] Waiting for caches to sync for stateful set
	I0318 13:11:01.220855    2404 command_runner.go:130] ! I0318 12:47:40.936785       1 controllermanager.go:642] "Started controller" controller="bootstrap-signer-controller"
	I0318 13:11:01.220855    2404 command_runner.go:130] ! I0318 12:47:40.939800       1 shared_informer.go:311] Waiting for caches to sync for bootstrap_signer
	I0318 13:11:01.220925    2404 command_runner.go:130] ! I0318 12:47:40.947184       1 shared_informer.go:311] Waiting for caches to sync for resource quota
	I0318 13:11:01.220925    2404 command_runner.go:130] ! I0318 12:47:40.968017       1 shared_informer.go:318] Caches are synced for service account
	I0318 13:11:01.220925    2404 command_runner.go:130] ! I0318 12:47:40.971773       1 shared_informer.go:311] Waiting for caches to sync for garbage collector
	I0318 13:11:01.220925    2404 command_runner.go:130] ! I0318 12:47:40.976691       1 shared_informer.go:318] Caches are synced for expand
	I0318 13:11:01.220925    2404 command_runner.go:130] ! I0318 12:47:40.986014       1 shared_informer.go:318] Caches are synced for endpoint
	I0318 13:11:01.220996    2404 command_runner.go:130] ! I0318 12:47:40.995675       1 shared_informer.go:318] Caches are synced for PVC protection
	I0318 13:11:01.220996    2404 command_runner.go:130] ! I0318 12:47:41.009015       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-894400\" does not exist"
	I0318 13:11:01.220996    2404 command_runner.go:130] ! I0318 12:47:41.012612       1 shared_informer.go:318] Caches are synced for persistent volume
	I0318 13:11:01.220996    2404 command_runner.go:130] ! I0318 12:47:41.016383       1 shared_informer.go:318] Caches are synced for daemon sets
	I0318 13:11:01.221057    2404 command_runner.go:130] ! I0318 12:47:41.025198       1 shared_informer.go:318] Caches are synced for TTL
	I0318 13:11:01.221057    2404 command_runner.go:130] ! I0318 12:47:41.025462       1 shared_informer.go:318] Caches are synced for PV protection
	I0318 13:11:01.221057    2404 command_runner.go:130] ! I0318 12:47:41.032086       1 shared_informer.go:318] Caches are synced for HPA
	I0318 13:11:01.221057    2404 command_runner.go:130] ! I0318 12:47:41.036463       1 shared_informer.go:318] Caches are synced for job
	I0318 13:11:01.221057    2404 command_runner.go:130] ! I0318 12:47:41.036622       1 shared_informer.go:318] Caches are synced for cronjob
	I0318 13:11:01.221057    2404 command_runner.go:130] ! I0318 12:47:41.036726       1 shared_informer.go:318] Caches are synced for crt configmap
	I0318 13:11:01.221116    2404 command_runner.go:130] ! I0318 12:47:41.037735       1 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring
	I0318 13:11:01.221116    2404 command_runner.go:130] ! I0318 12:47:41.037818       1 shared_informer.go:318] Caches are synced for TTL after finished
	I0318 13:11:01.221116    2404 command_runner.go:130] ! I0318 12:47:41.040360       1 shared_informer.go:318] Caches are synced for bootstrap_signer
	I0318 13:11:01.221116    2404 command_runner.go:130] ! I0318 12:47:41.041850       1 shared_informer.go:318] Caches are synced for endpoint_slice
	I0318 13:11:01.221173    2404 command_runner.go:130] ! I0318 12:47:41.045379       1 shared_informer.go:318] Caches are synced for namespace
	I0318 13:11:01.221173    2404 command_runner.go:130] ! I0318 12:47:41.051530       1 shared_informer.go:318] Caches are synced for deployment
	I0318 13:11:01.221173    2404 command_runner.go:130] ! I0318 12:47:41.053151       1 shared_informer.go:318] Caches are synced for ephemeral
	I0318 13:11:01.221173    2404 command_runner.go:130] ! I0318 12:47:41.063027       1 shared_informer.go:318] Caches are synced for ReplicaSet
	I0318 13:11:01.221227    2404 command_runner.go:130] ! I0318 12:47:41.084212       1 shared_informer.go:318] Caches are synced for taint
	I0318 13:11:01.221227    2404 command_runner.go:130] ! I0318 12:47:41.084612       1 node_lifecycle_controller.go:1225] "Initializing eviction metric for zone" zone=""
	I0318 13:11:01.221227    2404 command_runner.go:130] ! I0318 12:47:41.087983       1 taint_manager.go:205] "Starting NoExecuteTaintManager"
	I0318 13:11:01.221227    2404 command_runner.go:130] ! I0318 12:47:41.088464       1 taint_manager.go:210] "Sending events to api server"
	I0318 13:11:01.221227    2404 command_runner.go:130] ! I0318 12:47:41.089485       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-894400"
	I0318 13:11:01.221301    2404 command_runner.go:130] ! I0318 12:47:41.089526       1 node_lifecycle_controller.go:1029] "Controller detected that all Nodes are not-Ready. Entering master disruption mode"
	I0318 13:11:01.221301    2404 command_runner.go:130] ! I0318 12:47:41.089552       1 shared_informer.go:318] Caches are synced for attach detach
	I0318 13:11:01.221301    2404 command_runner.go:130] ! I0318 12:47:41.089942       1 shared_informer.go:318] Caches are synced for GC
	I0318 13:11:01.221301    2404 command_runner.go:130] ! I0318 12:47:41.090031       1 event.go:307] "Event occurred" object="multinode-894400" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-894400 event: Registered Node multinode-894400 in Controller"
	I0318 13:11:01.221354    2404 command_runner.go:130] ! I0318 12:47:41.090167       1 shared_informer.go:318] Caches are synced for ClusterRoleAggregator
	I0318 13:11:01.221354    2404 command_runner.go:130] ! I0318 12:47:41.090848       1 shared_informer.go:318] Caches are synced for stateful set
	I0318 13:11:01.221354    2404 command_runner.go:130] ! I0318 12:47:41.092093       1 shared_informer.go:318] Caches are synced for ReplicationController
	I0318 13:11:01.221354    2404 command_runner.go:130] ! I0318 12:47:41.092684       1 shared_informer.go:318] Caches are synced for node
	I0318 13:11:01.221354    2404 command_runner.go:130] ! I0318 12:47:41.093255       1 range_allocator.go:174] "Sending events to api server"
	I0318 13:11:01.221419    2404 command_runner.go:130] ! I0318 12:47:41.093537       1 range_allocator.go:178] "Starting range CIDR allocator"
	I0318 13:11:01.221419    2404 command_runner.go:130] ! I0318 12:47:41.093851       1 shared_informer.go:311] Waiting for caches to sync for cidrallocator
	I0318 13:11:01.221419    2404 command_runner.go:130] ! I0318 12:47:41.093958       1 shared_informer.go:318] Caches are synced for cidrallocator
	I0318 13:11:01.221419    2404 command_runner.go:130] ! I0318 12:47:41.119414       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-894400" podCIDRs=["10.244.0.0/24"]
	I0318 13:11:01.221419    2404 command_runner.go:130] ! I0318 12:47:41.148134       1 shared_informer.go:318] Caches are synced for resource quota
	I0318 13:11:01.221480    2404 command_runner.go:130] ! I0318 12:47:41.183853       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-serving
	I0318 13:11:01.221480    2404 command_runner.go:130] ! I0318 12:47:41.184949       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-client
	I0318 13:11:01.221480    2404 command_runner.go:130] ! I0318 12:47:41.186043       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0318 13:11:01.221480    2404 command_runner.go:130] ! I0318 12:47:41.187192       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-legacy-unknown
	I0318 13:11:01.221480    2404 command_runner.go:130] ! I0318 12:47:41.187229       1 shared_informer.go:318] Caches are synced for certificate-csrapproving
	I0318 13:11:01.221564    2404 command_runner.go:130] ! I0318 12:47:41.192066       1 shared_informer.go:318] Caches are synced for resource quota
	I0318 13:11:01.221564    2404 command_runner.go:130] ! I0318 12:47:41.233781       1 shared_informer.go:318] Caches are synced for disruption
	I0318 13:11:01.221564    2404 command_runner.go:130] ! I0318 12:47:41.572914       1 shared_informer.go:318] Caches are synced for garbage collector
	I0318 13:11:01.221564    2404 command_runner.go:130] ! I0318 12:47:41.612936       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-mc5tv"
	I0318 13:11:01.221623    2404 command_runner.go:130] ! I0318 12:47:41.615780       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-hhsxh"
	I0318 13:11:01.221623    2404 command_runner.go:130] ! I0318 12:47:41.625871       1 shared_informer.go:318] Caches are synced for garbage collector
	I0318 13:11:01.221623    2404 command_runner.go:130] ! I0318 12:47:41.626335       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0318 13:11:01.221680    2404 command_runner.go:130] ! I0318 12:47:41.893141       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5dd5756b68 to 2"
	I0318 13:11:01.221680    2404 command_runner.go:130] ! I0318 12:47:42.112244       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-vl6jr"
	I0318 13:11:01.221680    2404 command_runner.go:130] ! I0318 12:47:42.148022       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-456tm"
	I0318 13:11:01.221741    2404 command_runner.go:130] ! I0318 12:47:42.181940       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="289.6659ms"
	I0318 13:11:01.221741    2404 command_runner.go:130] ! I0318 12:47:42.245823       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="63.840303ms"
	I0318 13:11:01.221741    2404 command_runner.go:130] ! I0318 12:47:42.246151       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="107.996µs"
	I0318 13:11:01.221741    2404 command_runner.go:130] ! I0318 12:47:42.470958       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I0318 13:11:01.221797    2404 command_runner.go:130] ! I0318 12:47:42.530265       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-vl6jr"
	I0318 13:11:01.221797    2404 command_runner.go:130] ! I0318 12:47:42.551794       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="82.491503ms"
	I0318 13:11:01.221797    2404 command_runner.go:130] ! I0318 12:47:42.587026       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="35.184179ms"
	I0318 13:11:01.221857    2404 command_runner.go:130] ! I0318 12:47:42.587126       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="64.497µs"
	I0318 13:11:01.221857    2404 command_runner.go:130] ! I0318 12:47:52.958102       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="163.297µs"
	I0318 13:11:01.221857    2404 command_runner.go:130] ! I0318 12:47:52.991751       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="32.399µs"
	I0318 13:11:01.221857    2404 command_runner.go:130] ! I0318 12:47:54.194916       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="59.289µs"
	I0318 13:11:01.221933    2404 command_runner.go:130] ! I0318 12:47:55.238088       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="27.595936ms"
	I0318 13:11:01.221933    2404 command_runner.go:130] ! I0318 12:47:55.238222       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="45.592µs"
	I0318 13:11:01.221933    2404 command_runner.go:130] ! I0318 12:47:56.090728       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I0318 13:11:01.221988    2404 command_runner.go:130] ! I0318 12:50:34.419485       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-894400-m02\" does not exist"
	I0318 13:11:01.221988    2404 command_runner.go:130] ! I0318 12:50:34.437576       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-894400-m02" podCIDRs=["10.244.1.0/24"]
	I0318 13:11:01.221988    2404 command_runner.go:130] ! I0318 12:50:34.454919       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-8bdmn"
	I0318 13:11:01.221988    2404 command_runner.go:130] ! I0318 12:50:34.479103       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-k5lpg"
	I0318 13:11:01.222056    2404 command_runner.go:130] ! I0318 12:50:36.121925       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-894400-m02"
	I0318 13:11:01.222056    2404 command_runner.go:130] ! I0318 12:50:36.122368       1 event.go:307] "Event occurred" object="multinode-894400-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-894400-m02 event: Registered Node multinode-894400-m02 in Controller"
	I0318 13:11:01.222110    2404 command_runner.go:130] ! I0318 12:50:52.539955       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-894400-m02"
	I0318 13:11:01.222110    2404 command_runner.go:130] ! I0318 12:51:17.964827       1 event.go:307] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-5b5d89c9d6 to 2"
	I0318 13:11:01.222110    2404 command_runner.go:130] ! I0318 12:51:17.986964       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5b5d89c9d6-8btgf"
	I0318 13:11:01.222110    2404 command_runner.go:130] ! I0318 12:51:18.004592       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5b5d89c9d6-c2997"
	I0318 13:11:01.222110    2404 command_runner.go:130] ! I0318 12:51:18.026894       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="62.79508ms"
	I0318 13:11:01.222193    2404 command_runner.go:130] ! I0318 12:51:18.045074       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="17.513513ms"
	I0318 13:11:01.222193    2404 command_runner.go:130] ! I0318 12:51:18.046404       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="36.101µs"
	I0318 13:11:01.222219    2404 command_runner.go:130] ! I0318 12:51:18.054157       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="337.914µs"
	I0318 13:11:01.222219    2404 command_runner.go:130] ! I0318 12:51:18.060516       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="26.701µs"
	I0318 13:11:01.222260    2404 command_runner.go:130] ! I0318 12:51:20.804047       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="10.125602ms"
	I0318 13:11:01.222260    2404 command_runner.go:130] ! I0318 12:51:20.804333       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="159.502µs"
	I0318 13:11:01.222260    2404 command_runner.go:130] ! I0318 12:51:21.064706       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="11.788417ms"
	I0318 13:11:01.222260    2404 command_runner.go:130] ! I0318 12:51:21.065229       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="82.401µs"
	I0318 13:11:01.222317    2404 command_runner.go:130] ! I0318 12:55:05.793350       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-894400-m03\" does not exist"
	I0318 13:11:01.222317    2404 command_runner.go:130] ! I0318 12:55:05.797095       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-894400-m02"
	I0318 13:11:01.222317    2404 command_runner.go:130] ! I0318 12:55:05.823205       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-zv9tv"
	I0318 13:11:01.222371    2404 command_runner.go:130] ! I0318 12:55:05.835101       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-894400-m03" podCIDRs=["10.244.2.0/24"]
	I0318 13:11:01.222371    2404 command_runner.go:130] ! I0318 12:55:05.835149       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-745w9"
	I0318 13:11:01.222371    2404 command_runner.go:130] ! I0318 12:55:06.188986       1 event.go:307] "Event occurred" object="multinode-894400-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-894400-m03 event: Registered Node multinode-894400-m03 in Controller"
	I0318 13:11:01.222425    2404 command_runner.go:130] ! I0318 12:55:06.188988       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-894400-m03"
	I0318 13:11:01.222425    2404 command_runner.go:130] ! I0318 12:55:23.671742       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-894400-m02"
	I0318 13:11:01.223029    2404 command_runner.go:130] ! I0318 13:02:46.325539       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-894400-m02"
	I0318 13:11:01.223029    2404 command_runner.go:130] ! I0318 13:02:46.325935       1 event.go:307] "Event occurred" object="multinode-894400-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-894400-m03 status is now: NodeNotReady"
	I0318 13:11:01.223029    2404 command_runner.go:130] ! I0318 13:02:46.344510       1 event.go:307] "Event occurred" object="kube-system/kindnet-zv9tv" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0318 13:11:01.223625    2404 command_runner.go:130] ! I0318 13:02:46.368811       1 event.go:307] "Event occurred" object="kube-system/kube-proxy-745w9" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0318 13:11:01.223625    2404 command_runner.go:130] ! I0318 13:05:19.649225       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-894400-m02"
	I0318 13:11:01.223625    2404 command_runner.go:130] ! I0318 13:05:21.403124       1 event.go:307] "Event occurred" object="multinode-894400-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RemovingNode" message="Node multinode-894400-m03 event: Removing Node multinode-894400-m03 from Controller"
	I0318 13:11:01.223698    2404 command_runner.go:130] ! I0318 13:05:25.832056       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-894400-m03\" does not exist"
	I0318 13:11:01.223698    2404 command_runner.go:130] ! I0318 13:05:25.832348       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-894400-m02"
	I0318 13:11:01.223772    2404 command_runner.go:130] ! I0318 13:05:25.841443       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-894400-m03" podCIDRs=["10.244.3.0/24"]
	I0318 13:11:01.223772    2404 command_runner.go:130] ! I0318 13:05:26.404299       1 event.go:307] "Event occurred" object="multinode-894400-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-894400-m03 event: Registered Node multinode-894400-m03 in Controller"
	I0318 13:11:01.223772    2404 command_runner.go:130] ! I0318 13:05:34.080951       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-894400-m02"
	I0318 13:11:01.223899    2404 command_runner.go:130] ! I0318 13:07:11.961036       1 event.go:307] "Event occurred" object="multinode-894400-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-894400-m03 status is now: NodeNotReady"
	I0318 13:11:01.223937    2404 command_runner.go:130] ! I0318 13:07:11.961077       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-894400-m02"
	I0318 13:11:01.223987    2404 command_runner.go:130] ! I0318 13:07:12.051526       1 event.go:307] "Event occurred" object="kube-system/kindnet-zv9tv" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0318 13:11:01.223987    2404 command_runner.go:130] ! I0318 13:07:12.098168       1 event.go:307] "Event occurred" object="kube-system/kube-proxy-745w9" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0318 13:11:01.241587    2404 logs.go:123] Gathering logs for kindnet [c8e5ec25e910] ...
	I0318 13:11:01.241587    2404 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8e5ec25e910"
	I0318 13:11:01.267235    2404 command_runner.go:130] ! I0318 13:09:50.858529       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0318 13:11:01.267235    2404 command_runner.go:130] ! I0318 13:09:50.859271       1 main.go:107] hostIP = 172.30.130.156
	I0318 13:11:01.267235    2404 command_runner.go:130] ! podIP = 172.30.130.156
	I0318 13:11:01.267235    2404 command_runner.go:130] ! I0318 13:09:50.860380       1 main.go:116] setting mtu 1500 for CNI 
	I0318 13:11:01.267235    2404 command_runner.go:130] ! I0318 13:09:50.930132       1 main.go:146] kindnetd IP family: "ipv4"
	I0318 13:11:01.267235    2404 command_runner.go:130] ! I0318 13:09:50.933463       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0318 13:11:01.268209    2404 command_runner.go:130] ! I0318 13:10:21.283853       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: i/o timeout
	I0318 13:11:01.268209    2404 command_runner.go:130] ! I0318 13:10:21.335833       1 main.go:223] Handling node with IPs: map[172.30.130.156:{}]
	I0318 13:11:01.268209    2404 command_runner.go:130] ! I0318 13:10:21.335942       1 main.go:227] handling current node
	I0318 13:11:01.268209    2404 command_runner.go:130] ! I0318 13:10:21.336264       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:01.268265    2404 command_runner.go:130] ! I0318 13:10:21.336361       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:01.268293    2404 command_runner.go:130] ! I0318 13:10:21.336527       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 172.30.140.66 Flags: [] Table: 0} 
	I0318 13:11:01.268293    2404 command_runner.go:130] ! I0318 13:10:21.336670       1 main.go:223] Handling node with IPs: map[172.30.137.140:{}]
	I0318 13:11:01.268293    2404 command_runner.go:130] ! I0318 13:10:21.336680       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.3.0/24] 
	I0318 13:11:01.268335    2404 command_runner.go:130] ! I0318 13:10:21.336727       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 172.30.137.140 Flags: [] Table: 0} 
	I0318 13:11:01.268335    2404 command_runner.go:130] ! I0318 13:10:31.343996       1 main.go:223] Handling node with IPs: map[172.30.130.156:{}]
	I0318 13:11:01.268335    2404 command_runner.go:130] ! I0318 13:10:31.344324       1 main.go:227] handling current node
	I0318 13:11:01.268335    2404 command_runner.go:130] ! I0318 13:10:31.344341       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:01.268403    2404 command_runner.go:130] ! I0318 13:10:31.344682       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:01.268403    2404 command_runner.go:130] ! I0318 13:10:31.345062       1 main.go:223] Handling node with IPs: map[172.30.137.140:{}]
	I0318 13:11:01.268403    2404 command_runner.go:130] ! I0318 13:10:31.345087       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.3.0/24] 
	I0318 13:11:01.268403    2404 command_runner.go:130] ! I0318 13:10:41.357494       1 main.go:223] Handling node with IPs: map[172.30.130.156:{}]
	I0318 13:11:01.268454    2404 command_runner.go:130] ! I0318 13:10:41.357586       1 main.go:227] handling current node
	I0318 13:11:01.268495    2404 command_runner.go:130] ! I0318 13:10:41.357599       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:01.268538    2404 command_runner.go:130] ! I0318 13:10:41.357606       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:01.268538    2404 command_runner.go:130] ! I0318 13:10:41.357708       1 main.go:223] Handling node with IPs: map[172.30.137.140:{}]
	I0318 13:11:01.268538    2404 command_runner.go:130] ! I0318 13:10:41.357932       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.3.0/24] 
	I0318 13:11:01.268538    2404 command_runner.go:130] ! I0318 13:10:51.367560       1 main.go:223] Handling node with IPs: map[172.30.130.156:{}]
	I0318 13:11:01.268595    2404 command_runner.go:130] ! I0318 13:10:51.367661       1 main.go:227] handling current node
	I0318 13:11:01.268595    2404 command_runner.go:130] ! I0318 13:10:51.367675       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:01.268595    2404 command_runner.go:130] ! I0318 13:10:51.367684       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:01.268595    2404 command_runner.go:130] ! I0318 13:10:51.367956       1 main.go:223] Handling node with IPs: map[172.30.137.140:{}]
	I0318 13:11:01.268646    2404 command_runner.go:130] ! I0318 13:10:51.368281       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.3.0/24] 
	I0318 13:11:01.271026    2404 logs.go:123] Gathering logs for Docker ...
	I0318 13:11:01.271026    2404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 13:11:01.302972    2404 command_runner.go:130] > Mar 18 13:08:19 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0318 13:11:01.303102    2404 command_runner.go:130] > Mar 18 13:08:19 minikube cri-dockerd[222]: time="2024-03-18T13:08:19Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0318 13:11:01.303241    2404 command_runner.go:130] > Mar 18 13:08:19 minikube cri-dockerd[222]: time="2024-03-18T13:08:19Z" level=info msg="Start docker client with request timeout 0s"
	I0318 13:11:01.303429    2404 command_runner.go:130] > Mar 18 13:08:19 minikube cri-dockerd[222]: time="2024-03-18T13:08:19Z" level=fatal msg="failed to get docker version: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0318 13:11:01.303429    2404 command_runner.go:130] > Mar 18 13:08:19 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0318 13:11:01.303429    2404 command_runner.go:130] > Mar 18 13:08:19 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0318 13:11:01.303429    2404 command_runner.go:130] > Mar 18 13:08:19 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0318 13:11:01.303429    2404 command_runner.go:130] > Mar 18 13:08:22 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 1.
	I0318 13:11:01.303429    2404 command_runner.go:130] > Mar 18 13:08:22 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0318 13:11:01.303429    2404 command_runner.go:130] > Mar 18 13:08:22 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0318 13:11:01.303429    2404 command_runner.go:130] > Mar 18 13:08:22 minikube cri-dockerd[412]: time="2024-03-18T13:08:22Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0318 13:11:01.303429    2404 command_runner.go:130] > Mar 18 13:08:22 minikube cri-dockerd[412]: time="2024-03-18T13:08:22Z" level=info msg="Start docker client with request timeout 0s"
	I0318 13:11:01.303429    2404 command_runner.go:130] > Mar 18 13:08:22 minikube cri-dockerd[412]: time="2024-03-18T13:08:22Z" level=fatal msg="failed to get docker version: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0318 13:11:01.303429    2404 command_runner.go:130] > Mar 18 13:08:22 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0318 13:11:01.303429    2404 command_runner.go:130] > Mar 18 13:08:22 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0318 13:11:01.303429    2404 command_runner.go:130] > Mar 18 13:08:22 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0318 13:11:01.303429    2404 command_runner.go:130] > Mar 18 13:08:24 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 2.
	I0318 13:11:01.303429    2404 command_runner.go:130] > Mar 18 13:08:24 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0318 13:11:01.303429    2404 command_runner.go:130] > Mar 18 13:08:24 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0318 13:11:01.303429    2404 command_runner.go:130] > Mar 18 13:08:24 minikube cri-dockerd[434]: time="2024-03-18T13:08:24Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0318 13:11:01.303429    2404 command_runner.go:130] > Mar 18 13:08:24 minikube cri-dockerd[434]: time="2024-03-18T13:08:24Z" level=info msg="Start docker client with request timeout 0s"
	I0318 13:11:01.303429    2404 command_runner.go:130] > Mar 18 13:08:24 minikube cri-dockerd[434]: time="2024-03-18T13:08:24Z" level=fatal msg="failed to get docker version: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0318 13:11:01.303429    2404 command_runner.go:130] > Mar 18 13:08:24 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0318 13:11:01.303429    2404 command_runner.go:130] > Mar 18 13:08:24 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0318 13:11:01.303429    2404 command_runner.go:130] > Mar 18 13:08:24 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0318 13:11:01.303429    2404 command_runner.go:130] > Mar 18 13:08:26 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 3.
	I0318 13:11:01.303429    2404 command_runner.go:130] > Mar 18 13:08:26 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0318 13:11:01.303429    2404 command_runner.go:130] > Mar 18 13:08:26 minikube systemd[1]: cri-docker.service: Start request repeated too quickly.
	I0318 13:11:01.304003    2404 command_runner.go:130] > Mar 18 13:08:26 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0318 13:11:01.304205    2404 command_runner.go:130] > Mar 18 13:08:26 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0318 13:11:01.304205    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 systemd[1]: Starting Docker Application Container Engine...
	I0318 13:11:01.304205    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[662]: time="2024-03-18T13:09:08.926008208Z" level=info msg="Starting up"
	I0318 13:11:01.304205    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[662]: time="2024-03-18T13:09:08.927042019Z" level=info msg="containerd not running, starting managed containerd"
	I0318 13:11:01.304205    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[662]: time="2024-03-18T13:09:08.928263831Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=668
	I0318 13:11:01.304205    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.958180831Z" level=info msg="starting containerd" revision=dcf2847247e18caba8dce86522029642f60fe96b version=v1.7.14
	I0318 13:11:01.304205    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.981644866Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0318 13:11:01.304205    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.981729667Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0318 13:11:01.304205    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.981890169Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0318 13:11:01.304205    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.982007470Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0318 13:11:01.304205    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.982683977Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0318 13:11:01.304205    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.982866878Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0318 13:11:01.304205    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.983040880Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0318 13:11:01.304205    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.983180882Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0318 13:11:01.304205    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.983201082Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0318 13:11:01.304205    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.983210682Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0318 13:11:01.304205    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.983772288Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0318 13:11:01.304205    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.984603896Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0318 13:11:01.304205    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.987157222Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0318 13:11:01.304205    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.987245222Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0318 13:11:01.304820    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.987380024Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0318 13:11:01.304820    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.987459025Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0318 13:11:01.304820    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.988076231Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0318 13:11:01.304820    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.988215332Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0318 13:11:01.304820    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.988231932Z" level=info msg="metadata content store policy set" policy=shared
	I0318 13:11:01.304820    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.994386894Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0318 13:11:01.304820    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.994536096Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0318 13:11:01.304820    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.994574296Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0318 13:11:01.304820    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.994587696Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0318 13:11:01.304820    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.994605296Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0318 13:11:01.304820    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.994669597Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0318 13:11:01.304820    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.995239203Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0318 13:11:01.304820    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.995378304Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0318 13:11:01.304820    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.995441205Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0318 13:11:01.304820    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.995564406Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0318 13:11:01.304820    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.995751508Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0318 13:11:01.304820    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.995819808Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0318 13:11:01.304820    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.995841009Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0318 13:11:01.304820    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.995857509Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0318 13:11:01.304820    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.995870509Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0318 13:11:01.304820    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.995903509Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0318 13:11:01.304820    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.995925809Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0318 13:11:01.304820    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.995942710Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0318 13:11:01.304820    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.995963610Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0318 13:11:01.304820    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.995980410Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0318 13:11:01.304820    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.996091811Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0318 13:11:01.305379    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.996121511Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0318 13:11:01.305406    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.996134612Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0318 13:11:01.305406    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.996151212Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0318 13:11:01.305406    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.996165012Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0318 13:11:01.305406    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.996179412Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0318 13:11:01.305406    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.996194912Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0318 13:11:01.305406    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.996291913Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0318 13:11:01.305406    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.996404914Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0318 13:11:01.305406    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.996427114Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0318 13:11:01.305406    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.996445915Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0318 13:11:01.305406    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.996468515Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0318 13:11:01.305406    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.996497915Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0318 13:11:01.305406    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.996538416Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0318 13:11:01.305406    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.996560016Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0318 13:11:01.305406    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.997036721Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0318 13:11:01.305406    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.997287923Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	I0318 13:11:01.305406    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.997398924Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0318 13:11:01.305406    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.997518125Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	I0318 13:11:01.305406    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.998045931Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0318 13:11:01.305406    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.998612736Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0318 13:11:01.305406    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.998643637Z" level=info msg="NRI interface is disabled by configuration."
	I0318 13:11:01.306074    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.999395544Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0318 13:11:01.306156    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.999606346Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0318 13:11:01.306221    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.999683147Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0318 13:11:01.306221    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.999765648Z" level=info msg="containerd successfully booted in 0.044672s"
	I0318 13:11:01.306221    2404 command_runner.go:130] > Mar 18 13:09:09 multinode-894400 dockerd[662]: time="2024-03-18T13:09:09.982989696Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0318 13:11:01.306221    2404 command_runner.go:130] > Mar 18 13:09:10 multinode-894400 dockerd[662]: time="2024-03-18T13:09:10.138351976Z" level=info msg="Loading containers: start."
	I0318 13:11:01.306221    2404 command_runner.go:130] > Mar 18 13:09:10 multinode-894400 dockerd[662]: time="2024-03-18T13:09:10.545129368Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0318 13:11:01.306221    2404 command_runner.go:130] > Mar 18 13:09:10 multinode-894400 dockerd[662]: time="2024-03-18T13:09:10.626119356Z" level=info msg="Loading containers: done."
	I0318 13:11:01.306221    2404 command_runner.go:130] > Mar 18 13:09:10 multinode-894400 dockerd[662]: time="2024-03-18T13:09:10.653541890Z" level=info msg="Docker daemon" commit=061aa95 containerd-snapshotter=false storage-driver=overlay2 version=25.0.4
	I0318 13:11:01.306221    2404 command_runner.go:130] > Mar 18 13:09:10 multinode-894400 dockerd[662]: time="2024-03-18T13:09:10.654242899Z" level=info msg="Daemon has completed initialization"
	I0318 13:11:01.306221    2404 command_runner.go:130] > Mar 18 13:09:10 multinode-894400 dockerd[662]: time="2024-03-18T13:09:10.702026381Z" level=info msg="API listen on /var/run/docker.sock"
	I0318 13:11:01.306221    2404 command_runner.go:130] > Mar 18 13:09:10 multinode-894400 systemd[1]: Started Docker Application Container Engine.
	I0318 13:11:01.306221    2404 command_runner.go:130] > Mar 18 13:09:10 multinode-894400 dockerd[662]: time="2024-03-18T13:09:10.704980317Z" level=info msg="API listen on [::]:2376"
	I0318 13:11:01.306221    2404 command_runner.go:130] > Mar 18 13:09:35 multinode-894400 systemd[1]: Stopping Docker Application Container Engine...
	I0318 13:11:01.306221    2404 command_runner.go:130] > Mar 18 13:09:35 multinode-894400 dockerd[662]: time="2024-03-18T13:09:35.118112316Z" level=info msg="Processing signal 'terminated'"
	I0318 13:11:01.306221    2404 command_runner.go:130] > Mar 18 13:09:35 multinode-894400 dockerd[662]: time="2024-03-18T13:09:35.120561724Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0318 13:11:01.306221    2404 command_runner.go:130] > Mar 18 13:09:35 multinode-894400 dockerd[662]: time="2024-03-18T13:09:35.120708425Z" level=info msg="Daemon shutdown complete"
	I0318 13:11:01.306221    2404 command_runner.go:130] > Mar 18 13:09:35 multinode-894400 dockerd[662]: time="2024-03-18T13:09:35.120817525Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0318 13:11:01.306221    2404 command_runner.go:130] > Mar 18 13:09:35 multinode-894400 dockerd[662]: time="2024-03-18T13:09:35.120965826Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0318 13:11:01.306221    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 systemd[1]: docker.service: Deactivated successfully.
	I0318 13:11:01.306221    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 systemd[1]: Stopped Docker Application Container Engine.
	I0318 13:11:01.306221    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 systemd[1]: Starting Docker Application Container Engine...
	I0318 13:11:01.306221    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1052]: time="2024-03-18T13:09:36.188961030Z" level=info msg="Starting up"
	I0318 13:11:01.306221    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1052]: time="2024-03-18T13:09:36.190214934Z" level=info msg="containerd not running, starting managed containerd"
	I0318 13:11:01.306221    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1052]: time="2024-03-18T13:09:36.191301438Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1058
	I0318 13:11:01.306221    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.220111635Z" level=info msg="starting containerd" revision=dcf2847247e18caba8dce86522029642f60fe96b version=v1.7.14
	I0318 13:11:01.306221    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.244480717Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0318 13:11:01.306221    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.244510717Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0318 13:11:01.306221    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.244539917Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0318 13:11:01.306776    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.244552117Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0318 13:11:01.306776    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.244588817Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0318 13:11:01.306776    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.244601217Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0318 13:11:01.306914    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.244707818Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0318 13:11:01.306942    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.244791318Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0318 13:11:01.306942    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.244809418Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0318 13:11:01.306942    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.244818018Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0318 13:11:01.306942    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.244838218Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0318 13:11:01.306942    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.244975219Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0318 13:11:01.306942    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.248195830Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0318 13:11:01.306942    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.248302930Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0318 13:11:01.306942    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.248446530Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0318 13:11:01.306942    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.248548631Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0318 13:11:01.306942    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.248576331Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0318 13:11:01.306942    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.248593831Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0318 13:11:01.306942    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.248604331Z" level=info msg="metadata content store policy set" policy=shared
	I0318 13:11:01.306942    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.249888435Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0318 13:11:01.306942    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.249971436Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0318 13:11:01.306942    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.250624738Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0318 13:11:01.306942    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.250745538Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0318 13:11:01.306942    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.250859739Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0318 13:11:01.306942    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.251093339Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0318 13:11:01.306942    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.252590644Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0318 13:11:01.306942    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.252685145Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0318 13:11:01.306942    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.252703545Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0318 13:11:01.306942    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.252722945Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0318 13:11:01.306942    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.252736845Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0318 13:11:01.306942    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.252749745Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0318 13:11:01.306942    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.252793045Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0318 13:11:01.306942    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.252998846Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0318 13:11:01.307496    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.253020946Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0318 13:11:01.307496    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.253065546Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0318 13:11:01.307496    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.253080846Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0318 13:11:01.307669    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.253090746Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0318 13:11:01.307669    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.253177146Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0318 13:11:01.307669    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.253201547Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0318 13:11:01.307669    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.253215147Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0318 13:11:01.307669    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.253229847Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0318 13:11:01.307669    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.253243047Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0318 13:11:01.307669    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.253257847Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0318 13:11:01.307669    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.253270347Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0318 13:11:01.307669    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.253284147Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0318 13:11:01.307669    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.253297547Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0318 13:11:01.307669    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.253313047Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0318 13:11:01.307669    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.253331047Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0318 13:11:01.307669    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.253344647Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0318 13:11:01.307669    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.253357947Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0318 13:11:01.307669    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.253374747Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0318 13:11:01.307669    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.253395147Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0318 13:11:01.307669    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.253407847Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0318 13:11:01.307669    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.253420947Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0318 13:11:01.307669    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.253503448Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0318 13:11:01.307669    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.253519848Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	I0318 13:11:01.307669    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.253532848Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0318 13:11:01.307669    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.253542748Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	I0318 13:11:01.307669    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.253613548Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0318 13:11:01.307669    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.253652648Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0318 13:11:01.307669    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.253668048Z" level=info msg="NRI interface is disabled by configuration."
	I0318 13:11:01.307669    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.254026949Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0318 13:11:01.308262    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.254474051Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0318 13:11:01.308262    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.254684152Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0318 13:11:01.308262    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.254775452Z" level=info msg="containerd successfully booted in 0.035926s"
	I0318 13:11:01.308262    2404 command_runner.go:130] > Mar 18 13:09:37 multinode-894400 dockerd[1052]: time="2024-03-18T13:09:37.234846559Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0318 13:11:01.308262    2404 command_runner.go:130] > Mar 18 13:09:37 multinode-894400 dockerd[1052]: time="2024-03-18T13:09:37.265734263Z" level=info msg="Loading containers: start."
	I0318 13:11:01.308262    2404 command_runner.go:130] > Mar 18 13:09:37 multinode-894400 dockerd[1052]: time="2024-03-18T13:09:37.543045299Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0318 13:11:01.308262    2404 command_runner.go:130] > Mar 18 13:09:37 multinode-894400 dockerd[1052]: time="2024-03-18T13:09:37.620368360Z" level=info msg="Loading containers: done."
	I0318 13:11:01.308262    2404 command_runner.go:130] > Mar 18 13:09:37 multinode-894400 dockerd[1052]: time="2024-03-18T13:09:37.642056833Z" level=info msg="Docker daemon" commit=061aa95 containerd-snapshotter=false storage-driver=overlay2 version=25.0.4
	I0318 13:11:01.308262    2404 command_runner.go:130] > Mar 18 13:09:37 multinode-894400 dockerd[1052]: time="2024-03-18T13:09:37.642227734Z" level=info msg="Daemon has completed initialization"
	I0318 13:11:01.308262    2404 command_runner.go:130] > Mar 18 13:09:37 multinode-894400 dockerd[1052]: time="2024-03-18T13:09:37.686175082Z" level=info msg="API listen on /var/run/docker.sock"
	I0318 13:11:01.308262    2404 command_runner.go:130] > Mar 18 13:09:37 multinode-894400 systemd[1]: Started Docker Application Container Engine.
	I0318 13:11:01.308262    2404 command_runner.go:130] > Mar 18 13:09:37 multinode-894400 dockerd[1052]: time="2024-03-18T13:09:37.687135485Z" level=info msg="API listen on [::]:2376"
	I0318 13:11:01.308262    2404 command_runner.go:130] > Mar 18 13:09:38 multinode-894400 systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0318 13:11:01.308262    2404 command_runner.go:130] > Mar 18 13:09:38 multinode-894400 cri-dockerd[1278]: time="2024-03-18T13:09:38Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0318 13:11:01.308262    2404 command_runner.go:130] > Mar 18 13:09:38 multinode-894400 cri-dockerd[1278]: time="2024-03-18T13:09:38Z" level=info msg="Start docker client with request timeout 0s"
	I0318 13:11:01.308262    2404 command_runner.go:130] > Mar 18 13:09:38 multinode-894400 cri-dockerd[1278]: time="2024-03-18T13:09:38Z" level=info msg="Hairpin mode is set to hairpin-veth"
	I0318 13:11:01.308262    2404 command_runner.go:130] > Mar 18 13:09:38 multinode-894400 cri-dockerd[1278]: time="2024-03-18T13:09:38Z" level=info msg="Loaded network plugin cni"
	I0318 13:11:01.308262    2404 command_runner.go:130] > Mar 18 13:09:38 multinode-894400 cri-dockerd[1278]: time="2024-03-18T13:09:38Z" level=info msg="Docker cri networking managed by network plugin cni"
	I0318 13:11:01.308262    2404 command_runner.go:130] > Mar 18 13:09:38 multinode-894400 cri-dockerd[1278]: time="2024-03-18T13:09:38Z" level=info msg="Docker Info: &{ID:5695bce5-a75b-48a7-87b1-d9b6b787473a Containers:18 ContainersRunning:0 ContainersPaused:0 ContainersStopped:18 Images:10 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:[] Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:[] Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6tables:true Debug:false NFd:26 OomKillDisable:false NGoroutines:52 SystemTime:2024-03-18T13:09:38.671342607Z LoggingDriver:json-file CgroupDriver:cgroupfs CgroupVersion:2 NEventsListener:0 KernelVersion:5.10.207 OperatingSystem:Buildroot 2023.02.9 OSVersion:2023.02.9 OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:0xc00034fe30 NCPU:2 MemTotal:2216210432 GenericResources:[] DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:multinode-894400 Labels:[provider=hyperv] ExperimentalBuild:false ServerVersion:25.0.4 ClusterStore: ClusterAdvertise: Runtimes:map[io.containerd.runc.v2:{Path:runc Args:[] Shim:<nil>} runc:{Path:runc Args:[] Shim:<nil>}] DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:[] Nodes:0 Managers:0 Cluster:<nil> Warnings:[]} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dcf2847247e18caba8dce86522029642f60fe96b Expected:dcf2847247e18caba8dce86522029642f60fe96b} RuncCommit:{ID:51d5e94601ceffbbd85688df1c928ecccbfa4685 Expected:51d5e94601ceffbbd85688df1c928ecccbfa4685} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=builtin name=cgroupns] ProductLicense:Community Engine DefaultAddressPools:[] Warnings:[]}"
	I0318 13:11:01.308262    2404 command_runner.go:130] > Mar 18 13:09:38 multinode-894400 cri-dockerd[1278]: time="2024-03-18T13:09:38Z" level=info msg="Setting cgroupDriver cgroupfs"
	I0318 13:11:01.308262    2404 command_runner.go:130] > Mar 18 13:09:38 multinode-894400 cri-dockerd[1278]: time="2024-03-18T13:09:38Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	I0318 13:11:01.308262    2404 command_runner.go:130] > Mar 18 13:09:38 multinode-894400 cri-dockerd[1278]: time="2024-03-18T13:09:38Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	I0318 13:11:01.308262    2404 command_runner.go:130] > Mar 18 13:09:38 multinode-894400 cri-dockerd[1278]: time="2024-03-18T13:09:38Z" level=info msg="Start cri-dockerd grpc backend"
	I0318 13:11:01.308816    2404 command_runner.go:130] > Mar 18 13:09:38 multinode-894400 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	I0318 13:11:01.308816    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 cri-dockerd[1278]: time="2024-03-18T13:09:43Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-5dd5756b68-456tm_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"d001e299e996b74f090c73f4e98605ef0d323a96826d23424e724ca8e7fe466a\""
	I0318 13:11:01.308816    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 cri-dockerd[1278]: time="2024-03-18T13:09:43Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"busybox-5b5d89c9d6-c2997_default\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"a23c1189be7c39f22c8a59ba41b198519894c9783ecad8dcda79fb8a76f05254\""
	I0318 13:11:01.308816    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:43.791205184Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 13:11:01.308816    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:43.791356085Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 13:11:01.308816    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:43.791396985Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:11:01.308816    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:43.791577685Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:11:01.308816    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:43.838312843Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 13:11:01.308816    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:43.838494344Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 13:11:01.308816    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:43.838510044Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:11:01.308816    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:43.838727044Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:11:01.308816    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:43.951016023Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 13:11:01.308816    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:43.951141424Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 13:11:01.308816    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:43.951152624Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:11:01.308816    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:43.951369125Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:11:01.308816    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 cri-dockerd[1278]: time="2024-03-18T13:09:43Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/066206d4c52cb784fe7c2001b5e196c6e3521560c412808e8d9ddf742aa008e4/resolv.conf as [nameserver 172.30.128.1]"
	I0318 13:11:01.308816    2404 command_runner.go:130] > Mar 18 13:09:44 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:44.020194457Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 13:11:01.308816    2404 command_runner.go:130] > Mar 18 13:09:44 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:44.020684858Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 13:11:01.308816    2404 command_runner.go:130] > Mar 18 13:09:44 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:44.023241167Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:11:01.308816    2404 command_runner.go:130] > Mar 18 13:09:44 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:44.023675469Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:11:01.309332    2404 command_runner.go:130] > Mar 18 13:09:44 multinode-894400 cri-dockerd[1278]: time="2024-03-18T13:09:44Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/bc7236a19957e321c1961c944824f2b4624bd7a289ab4ecefe33a08d4af88e2b/resolv.conf as [nameserver 172.30.128.1]"
	I0318 13:11:01.309389    2404 command_runner.go:130] > Mar 18 13:09:44 multinode-894400 cri-dockerd[1278]: time="2024-03-18T13:09:44Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/6fb3325d3c1005ffbbbfe7b136924ed5ff0c71db51f79a50f7179c108c238d47/resolv.conf as [nameserver 172.30.128.1]"
	I0318 13:11:01.309389    2404 command_runner.go:130] > Mar 18 13:09:44 multinode-894400 cri-dockerd[1278]: time="2024-03-18T13:09:44Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/354f3c44a34fcbd9ca7826066f2b4c40b3a6551aa632389815c5d24e85e0472b/resolv.conf as [nameserver 172.30.128.1]"
	I0318 13:11:01.309389    2404 command_runner.go:130] > Mar 18 13:09:44 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:44.396374926Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 13:11:01.310115    2404 command_runner.go:130] > Mar 18 13:09:44 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:44.396436126Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 13:11:01.310432    2404 command_runner.go:130] > Mar 18 13:09:44 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:44.396447326Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:11:01.310624    2404 command_runner.go:130] > Mar 18 13:09:44 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:44.396626927Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:11:01.310686    2404 command_runner.go:130] > Mar 18 13:09:44 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:44.467642467Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 13:11:01.310686    2404 command_runner.go:130] > Mar 18 13:09:44 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:44.467879868Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 13:11:01.310686    2404 command_runner.go:130] > Mar 18 13:09:44 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:44.468180469Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:11:01.310686    2404 command_runner.go:130] > Mar 18 13:09:44 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:44.468559970Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:11:01.310686    2404 command_runner.go:130] > Mar 18 13:09:44 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:44.476573097Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 13:11:01.310686    2404 command_runner.go:130] > Mar 18 13:09:44 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:44.476618697Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 13:11:01.310686    2404 command_runner.go:130] > Mar 18 13:09:44 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:44.476631197Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:11:01.310686    2404 command_runner.go:130] > Mar 18 13:09:44 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:44.476702797Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:11:01.310686    2404 command_runner.go:130] > Mar 18 13:09:44 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:44.482324416Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 13:11:01.310686    2404 command_runner.go:130] > Mar 18 13:09:44 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:44.482501517Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 13:11:01.310686    2404 command_runner.go:130] > Mar 18 13:09:44 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:44.482648417Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:11:01.310686    2404 command_runner.go:130] > Mar 18 13:09:44 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:44.482918618Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:11:01.310686    2404 command_runner.go:130] > Mar 18 13:09:48 multinode-894400 cri-dockerd[1278]: time="2024-03-18T13:09:48Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	I0318 13:11:01.310686    2404 command_runner.go:130] > Mar 18 13:09:49 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:49.545677603Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 13:11:01.310686    2404 command_runner.go:130] > Mar 18 13:09:49 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:49.548609313Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 13:11:01.310686    2404 command_runner.go:130] > Mar 18 13:09:49 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:49.548646013Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:11:01.310686    2404 command_runner.go:130] > Mar 18 13:09:49 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:49.549168715Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:11:01.310686    2404 command_runner.go:130] > Mar 18 13:09:49 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:49.592129660Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 13:11:01.310686    2404 command_runner.go:130] > Mar 18 13:09:49 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:49.592185160Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 13:11:01.310686    2404 command_runner.go:130] > Mar 18 13:09:49 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:49.592195760Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:11:01.310686    2404 command_runner.go:130] > Mar 18 13:09:49 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:49.592280460Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:11:01.311206    2404 command_runner.go:130] > Mar 18 13:09:49 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:49.615117337Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 13:11:01.311206    2404 command_runner.go:130] > Mar 18 13:09:49 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:49.615393238Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 13:11:01.311206    2404 command_runner.go:130] > Mar 18 13:09:49 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:49.615610139Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:11:01.311206    2404 command_runner.go:130] > Mar 18 13:09:49 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:49.621669759Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:11:01.311206    2404 command_runner.go:130] > Mar 18 13:09:49 multinode-894400 cri-dockerd[1278]: time="2024-03-18T13:09:49Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a9f21749669fe37a2d5bee9e7f45bf84eca6ae55870de8ac18bc774db7f81643/resolv.conf as [nameserver 172.30.128.1]"
	I0318 13:11:01.311206    2404 command_runner.go:130] > Mar 18 13:09:49 multinode-894400 cri-dockerd[1278]: time="2024-03-18T13:09:49Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/41035eff3b7db97e56ba25be141bf644acea4ddcb9d92ba3b421e6fa855a09af/resolv.conf as [nameserver 172.30.128.1]"
	I0318 13:11:01.311717    2404 command_runner.go:130] > Mar 18 13:09:49 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:49.995795822Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 13:11:01.311822    2404 command_runner.go:130] > Mar 18 13:09:49 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:49.995895422Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 13:11:01.311899    2404 command_runner.go:130] > Mar 18 13:09:49 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:49.995916522Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:11:01.311899    2404 command_runner.go:130] > Mar 18 13:09:49 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:49.996021523Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:11:01.311899    2404 command_runner.go:130] > Mar 18 13:09:50 multinode-894400 cri-dockerd[1278]: time="2024-03-18T13:09:50Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/86d74dec812cfd4842de7473d45ff320bfb2283d09d2d05eee29e45cbde9f5e9/resolv.conf as [nameserver 172.30.128.1]"
	I0318 13:11:01.311899    2404 command_runner.go:130] > Mar 18 13:09:50 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:50.171141514Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 13:11:01.311899    2404 command_runner.go:130] > Mar 18 13:09:50 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:50.171335814Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 13:11:01.311899    2404 command_runner.go:130] > Mar 18 13:09:50 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:50.171461415Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:11:01.311899    2404 command_runner.go:130] > Mar 18 13:09:50 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:50.171764216Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:11:01.311899    2404 command_runner.go:130] > Mar 18 13:09:50 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:50.391481057Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 13:11:01.311899    2404 command_runner.go:130] > Mar 18 13:09:50 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:50.391826158Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 13:11:01.311899    2404 command_runner.go:130] > Mar 18 13:09:50 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:50.391990059Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:11:01.311899    2404 command_runner.go:130] > Mar 18 13:09:50 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:50.393600364Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:11:01.311899    2404 command_runner.go:130] > Mar 18 13:10:20 multinode-894400 dockerd[1052]: time="2024-03-18T13:10:20.550892922Z" level=info msg="ignoring event" container=46c0cf90d385f0a29de6e795bc03f2d1837ea1819ef2389992a26aaf77e63264 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0318 13:11:01.311899    2404 command_runner.go:130] > Mar 18 13:10:20 multinode-894400 dockerd[1058]: time="2024-03-18T13:10:20.551487227Z" level=info msg="shim disconnected" id=46c0cf90d385f0a29de6e795bc03f2d1837ea1819ef2389992a26aaf77e63264 namespace=moby
	I0318 13:11:01.311899    2404 command_runner.go:130] > Mar 18 13:10:20 multinode-894400 dockerd[1058]: time="2024-03-18T13:10:20.551627628Z" level=warning msg="cleaning up after shim disconnected" id=46c0cf90d385f0a29de6e795bc03f2d1837ea1819ef2389992a26aaf77e63264 namespace=moby
	I0318 13:11:01.311899    2404 command_runner.go:130] > Mar 18 13:10:20 multinode-894400 dockerd[1058]: time="2024-03-18T13:10:20.551639828Z" level=info msg="cleaning up dead shim" namespace=moby
	I0318 13:11:01.311899    2404 command_runner.go:130] > Mar 18 13:10:35 multinode-894400 dockerd[1058]: time="2024-03-18T13:10:35.200900512Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 13:11:01.311899    2404 command_runner.go:130] > Mar 18 13:10:35 multinode-894400 dockerd[1058]: time="2024-03-18T13:10:35.202882722Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 13:11:01.311899    2404 command_runner.go:130] > Mar 18 13:10:35 multinode-894400 dockerd[1058]: time="2024-03-18T13:10:35.203198024Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:11:01.311899    2404 command_runner.go:130] > Mar 18 13:10:35 multinode-894400 dockerd[1058]: time="2024-03-18T13:10:35.203763327Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:11:01.311899    2404 command_runner.go:130] > Mar 18 13:10:53 multinode-894400 dockerd[1058]: time="2024-03-18T13:10:53.250783392Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 13:11:01.311899    2404 command_runner.go:130] > Mar 18 13:10:53 multinode-894400 dockerd[1058]: time="2024-03-18T13:10:53.252016097Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 13:11:01.311899    2404 command_runner.go:130] > Mar 18 13:10:53 multinode-894400 dockerd[1058]: time="2024-03-18T13:10:53.252234698Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:11:01.312419    2404 command_runner.go:130] > Mar 18 13:10:53 multinode-894400 dockerd[1058]: time="2024-03-18T13:10:53.252566299Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:11:01.312419    2404 command_runner.go:130] > Mar 18 13:10:53 multinode-894400 dockerd[1058]: time="2024-03-18T13:10:53.259013124Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 13:11:01.312419    2404 command_runner.go:130] > Mar 18 13:10:53 multinode-894400 dockerd[1058]: time="2024-03-18T13:10:53.259187125Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 13:11:01.312591    2404 command_runner.go:130] > Mar 18 13:10:53 multinode-894400 dockerd[1058]: time="2024-03-18T13:10:53.259204725Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:11:01.312800    2404 command_runner.go:130] > Mar 18 13:10:53 multinode-894400 dockerd[1058]: time="2024-03-18T13:10:53.259319625Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:11:01.312940    2404 command_runner.go:130] > Mar 18 13:10:53 multinode-894400 cri-dockerd[1278]: time="2024-03-18T13:10:53Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/97583cc14f115cf8a4e90889b5f2beda90a81f97fd592e5e5acff8d35e305a59/resolv.conf as [nameserver 172.30.128.1]"
	I0318 13:11:01.312940    2404 command_runner.go:130] > Mar 18 13:10:53 multinode-894400 cri-dockerd[1278]: time="2024-03-18T13:10:53Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/e20878b8092c291820adeb66f1b491dcef85c0699c57800cced7d3530d2a07fb/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	I0318 13:11:01.312940    2404 command_runner.go:130] > Mar 18 13:10:53 multinode-894400 dockerd[1058]: time="2024-03-18T13:10:53.818847676Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 13:11:01.312940    2404 command_runner.go:130] > Mar 18 13:10:53 multinode-894400 dockerd[1058]: time="2024-03-18T13:10:53.818997976Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 13:11:01.312940    2404 command_runner.go:130] > Mar 18 13:10:53 multinode-894400 dockerd[1058]: time="2024-03-18T13:10:53.819021476Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:11:01.312940    2404 command_runner.go:130] > Mar 18 13:10:53 multinode-894400 dockerd[1058]: time="2024-03-18T13:10:53.819463578Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:11:01.312940    2404 command_runner.go:130] > Mar 18 13:10:53 multinode-894400 dockerd[1058]: time="2024-03-18T13:10:53.825706506Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 13:11:01.312940    2404 command_runner.go:130] > Mar 18 13:10:53 multinode-894400 dockerd[1058]: time="2024-03-18T13:10:53.825766006Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 13:11:01.312940    2404 command_runner.go:130] > Mar 18 13:10:53 multinode-894400 dockerd[1058]: time="2024-03-18T13:10:53.825780706Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:11:01.312940    2404 command_runner.go:130] > Mar 18 13:10:53 multinode-894400 dockerd[1058]: time="2024-03-18T13:10:53.825864707Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:11:01.312940    2404 command_runner.go:130] > Mar 18 13:10:56 multinode-894400 dockerd[1052]: 2024/03/18 13:10:56 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0318 13:11:01.312940    2404 command_runner.go:130] > Mar 18 13:10:56 multinode-894400 dockerd[1052]: 2024/03/18 13:10:56 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0318 13:11:01.312940    2404 command_runner.go:130] > Mar 18 13:10:57 multinode-894400 dockerd[1052]: 2024/03/18 13:10:57 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0318 13:11:01.312940    2404 command_runner.go:130] > Mar 18 13:10:57 multinode-894400 dockerd[1052]: 2024/03/18 13:10:57 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0318 13:11:01.312940    2404 command_runner.go:130] > Mar 18 13:10:57 multinode-894400 dockerd[1052]: 2024/03/18 13:10:57 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0318 13:11:01.312940    2404 command_runner.go:130] > Mar 18 13:10:57 multinode-894400 dockerd[1052]: 2024/03/18 13:10:57 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0318 13:11:01.312940    2404 command_runner.go:130] > Mar 18 13:10:57 multinode-894400 dockerd[1052]: 2024/03/18 13:10:57 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0318 13:11:01.312940    2404 command_runner.go:130] > Mar 18 13:10:57 multinode-894400 dockerd[1052]: 2024/03/18 13:10:57 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0318 13:11:01.313465    2404 command_runner.go:130] > Mar 18 13:10:57 multinode-894400 dockerd[1052]: 2024/03/18 13:10:57 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0318 13:11:01.313465    2404 command_runner.go:130] > Mar 18 13:10:57 multinode-894400 dockerd[1052]: 2024/03/18 13:10:57 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0318 13:11:01.313465    2404 command_runner.go:130] > Mar 18 13:10:57 multinode-894400 dockerd[1052]: 2024/03/18 13:10:57 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0318 13:11:01.313605    2404 command_runner.go:130] > Mar 18 13:10:57 multinode-894400 dockerd[1052]: 2024/03/18 13:10:57 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0318 13:11:01.313605    2404 command_runner.go:130] > Mar 18 13:11:00 multinode-894400 dockerd[1052]: 2024/03/18 13:11:00 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0318 13:11:01.313605    2404 command_runner.go:130] > Mar 18 13:11:00 multinode-894400 dockerd[1052]: 2024/03/18 13:11:00 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0318 13:11:01.313605    2404 command_runner.go:130] > Mar 18 13:11:00 multinode-894400 dockerd[1052]: 2024/03/18 13:11:00 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0318 13:11:01.313605    2404 command_runner.go:130] > Mar 18 13:11:00 multinode-894400 dockerd[1052]: 2024/03/18 13:11:00 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0318 13:11:01.313605    2404 command_runner.go:130] > Mar 18 13:11:00 multinode-894400 dockerd[1052]: 2024/03/18 13:11:00 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0318 13:11:01.313605    2404 command_runner.go:130] > Mar 18 13:11:01 multinode-894400 dockerd[1052]: 2024/03/18 13:11:01 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0318 13:11:01.313605    2404 command_runner.go:130] > Mar 18 13:11:01 multinode-894400 dockerd[1052]: 2024/03/18 13:11:01 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0318 13:11:01.313605    2404 command_runner.go:130] > Mar 18 13:11:01 multinode-894400 dockerd[1052]: 2024/03/18 13:11:01 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0318 13:11:01.313605    2404 command_runner.go:130] > Mar 18 13:11:01 multinode-894400 dockerd[1052]: 2024/03/18 13:11:01 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0318 13:11:01.313605    2404 command_runner.go:130] > Mar 18 13:11:01 multinode-894400 dockerd[1052]: 2024/03/18 13:11:01 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0318 13:11:01.313605    2404 command_runner.go:130] > Mar 18 13:11:01 multinode-894400 dockerd[1052]: 2024/03/18 13:11:01 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0318 13:11:01.344435    2404 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:11:01.344435    2404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 13:11:01.552318    2404 command_runner.go:130] > Name:               multinode-894400
	I0318 13:11:01.552442    2404 command_runner.go:130] > Roles:              control-plane
	I0318 13:11:01.552442    2404 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0318 13:11:01.552442    2404 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0318 13:11:01.552524    2404 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0318 13:11:01.552577    2404 command_runner.go:130] >                     kubernetes.io/hostname=multinode-894400
	I0318 13:11:01.552577    2404 command_runner.go:130] >                     kubernetes.io/os=linux
	I0318 13:11:01.552617    2404 command_runner.go:130] >                     minikube.k8s.io/commit=16bdcbec856cf730004e5bed78d1b7625f13388a
	I0318 13:11:01.552671    2404 command_runner.go:130] >                     minikube.k8s.io/name=multinode-894400
	I0318 13:11:01.552671    2404 command_runner.go:130] >                     minikube.k8s.io/primary=true
	I0318 13:11:01.552727    2404 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_03_18T12_47_29_0700
	I0318 13:11:01.552727    2404 command_runner.go:130] >                     minikube.k8s.io/version=v1.32.0
	I0318 13:11:01.552727    2404 command_runner.go:130] >                     node-role.kubernetes.io/control-plane=
	I0318 13:11:01.552788    2404 command_runner.go:130] >                     node.kubernetes.io/exclude-from-external-load-balancers=
	I0318 13:11:01.552788    2404 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0318 13:11:01.552788    2404 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0318 13:11:01.552855    2404 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0318 13:11:01.552855    2404 command_runner.go:130] > CreationTimestamp:  Mon, 18 Mar 2024 12:47:24 +0000
	I0318 13:11:01.552918    2404 command_runner.go:130] > Taints:             <none>
	I0318 13:11:01.552918    2404 command_runner.go:130] > Unschedulable:      false
	I0318 13:11:01.552918    2404 command_runner.go:130] > Lease:
	I0318 13:11:01.552981    2404 command_runner.go:130] >   HolderIdentity:  multinode-894400
	I0318 13:11:01.552981    2404 command_runner.go:130] >   AcquireTime:     <unset>
	I0318 13:11:01.552981    2404 command_runner.go:130] >   RenewTime:       Mon, 18 Mar 2024 13:11:00 +0000
	I0318 13:11:01.552981    2404 command_runner.go:130] > Conditions:
	I0318 13:11:01.553043    2404 command_runner.go:130] >   Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	I0318 13:11:01.553094    2404 command_runner.go:130] >   ----             ------  -----------------                 ------------------                ------                       -------
	I0318 13:11:01.553231    2404 command_runner.go:130] >   MemoryPressure   False   Mon, 18 Mar 2024 13:10:23 +0000   Mon, 18 Mar 2024 12:47:23 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	I0318 13:11:01.553231    2404 command_runner.go:130] >   DiskPressure     False   Mon, 18 Mar 2024 13:10:23 +0000   Mon, 18 Mar 2024 12:47:23 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	I0318 13:11:01.553287    2404 command_runner.go:130] >   PIDPressure      False   Mon, 18 Mar 2024 13:10:23 +0000   Mon, 18 Mar 2024 12:47:23 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	I0318 13:11:01.553338    2404 command_runner.go:130] >   Ready            True    Mon, 18 Mar 2024 13:10:23 +0000   Mon, 18 Mar 2024 13:10:23 +0000   KubeletReady                 kubelet is posting ready status
	I0318 13:11:01.553338    2404 command_runner.go:130] > Addresses:
	I0318 13:11:01.553415    2404 command_runner.go:130] >   InternalIP:  172.30.130.156
	I0318 13:11:01.553415    2404 command_runner.go:130] >   Hostname:    multinode-894400
	I0318 13:11:01.553466    2404 command_runner.go:130] > Capacity:
	I0318 13:11:01.553466    2404 command_runner.go:130] >   cpu:                2
	I0318 13:11:01.553466    2404 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0318 13:11:01.553466    2404 command_runner.go:130] >   hugepages-2Mi:      0
	I0318 13:11:01.553521    2404 command_runner.go:130] >   memory:             2164268Ki
	I0318 13:11:01.553571    2404 command_runner.go:130] >   pods:               110
	I0318 13:11:01.553571    2404 command_runner.go:130] > Allocatable:
	I0318 13:11:01.553571    2404 command_runner.go:130] >   cpu:                2
	I0318 13:11:01.553627    2404 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0318 13:11:01.553627    2404 command_runner.go:130] >   hugepages-2Mi:      0
	I0318 13:11:01.553627    2404 command_runner.go:130] >   memory:             2164268Ki
	I0318 13:11:01.553627    2404 command_runner.go:130] >   pods:               110
	I0318 13:11:01.553677    2404 command_runner.go:130] > System Info:
	I0318 13:11:01.553677    2404 command_runner.go:130] >   Machine ID:                 80e7b822d2e94d26a09acd4a1bac452b
	I0318 13:11:01.553732    2404 command_runner.go:130] >   System UUID:                5c78c013-e4e8-1041-99c8-95cd760ef34f
	I0318 13:11:01.553732    2404 command_runner.go:130] >   Boot ID:                    a334ae39-1c10-417c-93ad-d28546d7793f
	I0318 13:11:01.553782    2404 command_runner.go:130] >   Kernel Version:             5.10.207
	I0318 13:11:01.553782    2404 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0318 13:11:01.553782    2404 command_runner.go:130] >   Operating System:           linux
	I0318 13:11:01.553837    2404 command_runner.go:130] >   Architecture:               amd64
	I0318 13:11:01.553837    2404 command_runner.go:130] >   Container Runtime Version:  docker://25.0.4
	I0318 13:11:01.553887    2404 command_runner.go:130] >   Kubelet Version:            v1.28.4
	I0318 13:11:01.553887    2404 command_runner.go:130] >   Kube-Proxy Version:         v1.28.4
	I0318 13:11:01.553887    2404 command_runner.go:130] > PodCIDR:                      10.244.0.0/24
	I0318 13:11:01.553981    2404 command_runner.go:130] > PodCIDRs:                     10.244.0.0/24
	I0318 13:11:01.553981    2404 command_runner.go:130] > Non-terminated Pods:          (9 in total)
	I0318 13:11:01.554033    2404 command_runner.go:130] >   Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0318 13:11:01.554091    2404 command_runner.go:130] >   ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	I0318 13:11:01.554091    2404 command_runner.go:130] >   default                     busybox-5b5d89c9d6-c2997                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	I0318 13:11:01.554144    2404 command_runner.go:130] >   kube-system                 coredns-5dd5756b68-456tm                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     23m
	I0318 13:11:01.554144    2404 command_runner.go:130] >   kube-system                 etcd-multinode-894400                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         72s
	I0318 13:11:01.554203    2404 command_runner.go:130] >   kube-system                 kindnet-hhsxh                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      23m
	I0318 13:11:01.554255    2404 command_runner.go:130] >   kube-system                 kube-apiserver-multinode-894400             250m (12%)    0 (0%)      0 (0%)           0 (0%)         72s
	I0318 13:11:01.554315    2404 command_runner.go:130] >   kube-system                 kube-controller-manager-multinode-894400    200m (10%)    0 (0%)      0 (0%)           0 (0%)         23m
	I0318 13:11:01.554390    2404 command_runner.go:130] >   kube-system                 kube-proxy-mc5tv                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	I0318 13:11:01.554390    2404 command_runner.go:130] >   kube-system                 kube-scheduler-multinode-894400             100m (5%)     0 (0%)      0 (0%)           0 (0%)         23m
	I0318 13:11:01.554447    2404 command_runner.go:130] >   kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	I0318 13:11:01.554447    2404 command_runner.go:130] > Allocated resources:
	I0318 13:11:01.554447    2404 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0318 13:11:01.554524    2404 command_runner.go:130] >   Resource           Requests     Limits
	I0318 13:11:01.554524    2404 command_runner.go:130] >   --------           --------     ------
	I0318 13:11:01.554524    2404 command_runner.go:130] >   cpu                850m (42%!)(MISSING)   100m (5%!)(MISSING)
	I0318 13:11:01.554584    2404 command_runner.go:130] >   memory             220Mi (10%!)(MISSING)  220Mi (10%!)(MISSING)
	I0318 13:11:01.554584    2404 command_runner.go:130] >   ephemeral-storage  0 (0%!)(MISSING)       0 (0%!)(MISSING)
	I0318 13:11:01.554584    2404 command_runner.go:130] >   hugepages-2Mi      0 (0%!)(MISSING)       0 (0%!)(MISSING)
	I0318 13:11:01.554584    2404 command_runner.go:130] > Events:
	I0318 13:11:01.554679    2404 command_runner.go:130] >   Type    Reason                   Age                From             Message
	I0318 13:11:01.554679    2404 command_runner.go:130] >   ----    ------                   ----               ----             -------
	I0318 13:11:01.554679    2404 command_runner.go:130] >   Normal  Starting                 23m                kube-proxy       
	I0318 13:11:01.554729    2404 command_runner.go:130] >   Normal  Starting                 70s                kube-proxy       
	I0318 13:11:01.554729    2404 command_runner.go:130] >   Normal  NodeHasSufficientMemory  23m (x8 over 23m)  kubelet          Node multinode-894400 status is now: NodeHasSufficientMemory
	I0318 13:11:01.554779    2404 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    23m (x8 over 23m)  kubelet          Node multinode-894400 status is now: NodeHasNoDiskPressure
	I0318 13:11:01.554874    2404 command_runner.go:130] >   Normal  NodeHasSufficientPID     23m (x7 over 23m)  kubelet          Node multinode-894400 status is now: NodeHasSufficientPID
	I0318 13:11:01.554932    2404 command_runner.go:130] >   Normal  NodeAllocatableEnforced  23m                kubelet          Updated Node Allocatable limit across pods
	I0318 13:11:01.554932    2404 command_runner.go:130] >   Normal  NodeHasSufficientMemory  23m                kubelet          Node multinode-894400 status is now: NodeHasSufficientMemory
	I0318 13:11:01.554993    2404 command_runner.go:130] >   Normal  NodeAllocatableEnforced  23m                kubelet          Updated Node Allocatable limit across pods
	I0318 13:11:01.555052    2404 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    23m                kubelet          Node multinode-894400 status is now: NodeHasNoDiskPressure
	I0318 13:11:01.555104    2404 command_runner.go:130] >   Normal  NodeHasSufficientPID     23m                kubelet          Node multinode-894400 status is now: NodeHasSufficientPID
	I0318 13:11:01.555104    2404 command_runner.go:130] >   Normal  Starting                 23m                kubelet          Starting kubelet.
	I0318 13:11:01.555159    2404 command_runner.go:130] >   Normal  RegisteredNode           23m                node-controller  Node multinode-894400 event: Registered Node multinode-894400 in Controller
	I0318 13:11:01.555212    2404 command_runner.go:130] >   Normal  NodeReady                23m                kubelet          Node multinode-894400 status is now: NodeReady
	I0318 13:11:01.555212    2404 command_runner.go:130] >   Normal  Starting                 79s                kubelet          Starting kubelet.
	I0318 13:11:01.555287    2404 command_runner.go:130] >   Normal  NodeHasSufficientMemory  78s (x8 over 79s)  kubelet          Node multinode-894400 status is now: NodeHasSufficientMemory
	I0318 13:11:01.555322    2404 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    78s (x8 over 79s)  kubelet          Node multinode-894400 status is now: NodeHasNoDiskPressure
	I0318 13:11:01.555363    2404 command_runner.go:130] >   Normal  NodeHasSufficientPID     78s (x7 over 79s)  kubelet          Node multinode-894400 status is now: NodeHasSufficientPID
	I0318 13:11:01.555415    2404 command_runner.go:130] >   Normal  NodeAllocatableEnforced  78s                kubelet          Updated Node Allocatable limit across pods
	I0318 13:11:01.555471    2404 command_runner.go:130] >   Normal  RegisteredNode           60s                node-controller  Node multinode-894400 event: Registered Node multinode-894400 in Controller
	I0318 13:11:01.555471    2404 command_runner.go:130] > Name:               multinode-894400-m02
	I0318 13:11:01.555523    2404 command_runner.go:130] > Roles:              <none>
	I0318 13:11:01.555523    2404 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0318 13:11:01.555579    2404 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0318 13:11:01.555629    2404 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0318 13:11:01.555629    2404 command_runner.go:130] >                     kubernetes.io/hostname=multinode-894400-m02
	I0318 13:11:01.555691    2404 command_runner.go:130] >                     kubernetes.io/os=linux
	I0318 13:11:01.555691    2404 command_runner.go:130] >                     minikube.k8s.io/commit=16bdcbec856cf730004e5bed78d1b7625f13388a
	I0318 13:11:01.555691    2404 command_runner.go:130] >                     minikube.k8s.io/name=multinode-894400
	I0318 13:11:01.555753    2404 command_runner.go:130] >                     minikube.k8s.io/primary=false
	I0318 13:11:01.555753    2404 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_03_18T12_50_35_0700
	I0318 13:11:01.555808    2404 command_runner.go:130] >                     minikube.k8s.io/version=v1.32.0
	I0318 13:11:01.555808    2404 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0318 13:11:01.555861    2404 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0318 13:11:01.555917    2404 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0318 13:11:01.555980    2404 command_runner.go:130] > CreationTimestamp:  Mon, 18 Mar 2024 12:50:34 +0000
	I0318 13:11:01.555980    2404 command_runner.go:130] > Taints:             node.kubernetes.io/unreachable:NoExecute
	I0318 13:11:01.556034    2404 command_runner.go:130] >                     node.kubernetes.io/unreachable:NoSchedule
	I0318 13:11:01.556034    2404 command_runner.go:130] > Unschedulable:      false
	I0318 13:11:01.556085    2404 command_runner.go:130] > Lease:
	I0318 13:11:01.556085    2404 command_runner.go:130] >   HolderIdentity:  multinode-894400-m02
	I0318 13:11:01.556085    2404 command_runner.go:130] >   AcquireTime:     <unset>
	I0318 13:11:01.556142    2404 command_runner.go:130] >   RenewTime:       Mon, 18 Mar 2024 13:06:44 +0000
	I0318 13:11:01.556142    2404 command_runner.go:130] > Conditions:
	I0318 13:11:01.556195    2404 command_runner.go:130] >   Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	I0318 13:11:01.556250    2404 command_runner.go:130] >   ----             ------    -----------------                 ------------------                ------              -------
	I0318 13:11:01.556250    2404 command_runner.go:130] >   MemoryPressure   Unknown   Mon, 18 Mar 2024 13:01:49 +0000   Mon, 18 Mar 2024 13:10:41 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0318 13:11:01.556301    2404 command_runner.go:130] >   DiskPressure     Unknown   Mon, 18 Mar 2024 13:01:49 +0000   Mon, 18 Mar 2024 13:10:41 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0318 13:11:01.556357    2404 command_runner.go:130] >   PIDPressure      Unknown   Mon, 18 Mar 2024 13:01:49 +0000   Mon, 18 Mar 2024 13:10:41 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0318 13:11:01.556357    2404 command_runner.go:130] >   Ready            Unknown   Mon, 18 Mar 2024 13:01:49 +0000   Mon, 18 Mar 2024 13:10:41 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0318 13:11:01.556408    2404 command_runner.go:130] > Addresses:
	I0318 13:11:01.556408    2404 command_runner.go:130] >   InternalIP:  172.30.140.66
	I0318 13:11:01.556464    2404 command_runner.go:130] >   Hostname:    multinode-894400-m02
	I0318 13:11:01.556464    2404 command_runner.go:130] > Capacity:
	I0318 13:11:01.556464    2404 command_runner.go:130] >   cpu:                2
	I0318 13:11:01.556515    2404 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0318 13:11:01.556515    2404 command_runner.go:130] >   hugepages-2Mi:      0
	I0318 13:11:01.556574    2404 command_runner.go:130] >   memory:             2164268Ki
	I0318 13:11:01.556574    2404 command_runner.go:130] >   pods:               110
	I0318 13:11:01.556574    2404 command_runner.go:130] > Allocatable:
	I0318 13:11:01.556627    2404 command_runner.go:130] >   cpu:                2
	I0318 13:11:01.556627    2404 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0318 13:11:01.556627    2404 command_runner.go:130] >   hugepages-2Mi:      0
	I0318 13:11:01.556686    2404 command_runner.go:130] >   memory:             2164268Ki
	I0318 13:11:01.556686    2404 command_runner.go:130] >   pods:               110
	I0318 13:11:01.556686    2404 command_runner.go:130] > System Info:
	I0318 13:11:01.556738    2404 command_runner.go:130] >   Machine ID:                 209753fe156d43e08ee40e815598ed17
	I0318 13:11:01.556738    2404 command_runner.go:130] >   System UUID:                fa19d46a-a3a2-9249-8c21-1edbfcedff01
	I0318 13:11:01.556797    2404 command_runner.go:130] >   Boot ID:                    0e15b7cf-29d6-40f7-ad78-fb04b10bea99
	I0318 13:11:01.556797    2404 command_runner.go:130] >   Kernel Version:             5.10.207
	I0318 13:11:01.556853    2404 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0318 13:11:01.556853    2404 command_runner.go:130] >   Operating System:           linux
	I0318 13:11:01.556853    2404 command_runner.go:130] >   Architecture:               amd64
	I0318 13:11:01.556911    2404 command_runner.go:130] >   Container Runtime Version:  docker://25.0.4
	I0318 13:11:01.556911    2404 command_runner.go:130] >   Kubelet Version:            v1.28.4
	I0318 13:11:01.556961    2404 command_runner.go:130] >   Kube-Proxy Version:         v1.28.4
	I0318 13:11:01.556961    2404 command_runner.go:130] > PodCIDR:                      10.244.1.0/24
	I0318 13:11:01.557018    2404 command_runner.go:130] > PodCIDRs:                     10.244.1.0/24
	I0318 13:11:01.557018    2404 command_runner.go:130] > Non-terminated Pods:          (3 in total)
	I0318 13:11:01.557070    2404 command_runner.go:130] >   Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0318 13:11:01.557070    2404 command_runner.go:130] >   ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	I0318 13:11:01.557128    2404 command_runner.go:130] >   default                     busybox-5b5d89c9d6-8btgf    0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	I0318 13:11:01.557180    2404 command_runner.go:130] >   kube-system                 kindnet-k5lpg               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      20m
	I0318 13:11:01.557180    2404 command_runner.go:130] >   kube-system                 kube-proxy-8bdmn            0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	I0318 13:11:01.557237    2404 command_runner.go:130] > Allocated resources:
	I0318 13:11:01.557237    2404 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0318 13:11:01.557288    2404 command_runner.go:130] >   Resource           Requests   Limits
	I0318 13:11:01.557344    2404 command_runner.go:130] >   --------           --------   ------
	I0318 13:11:01.557344    2404 command_runner.go:130] >   cpu                100m (5%!)(MISSING)  100m (5%!)(MISSING)
	I0318 13:11:01.557397    2404 command_runner.go:130] >   memory             50Mi (2%!)(MISSING)  50Mi (2%!)(MISSING)
	I0318 13:11:01.557397    2404 command_runner.go:130] >   ephemeral-storage  0 (0%!)(MISSING)     0 (0%!)(MISSING)
	I0318 13:11:01.557397    2404 command_runner.go:130] >   hugepages-2Mi      0 (0%!)(MISSING)     0 (0%!)(MISSING)
	I0318 13:11:01.557397    2404 command_runner.go:130] > Events:
	I0318 13:11:01.557464    2404 command_runner.go:130] >   Type    Reason                   Age                From             Message
	I0318 13:11:01.557464    2404 command_runner.go:130] >   ----    ------                   ----               ----             -------
	I0318 13:11:01.557515    2404 command_runner.go:130] >   Normal  Starting                 20m                kube-proxy       
	I0318 13:11:01.557573    2404 command_runner.go:130] >   Normal  NodeHasSufficientMemory  20m (x5 over 20m)  kubelet          Node multinode-894400-m02 status is now: NodeHasSufficientMemory
	I0318 13:11:01.557573    2404 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    20m (x5 over 20m)  kubelet          Node multinode-894400-m02 status is now: NodeHasNoDiskPressure
	I0318 13:11:01.557625    2404 command_runner.go:130] >   Normal  NodeHasSufficientPID     20m (x5 over 20m)  kubelet          Node multinode-894400-m02 status is now: NodeHasSufficientPID
	I0318 13:11:01.557682    2404 command_runner.go:130] >   Normal  RegisteredNode           20m                node-controller  Node multinode-894400-m02 event: Registered Node multinode-894400-m02 in Controller
	I0318 13:11:01.557732    2404 command_runner.go:130] >   Normal  NodeReady                20m                kubelet          Node multinode-894400-m02 status is now: NodeReady
	I0318 13:11:01.557732    2404 command_runner.go:130] >   Normal  RegisteredNode           60s                node-controller  Node multinode-894400-m02 event: Registered Node multinode-894400-m02 in Controller
	I0318 13:11:01.557788    2404 command_runner.go:130] >   Normal  NodeNotReady             20s                node-controller  Node multinode-894400-m02 status is now: NodeNotReady
	I0318 13:11:01.557839    2404 command_runner.go:130] > Name:               multinode-894400-m03
	I0318 13:11:01.557895    2404 command_runner.go:130] > Roles:              <none>
	I0318 13:11:01.557895    2404 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0318 13:11:01.557963    2404 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0318 13:11:01.557963    2404 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0318 13:11:01.558023    2404 command_runner.go:130] >                     kubernetes.io/hostname=multinode-894400-m03
	I0318 13:11:01.558023    2404 command_runner.go:130] >                     kubernetes.io/os=linux
	I0318 13:11:01.558077    2404 command_runner.go:130] >                     minikube.k8s.io/commit=16bdcbec856cf730004e5bed78d1b7625f13388a
	I0318 13:11:01.558077    2404 command_runner.go:130] >                     minikube.k8s.io/name=multinode-894400
	I0318 13:11:01.558129    2404 command_runner.go:130] >                     minikube.k8s.io/primary=false
	I0318 13:11:01.558129    2404 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_03_18T13_05_26_0700
	I0318 13:11:01.558182    2404 command_runner.go:130] >                     minikube.k8s.io/version=v1.32.0
	I0318 13:11:01.558294    2404 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0318 13:11:01.558294    2404 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0318 13:11:01.558376    2404 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0318 13:11:01.558376    2404 command_runner.go:130] > CreationTimestamp:  Mon, 18 Mar 2024 13:05:25 +0000
	I0318 13:11:01.558434    2404 command_runner.go:130] > Taints:             node.kubernetes.io/unreachable:NoExecute
	I0318 13:11:01.558434    2404 command_runner.go:130] >                     node.kubernetes.io/unreachable:NoSchedule
	I0318 13:11:01.558434    2404 command_runner.go:130] > Unschedulable:      false
	I0318 13:11:01.558434    2404 command_runner.go:130] > Lease:
	I0318 13:11:01.558434    2404 command_runner.go:130] >   HolderIdentity:  multinode-894400-m03
	I0318 13:11:01.558434    2404 command_runner.go:130] >   AcquireTime:     <unset>
	I0318 13:11:01.558434    2404 command_runner.go:130] >   RenewTime:       Mon, 18 Mar 2024 13:06:27 +0000
	I0318 13:11:01.558434    2404 command_runner.go:130] > Conditions:
	I0318 13:11:01.558434    2404 command_runner.go:130] >   Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	I0318 13:11:01.558434    2404 command_runner.go:130] >   ----             ------    -----------------                 ------------------                ------              -------
	I0318 13:11:01.558434    2404 command_runner.go:130] >   MemoryPressure   Unknown   Mon, 18 Mar 2024 13:05:34 +0000   Mon, 18 Mar 2024 13:07:11 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0318 13:11:01.558434    2404 command_runner.go:130] >   DiskPressure     Unknown   Mon, 18 Mar 2024 13:05:34 +0000   Mon, 18 Mar 2024 13:07:11 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0318 13:11:01.558434    2404 command_runner.go:130] >   PIDPressure      Unknown   Mon, 18 Mar 2024 13:05:34 +0000   Mon, 18 Mar 2024 13:07:11 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0318 13:11:01.558434    2404 command_runner.go:130] >   Ready            Unknown   Mon, 18 Mar 2024 13:05:34 +0000   Mon, 18 Mar 2024 13:07:11 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0318 13:11:01.558434    2404 command_runner.go:130] > Addresses:
	I0318 13:11:01.558434    2404 command_runner.go:130] >   InternalIP:  172.30.137.140
	I0318 13:11:01.558434    2404 command_runner.go:130] >   Hostname:    multinode-894400-m03
	I0318 13:11:01.558434    2404 command_runner.go:130] > Capacity:
	I0318 13:11:01.558434    2404 command_runner.go:130] >   cpu:                2
	I0318 13:11:01.558434    2404 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0318 13:11:01.558434    2404 command_runner.go:130] >   hugepages-2Mi:      0
	I0318 13:11:01.558434    2404 command_runner.go:130] >   memory:             2164268Ki
	I0318 13:11:01.558434    2404 command_runner.go:130] >   pods:               110
	I0318 13:11:01.558434    2404 command_runner.go:130] > Allocatable:
	I0318 13:11:01.558434    2404 command_runner.go:130] >   cpu:                2
	I0318 13:11:01.558434    2404 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0318 13:11:01.558434    2404 command_runner.go:130] >   hugepages-2Mi:      0
	I0318 13:11:01.558434    2404 command_runner.go:130] >   memory:             2164268Ki
	I0318 13:11:01.558434    2404 command_runner.go:130] >   pods:               110
	I0318 13:11:01.558434    2404 command_runner.go:130] > System Info:
	I0318 13:11:01.558434    2404 command_runner.go:130] >   Machine ID:                 f96e7421441b46c0a5836e2d53b26708
	I0318 13:11:01.558434    2404 command_runner.go:130] >   System UUID:                7dae14c5-92ae-d842-8ce6-c446c0352eb2
	I0318 13:11:01.558434    2404 command_runner.go:130] >   Boot ID:                    7ef4b157-1893-48d2-9b87-d5f210c11477
	I0318 13:11:01.558434    2404 command_runner.go:130] >   Kernel Version:             5.10.207
	I0318 13:11:01.558434    2404 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0318 13:11:01.558434    2404 command_runner.go:130] >   Operating System:           linux
	I0318 13:11:01.558434    2404 command_runner.go:130] >   Architecture:               amd64
	I0318 13:11:01.558434    2404 command_runner.go:130] >   Container Runtime Version:  docker://25.0.4
	I0318 13:11:01.558434    2404 command_runner.go:130] >   Kubelet Version:            v1.28.4
	I0318 13:11:01.558434    2404 command_runner.go:130] >   Kube-Proxy Version:         v1.28.4
	I0318 13:11:01.558434    2404 command_runner.go:130] > PodCIDR:                      10.244.3.0/24
	I0318 13:11:01.558434    2404 command_runner.go:130] > PodCIDRs:                     10.244.3.0/24
	I0318 13:11:01.558434    2404 command_runner.go:130] > Non-terminated Pods:          (2 in total)
	I0318 13:11:01.558976    2404 command_runner.go:130] >   Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0318 13:11:01.559069    2404 command_runner.go:130] >   ---------                   ----                ------------  ----------  ---------------  -------------  ---
	I0318 13:11:01.559069    2404 command_runner.go:130] >   kube-system                 kindnet-zv9tv       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      15m
	I0318 13:11:01.559069    2404 command_runner.go:130] >   kube-system                 kube-proxy-745w9    0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	I0318 13:11:01.559157    2404 command_runner.go:130] > Allocated resources:
	I0318 13:11:01.559204    2404 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0318 13:11:01.559204    2404 command_runner.go:130] >   Resource           Requests   Limits
	I0318 13:11:01.559204    2404 command_runner.go:130] >   --------           --------   ------
	I0318 13:11:01.559204    2404 command_runner.go:130] >   cpu                100m (5%)  100m (5%)
	I0318 13:11:01.559204    2404 command_runner.go:130] >   memory             50Mi (2%)  50Mi (2%)
	I0318 13:11:01.559204    2404 command_runner.go:130] >   ephemeral-storage  0 (0%)     0 (0%)
	I0318 13:11:01.559204    2404 command_runner.go:130] >   hugepages-2Mi      0 (0%)     0 (0%)
	I0318 13:11:01.559204    2404 command_runner.go:130] > Events:
	I0318 13:11:01.559204    2404 command_runner.go:130] >   Type    Reason                   Age                    From             Message
	I0318 13:11:01.559204    2404 command_runner.go:130] >   ----    ------                   ----                   ----             -------
	I0318 13:11:01.559204    2404 command_runner.go:130] >   Normal  Starting                 15m                    kube-proxy       
	I0318 13:11:01.559204    2404 command_runner.go:130] >   Normal  Starting                 5m33s                  kube-proxy       
	I0318 13:11:01.559204    2404 command_runner.go:130] >   Normal  NodeHasSufficientMemory  15m (x5 over 15m)      kubelet          Node multinode-894400-m03 status is now: NodeHasSufficientMemory
	I0318 13:11:01.559204    2404 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    15m (x5 over 15m)      kubelet          Node multinode-894400-m03 status is now: NodeHasNoDiskPressure
	I0318 13:11:01.559204    2404 command_runner.go:130] >   Normal  NodeHasSufficientPID     15m (x5 over 15m)      kubelet          Node multinode-894400-m03 status is now: NodeHasSufficientPID
	I0318 13:11:01.559204    2404 command_runner.go:130] >   Normal  NodeReady                15m                    kubelet          Node multinode-894400-m03 status is now: NodeReady
	I0318 13:11:01.559204    2404 command_runner.go:130] >   Normal  Starting                 5m36s                  kubelet          Starting kubelet.
	I0318 13:11:01.559204    2404 command_runner.go:130] >   Normal  NodeHasSufficientMemory  5m36s (x2 over 5m36s)  kubelet          Node multinode-894400-m03 status is now: NodeHasSufficientMemory
	I0318 13:11:01.559204    2404 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    5m36s (x2 over 5m36s)  kubelet          Node multinode-894400-m03 status is now: NodeHasNoDiskPressure
	I0318 13:11:01.559204    2404 command_runner.go:130] >   Normal  NodeHasSufficientPID     5m36s (x2 over 5m36s)  kubelet          Node multinode-894400-m03 status is now: NodeHasSufficientPID
	I0318 13:11:01.559204    2404 command_runner.go:130] >   Normal  NodeAllocatableEnforced  5m36s                  kubelet          Updated Node Allocatable limit across pods
	I0318 13:11:01.559204    2404 command_runner.go:130] >   Normal  RegisteredNode           5m35s                  node-controller  Node multinode-894400-m03 event: Registered Node multinode-894400-m03 in Controller
	I0318 13:11:01.559204    2404 command_runner.go:130] >   Normal  NodeReady                5m27s                  kubelet          Node multinode-894400-m03 status is now: NodeReady
	I0318 13:11:01.559204    2404 command_runner.go:130] >   Normal  NodeNotReady             3m50s                  node-controller  Node multinode-894400-m03 status is now: NodeNotReady
	I0318 13:11:01.559204    2404 command_runner.go:130] >   Normal  RegisteredNode           60s                    node-controller  Node multinode-894400-m03 event: Registered Node multinode-894400-m03 in Controller
	I0318 13:11:01.569484    2404 logs.go:123] Gathering logs for coredns [693a64f7472f] ...
	I0318 13:11:01.569484    2404 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 693a64f7472f"
	I0318 13:11:01.596416    2404 command_runner.go:130] > .:53
	I0318 13:11:01.596416    2404 command_runner.go:130] > [INFO] plugin/reload: Running configuration SHA512 = fa2b120b3007aba3ef70c07d73473d14986404bfc5683cceeb89e8950d5ace894afca7d43365324975f99d1b2da3d6473c722fe82795d39a4ee8b4b969b7aa71
	I0318 13:11:01.596416    2404 command_runner.go:130] > CoreDNS-1.10.1
	I0318 13:11:01.596416    2404 command_runner.go:130] > linux/amd64, go1.20, 055b2c3
	I0318 13:11:01.596416    2404 command_runner.go:130] > [INFO] 127.0.0.1:33426 - 38858 "HINFO IN 7345450223813584863.4065419873971828575. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.030234917s
	I0318 13:11:01.596622    2404 command_runner.go:130] > [INFO] 10.244.1.2:56777 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000311303s
	I0318 13:11:01.596622    2404 command_runner.go:130] > [INFO] 10.244.1.2:58024 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.098073876s
	I0318 13:11:01.596659    2404 command_runner.go:130] > [INFO] 10.244.1.2:57941 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd 60 0.154978742s
	I0318 13:11:01.596659    2404 command_runner.go:130] > [INFO] 10.244.1.2:42576 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 1.156414777s
	I0318 13:11:01.596659    2404 command_runner.go:130] > [INFO] 10.244.0.3:43391 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000152802s
	I0318 13:11:01.596700    2404 command_runner.go:130] > [INFO] 10.244.0.3:52523 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 140 0.000121101s
	I0318 13:11:01.596700    2404 command_runner.go:130] > [INFO] 10.244.0.3:36187 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd 60 0.000058401s
	I0318 13:11:01.596755    2404 command_runner.go:130] > [INFO] 10.244.0.3:33451 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,aa,rd,ra 140 0.000055s
	I0318 13:11:01.596755    2404 command_runner.go:130] > [INFO] 10.244.1.2:42180 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000097901s
	I0318 13:11:01.596755    2404 command_runner.go:130] > [INFO] 10.244.1.2:60616 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.142731308s
	I0318 13:11:01.596807    2404 command_runner.go:130] > [INFO] 10.244.1.2:45190 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000152502s
	I0318 13:11:01.596807    2404 command_runner.go:130] > [INFO] 10.244.1.2:55984 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000150102s
	I0318 13:11:01.596807    2404 command_runner.go:130] > [INFO] 10.244.1.2:47725 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.037970075s
	I0318 13:11:01.596807    2404 command_runner.go:130] > [INFO] 10.244.1.2:55620 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000104901s
	I0318 13:11:01.596880    2404 command_runner.go:130] > [INFO] 10.244.1.2:60349 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000189802s
	I0318 13:11:01.596880    2404 command_runner.go:130] > [INFO] 10.244.1.2:44081 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000089501s
	I0318 13:11:01.596880    2404 command_runner.go:130] > [INFO] 10.244.0.3:52580 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000182502s
	I0318 13:11:01.596923    2404 command_runner.go:130] > [INFO] 10.244.0.3:60982 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.0000727s
	I0318 13:11:01.596923    2404 command_runner.go:130] > [INFO] 10.244.0.3:53685 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000081s
	I0318 13:11:01.596963    2404 command_runner.go:130] > [INFO] 10.244.0.3:38117 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000127701s
	I0318 13:11:01.596963    2404 command_runner.go:130] > [INFO] 10.244.0.3:38455 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000117101s
	I0318 13:11:01.596963    2404 command_runner.go:130] > [INFO] 10.244.0.3:50629 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000121702s
	I0318 13:11:01.596963    2404 command_runner.go:130] > [INFO] 10.244.0.3:33301 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0000487s
	I0318 13:11:01.597023    2404 command_runner.go:130] > [INFO] 10.244.0.3:38091 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000138402s
	I0318 13:11:01.597023    2404 command_runner.go:130] > [INFO] 10.244.1.2:43364 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000192902s
	I0318 13:11:01.597023    2404 command_runner.go:130] > [INFO] 10.244.1.2:42609 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000060701s
	I0318 13:11:01.597067    2404 command_runner.go:130] > [INFO] 10.244.1.2:36443 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000051301s
	I0318 13:11:01.597067    2404 command_runner.go:130] > [INFO] 10.244.1.2:56414 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0000526s
	I0318 13:11:01.597117    2404 command_runner.go:130] > [INFO] 10.244.0.3:50774 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000137201s
	I0318 13:11:01.597117    2404 command_runner.go:130] > [INFO] 10.244.0.3:43237 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000196902s
	I0318 13:11:01.597170    2404 command_runner.go:130] > [INFO] 10.244.0.3:38831 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000059901s
	I0318 13:11:01.597170    2404 command_runner.go:130] > [INFO] 10.244.0.3:56163 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000122801s
	I0318 13:11:01.597218    2404 command_runner.go:130] > [INFO] 10.244.1.2:58305 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000209602s
	I0318 13:11:01.597218    2404 command_runner.go:130] > [INFO] 10.244.1.2:58291 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000151202s
	I0318 13:11:01.597218    2404 command_runner.go:130] > [INFO] 10.244.1.2:33227 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000184302s
	I0318 13:11:01.597254    2404 command_runner.go:130] > [INFO] 10.244.1.2:58179 - 5 "PTR IN 1.128.30.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000152102s
	I0318 13:11:01.597254    2404 command_runner.go:130] > [INFO] 10.244.0.3:46943 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000104101s
	I0318 13:11:01.597301    2404 command_runner.go:130] > [INFO] 10.244.0.3:58018 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000107001s
	I0318 13:11:01.597301    2404 command_runner.go:130] > [INFO] 10.244.0.3:35353 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000119601s
	I0318 13:11:01.597339    2404 command_runner.go:130] > [INFO] 10.244.0.3:58763 - 5 "PTR IN 1.128.30.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000075701s
	I0318 13:11:01.597371    2404 command_runner.go:130] > [INFO] SIGTERM: Shutting down servers then terminating
	I0318 13:11:01.597371    2404 command_runner.go:130] > [INFO] plugin/health: Going into lameduck mode for 5s
	I0318 13:11:04.113958    2404 api_server.go:253] Checking apiserver healthz at https://172.30.130.156:8443/healthz ...
	I0318 13:11:04.121944    2404 api_server.go:279] https://172.30.130.156:8443/healthz returned 200:
	ok
	I0318 13:11:04.122877    2404 round_trippers.go:463] GET https://172.30.130.156:8443/version
	I0318 13:11:04.122877    2404 round_trippers.go:469] Request Headers:
	I0318 13:11:04.122877    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:11:04.122877    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:11:04.124708    2404 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0318 13:11:04.124708    2404 round_trippers.go:577] Response Headers:
	I0318 13:11:04.124708    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:11:04.124708    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:11:04.124708    2404 round_trippers.go:580]     Content-Length: 264
	I0318 13:11:04.124708    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:11:04 GMT
	I0318 13:11:04.124708    2404 round_trippers.go:580]     Audit-Id: 44b1e23c-1635-4a1d-9fb9-f0a092479146
	I0318 13:11:04.125116    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:11:04.125116    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:11:04.125260    2404 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.4",
	  "gitCommit": "bae2c62678db2b5053817bc97181fcc2e8388103",
	  "gitTreeState": "clean",
	  "buildDate": "2023-11-15T16:48:54Z",
	  "goVersion": "go1.20.11",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0318 13:11:04.125463    2404 api_server.go:141] control plane version: v1.28.4
	I0318 13:11:04.125463    2404 api_server.go:131] duration metric: took 3.717492s to wait for apiserver health ...
	I0318 13:11:04.125520    2404 system_pods.go:43] waiting for kube-system pods to appear ...
	I0318 13:11:04.135764    2404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 13:11:04.159208    2404 command_runner.go:130] > fc4430c7fa20
	I0318 13:11:04.160008    2404 logs.go:276] 1 containers: [fc4430c7fa20]
	I0318 13:11:04.168360    2404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 13:11:04.196932    2404 command_runner.go:130] > 5f0887d1e691
	I0318 13:11:04.197216    2404 logs.go:276] 1 containers: [5f0887d1e691]
	I0318 13:11:04.206633    2404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 13:11:04.233460    2404 command_runner.go:130] > 3c3bc988c74c
	I0318 13:11:04.233460    2404 command_runner.go:130] > 693a64f7472f
	I0318 13:11:04.233848    2404 logs.go:276] 2 containers: [3c3bc988c74c 693a64f7472f]
	I0318 13:11:04.243192    2404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 13:11:04.266224    2404 command_runner.go:130] > 66ee8be9fada
	I0318 13:11:04.266505    2404 command_runner.go:130] > e4d42739ce0e
	I0318 13:11:04.266505    2404 logs.go:276] 2 containers: [66ee8be9fada e4d42739ce0e]
	I0318 13:11:04.274798    2404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 13:11:04.294660    2404 command_runner.go:130] > 163ccabc3882
	I0318 13:11:04.294660    2404 command_runner.go:130] > 9335855aab63
	I0318 13:11:04.294660    2404 logs.go:276] 2 containers: [163ccabc3882 9335855aab63]
	I0318 13:11:04.304214    2404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 13:11:04.330742    2404 command_runner.go:130] > 4ad6784a187d
	I0318 13:11:04.330742    2404 command_runner.go:130] > 7aa5cf4ec378
	I0318 13:11:04.330742    2404 logs.go:276] 2 containers: [4ad6784a187d 7aa5cf4ec378]
	I0318 13:11:04.340803    2404 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 13:11:04.365255    2404 command_runner.go:130] > c8e5ec25e910
	I0318 13:11:04.365255    2404 command_runner.go:130] > c4d7018ad23a
	I0318 13:11:04.365255    2404 logs.go:276] 2 containers: [c8e5ec25e910 c4d7018ad23a]
	I0318 13:11:04.365255    2404 logs.go:123] Gathering logs for kube-apiserver [fc4430c7fa20] ...
	I0318 13:11:04.365255    2404 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc4430c7fa20"
	I0318 13:11:04.389675    2404 command_runner.go:130] ! I0318 13:09:45.117348       1 options.go:220] external host was not specified, using 172.30.130.156
	I0318 13:11:04.389675    2404 command_runner.go:130] ! I0318 13:09:45.120803       1 server.go:148] Version: v1.28.4
	I0318 13:11:04.389675    2404 command_runner.go:130] ! I0318 13:09:45.120988       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 13:11:04.389756    2404 command_runner.go:130] ! I0318 13:09:45.770080       1 shared_informer.go:311] Waiting for caches to sync for node_authorizer
	I0318 13:11:04.389756    2404 command_runner.go:130] ! I0318 13:09:45.795010       1 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0318 13:11:04.389819    2404 command_runner.go:130] ! I0318 13:09:45.795318       1 plugins.go:161] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0318 13:11:04.389858    2404 command_runner.go:130] ! I0318 13:09:45.795878       1 instance.go:298] Using reconciler: lease
	I0318 13:11:04.389858    2404 command_runner.go:130] ! I0318 13:09:46.836486       1 handler.go:232] Adding GroupVersion apiextensions.k8s.io v1 to ResourceManager
	I0318 13:11:04.389858    2404 command_runner.go:130] ! W0318 13:09:46.836605       1 genericapiserver.go:744] Skipping API apiextensions.k8s.io/v1beta1 because it has no resources.
	I0318 13:11:04.389924    2404 command_runner.go:130] ! I0318 13:09:47.074638       1 handler.go:232] Adding GroupVersion  v1 to ResourceManager
	I0318 13:11:04.389924    2404 command_runner.go:130] ! I0318 13:09:47.074978       1 instance.go:709] API group "internal.apiserver.k8s.io" is not enabled, skipping.
	I0318 13:11:04.389924    2404 command_runner.go:130] ! I0318 13:09:47.452713       1 instance.go:709] API group "resource.k8s.io" is not enabled, skipping.
	I0318 13:11:04.389924    2404 command_runner.go:130] ! I0318 13:09:47.465860       1 handler.go:232] Adding GroupVersion authentication.k8s.io v1 to ResourceManager
	I0318 13:11:04.389924    2404 command_runner.go:130] ! W0318 13:09:47.465973       1 genericapiserver.go:744] Skipping API authentication.k8s.io/v1beta1 because it has no resources.
	I0318 13:11:04.390003    2404 command_runner.go:130] ! W0318 13:09:47.465981       1 genericapiserver.go:744] Skipping API authentication.k8s.io/v1alpha1 because it has no resources.
	I0318 13:11:04.390033    2404 command_runner.go:130] ! I0318 13:09:47.466706       1 handler.go:232] Adding GroupVersion authorization.k8s.io v1 to ResourceManager
	I0318 13:11:04.390033    2404 command_runner.go:130] ! W0318 13:09:47.466787       1 genericapiserver.go:744] Skipping API authorization.k8s.io/v1beta1 because it has no resources.
	I0318 13:11:04.390066    2404 command_runner.go:130] ! I0318 13:09:47.467862       1 handler.go:232] Adding GroupVersion autoscaling v2 to ResourceManager
	I0318 13:11:04.390066    2404 command_runner.go:130] ! I0318 13:09:47.468840       1 handler.go:232] Adding GroupVersion autoscaling v1 to ResourceManager
	I0318 13:11:04.390120    2404 command_runner.go:130] ! W0318 13:09:47.468926       1 genericapiserver.go:744] Skipping API autoscaling/v2beta1 because it has no resources.
	I0318 13:11:04.390148    2404 command_runner.go:130] ! W0318 13:09:47.468934       1 genericapiserver.go:744] Skipping API autoscaling/v2beta2 because it has no resources.
	I0318 13:11:04.390148    2404 command_runner.go:130] ! I0318 13:09:47.470928       1 handler.go:232] Adding GroupVersion batch v1 to ResourceManager
	I0318 13:11:04.390148    2404 command_runner.go:130] ! W0318 13:09:47.471074       1 genericapiserver.go:744] Skipping API batch/v1beta1 because it has no resources.
	I0318 13:11:04.390148    2404 command_runner.go:130] ! I0318 13:09:47.472121       1 handler.go:232] Adding GroupVersion certificates.k8s.io v1 to ResourceManager
	I0318 13:11:04.390235    2404 command_runner.go:130] ! W0318 13:09:47.472195       1 genericapiserver.go:744] Skipping API certificates.k8s.io/v1beta1 because it has no resources.
	I0318 13:11:04.390235    2404 command_runner.go:130] ! W0318 13:09:47.472202       1 genericapiserver.go:744] Skipping API certificates.k8s.io/v1alpha1 because it has no resources.
	I0318 13:11:04.390266    2404 command_runner.go:130] ! I0318 13:09:47.472773       1 handler.go:232] Adding GroupVersion coordination.k8s.io v1 to ResourceManager
	I0318 13:11:04.390266    2404 command_runner.go:130] ! W0318 13:09:47.472852       1 genericapiserver.go:744] Skipping API coordination.k8s.io/v1beta1 because it has no resources.
	I0318 13:11:04.390322    2404 command_runner.go:130] ! W0318 13:09:47.472898       1 genericapiserver.go:744] Skipping API discovery.k8s.io/v1beta1 because it has no resources.
	I0318 13:11:04.390349    2404 command_runner.go:130] ! I0318 13:09:47.473727       1 handler.go:232] Adding GroupVersion discovery.k8s.io v1 to ResourceManager
	I0318 13:11:04.390349    2404 command_runner.go:130] ! I0318 13:09:47.476475       1 handler.go:232] Adding GroupVersion networking.k8s.io v1 to ResourceManager
	I0318 13:11:04.390349    2404 command_runner.go:130] ! W0318 13:09:47.476612       1 genericapiserver.go:744] Skipping API networking.k8s.io/v1beta1 because it has no resources.
	I0318 13:11:04.390349    2404 command_runner.go:130] ! W0318 13:09:47.476620       1 genericapiserver.go:744] Skipping API networking.k8s.io/v1alpha1 because it has no resources.
	I0318 13:11:04.390433    2404 command_runner.go:130] ! I0318 13:09:47.477234       1 handler.go:232] Adding GroupVersion node.k8s.io v1 to ResourceManager
	I0318 13:11:04.390433    2404 command_runner.go:130] ! W0318 13:09:47.477314       1 genericapiserver.go:744] Skipping API node.k8s.io/v1beta1 because it has no resources.
	I0318 13:11:04.390464    2404 command_runner.go:130] ! W0318 13:09:47.477321       1 genericapiserver.go:744] Skipping API node.k8s.io/v1alpha1 because it has no resources.
	I0318 13:11:04.390464    2404 command_runner.go:130] ! I0318 13:09:47.478143       1 handler.go:232] Adding GroupVersion policy v1 to ResourceManager
	I0318 13:11:04.390499    2404 command_runner.go:130] ! W0318 13:09:47.478217       1 genericapiserver.go:744] Skipping API policy/v1beta1 because it has no resources.
	I0318 13:11:04.390499    2404 command_runner.go:130] ! I0318 13:09:47.480195       1 handler.go:232] Adding GroupVersion rbac.authorization.k8s.io v1 to ResourceManager
	I0318 13:11:04.390526    2404 command_runner.go:130] ! W0318 13:09:47.480271       1 genericapiserver.go:744] Skipping API rbac.authorization.k8s.io/v1beta1 because it has no resources.
	I0318 13:11:04.390526    2404 command_runner.go:130] ! W0318 13:09:47.480279       1 genericapiserver.go:744] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
	I0318 13:11:04.390582    2404 command_runner.go:130] ! I0318 13:09:47.480731       1 handler.go:232] Adding GroupVersion scheduling.k8s.io v1 to ResourceManager
	I0318 13:11:04.390582    2404 command_runner.go:130] ! W0318 13:09:47.480812       1 genericapiserver.go:744] Skipping API scheduling.k8s.io/v1beta1 because it has no resources.
	I0318 13:11:04.390611    2404 command_runner.go:130] ! W0318 13:09:47.480819       1 genericapiserver.go:744] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
	I0318 13:11:04.390611    2404 command_runner.go:130] ! I0318 13:09:47.493837       1 handler.go:232] Adding GroupVersion storage.k8s.io v1 to ResourceManager
	I0318 13:11:04.390642    2404 command_runner.go:130] ! W0318 13:09:47.494098       1 genericapiserver.go:744] Skipping API storage.k8s.io/v1beta1 because it has no resources.
	I0318 13:11:04.390642    2404 command_runner.go:130] ! W0318 13:09:47.494198       1 genericapiserver.go:744] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
	I0318 13:11:04.390678    2404 command_runner.go:130] ! I0318 13:09:47.499689       1 handler.go:232] Adding GroupVersion flowcontrol.apiserver.k8s.io v1beta3 to ResourceManager
	I0318 13:11:04.390678    2404 command_runner.go:130] ! I0318 13:09:47.506631       1 handler.go:232] Adding GroupVersion flowcontrol.apiserver.k8s.io v1beta2 to ResourceManager
	I0318 13:11:04.390705    2404 command_runner.go:130] ! W0318 13:09:47.506664       1 genericapiserver.go:744] Skipping API flowcontrol.apiserver.k8s.io/v1beta1 because it has no resources.
	I0318 13:11:04.390705    2404 command_runner.go:130] ! W0318 13:09:47.506671       1 genericapiserver.go:744] Skipping API flowcontrol.apiserver.k8s.io/v1alpha1 because it has no resources.
	I0318 13:11:04.390705    2404 command_runner.go:130] ! I0318 13:09:47.512288       1 handler.go:232] Adding GroupVersion apps v1 to ResourceManager
	I0318 13:11:04.390705    2404 command_runner.go:130] ! W0318 13:09:47.512371       1 genericapiserver.go:744] Skipping API apps/v1beta2 because it has no resources.
	I0318 13:11:04.390758    2404 command_runner.go:130] ! W0318 13:09:47.512378       1 genericapiserver.go:744] Skipping API apps/v1beta1 because it has no resources.
	I0318 13:11:04.390788    2404 command_runner.go:130] ! I0318 13:09:47.513443       1 handler.go:232] Adding GroupVersion admissionregistration.k8s.io v1 to ResourceManager
	I0318 13:11:04.390819    2404 command_runner.go:130] ! W0318 13:09:47.513547       1 genericapiserver.go:744] Skipping API admissionregistration.k8s.io/v1beta1 because it has no resources.
	I0318 13:11:04.390819    2404 command_runner.go:130] ! W0318 13:09:47.513557       1 genericapiserver.go:744] Skipping API admissionregistration.k8s.io/v1alpha1 because it has no resources.
	I0318 13:11:04.390819    2404 command_runner.go:130] ! I0318 13:09:47.514339       1 handler.go:232] Adding GroupVersion events.k8s.io v1 to ResourceManager
	I0318 13:11:04.390855    2404 command_runner.go:130] ! W0318 13:09:47.514435       1 genericapiserver.go:744] Skipping API events.k8s.io/v1beta1 because it has no resources.
	I0318 13:11:04.390881    2404 command_runner.go:130] ! I0318 13:09:47.536002       1 handler.go:232] Adding GroupVersion apiregistration.k8s.io v1 to ResourceManager
	I0318 13:11:04.390881    2404 command_runner.go:130] ! W0318 13:09:47.536061       1 genericapiserver.go:744] Skipping API apiregistration.k8s.io/v1beta1 because it has no resources.
	I0318 13:11:04.390881    2404 command_runner.go:130] ! I0318 13:09:48.221475       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0318 13:11:04.390881    2404 command_runner.go:130] ! I0318 13:09:48.221960       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0318 13:11:04.390985    2404 command_runner.go:130] ! I0318 13:09:48.222438       1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	I0318 13:11:04.390985    2404 command_runner.go:130] ! I0318 13:09:48.222942       1 secure_serving.go:213] Serving securely on [::]:8443
	I0318 13:11:04.391017    2404 command_runner.go:130] ! I0318 13:09:48.223022       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0318 13:11:04.391017    2404 command_runner.go:130] ! I0318 13:09:48.223440       1 controller.go:78] Starting OpenAPI AggregationController
	I0318 13:11:04.391055    2404 command_runner.go:130] ! I0318 13:09:48.224862       1 gc_controller.go:78] Starting apiserver lease garbage collector
	I0318 13:11:04.391083    2404 command_runner.go:130] ! I0318 13:09:48.225271       1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
	I0318 13:11:04.391083    2404 command_runner.go:130] ! I0318 13:09:48.225417       1 shared_informer.go:311] Waiting for caches to sync for cluster_authentication_trust_controller
	I0318 13:11:04.391083    2404 command_runner.go:130] ! I0318 13:09:48.225564       1 apf_controller.go:372] Starting API Priority and Fairness config controller
	I0318 13:11:04.391083    2404 command_runner.go:130] ! I0318 13:09:48.228940       1 gc_controller.go:78] Starting apiserver lease garbage collector
	I0318 13:11:04.391143    2404 command_runner.go:130] ! I0318 13:09:48.229462       1 controller.go:116] Starting legacy_token_tracking_controller
	I0318 13:11:04.391143    2404 command_runner.go:130] ! I0318 13:09:48.229644       1 shared_informer.go:311] Waiting for caches to sync for configmaps
	I0318 13:11:04.391172    2404 command_runner.go:130] ! I0318 13:09:48.230522       1 system_namespaces_controller.go:67] Starting system namespaces controller
	I0318 13:11:04.391172    2404 command_runner.go:130] ! I0318 13:09:48.230832       1 controller.go:80] Starting OpenAPI V3 AggregationController
	I0318 13:11:04.391202    2404 command_runner.go:130] ! I0318 13:09:48.231097       1 aggregator.go:164] waiting for initial CRD sync...
	I0318 13:11:04.391202    2404 command_runner.go:130] ! I0318 13:09:48.231395       1 customresource_discovery_controller.go:289] Starting DiscoveryController
	I0318 13:11:04.391239    2404 command_runner.go:130] ! I0318 13:09:48.231642       1 available_controller.go:423] Starting AvailableConditionController
	I0318 13:11:04.391267    2404 command_runner.go:130] ! I0318 13:09:48.231846       1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
	I0318 13:11:04.391267    2404 command_runner.go:130] ! I0318 13:09:48.232024       1 dynamic_serving_content.go:132] "Starting controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	I0318 13:11:04.391267    2404 command_runner.go:130] ! I0318 13:09:48.232223       1 apiservice_controller.go:97] Starting APIServiceRegistrationController
	I0318 13:11:04.391324    2404 command_runner.go:130] ! I0318 13:09:48.232638       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
	I0318 13:11:04.391324    2404 command_runner.go:130] ! I0318 13:09:48.233228       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0318 13:11:04.391353    2404 command_runner.go:130] ! I0318 13:09:48.233501       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0318 13:11:04.391383    2404 command_runner.go:130] ! I0318 13:09:48.242598       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0318 13:11:04.391383    2404 command_runner.go:130] ! I0318 13:09:48.242850       1 shared_informer.go:311] Waiting for caches to sync for crd-autoregister
	I0318 13:11:04.391421    2404 command_runner.go:130] ! I0318 13:09:48.243085       1 controller.go:134] Starting OpenAPI controller
	I0318 13:11:04.391421    2404 command_runner.go:130] ! I0318 13:09:48.243289       1 controller.go:85] Starting OpenAPI V3 controller
	I0318 13:11:04.391421    2404 command_runner.go:130] ! I0318 13:09:48.243558       1 naming_controller.go:291] Starting NamingConditionController
	I0318 13:11:04.391464    2404 command_runner.go:130] ! I0318 13:09:48.243852       1 establishing_controller.go:76] Starting EstablishingController
	I0318 13:11:04.391464    2404 command_runner.go:130] ! I0318 13:09:48.244899       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0318 13:11:04.391520    2404 command_runner.go:130] ! I0318 13:09:48.245178       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0318 13:11:04.391520    2404 command_runner.go:130] ! I0318 13:09:48.245796       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0318 13:11:04.391548    2404 command_runner.go:130] ! I0318 13:09:48.231958       1 handler_discovery.go:412] Starting ResourceDiscoveryManager
	I0318 13:11:04.391578    2404 command_runner.go:130] ! I0318 13:09:48.403749       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0318 13:11:04.391578    2404 command_runner.go:130] ! I0318 13:09:48.426183       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I0318 13:11:04.391613    2404 command_runner.go:130] ! I0318 13:09:48.426213       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I0318 13:11:04.391613    2404 command_runner.go:130] ! I0318 13:09:48.426382       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0318 13:11:04.391641    2404 command_runner.go:130] ! I0318 13:09:48.432175       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0318 13:11:04.391641    2404 command_runner.go:130] ! I0318 13:09:48.433073       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0318 13:11:04.391641    2404 command_runner.go:130] ! I0318 13:09:48.433297       1 shared_informer.go:318] Caches are synced for configmaps
	I0318 13:11:04.391641    2404 command_runner.go:130] ! I0318 13:09:48.444484       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0318 13:11:04.391699    2404 command_runner.go:130] ! I0318 13:09:48.444708       1 aggregator.go:166] initial CRD sync complete...
	I0318 13:11:04.391699    2404 command_runner.go:130] ! I0318 13:09:48.444961       1 autoregister_controller.go:141] Starting autoregister controller
	I0318 13:11:04.391699    2404 command_runner.go:130] ! I0318 13:09:48.445263       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0318 13:11:04.391699    2404 command_runner.go:130] ! I0318 13:09:48.446443       1 cache.go:39] Caches are synced for autoregister controller
	I0318 13:11:04.391772    2404 command_runner.go:130] ! I0318 13:09:48.471536       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0318 13:11:04.391772    2404 command_runner.go:130] ! I0318 13:09:49.257477       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0318 13:11:04.391819    2404 command_runner.go:130] ! W0318 13:09:49.806994       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [172.30.130.156]
	I0318 13:11:04.391819    2404 command_runner.go:130] ! I0318 13:09:49.809655       1 controller.go:624] quota admission added evaluator for: endpoints
	I0318 13:11:04.391819    2404 command_runner.go:130] ! I0318 13:09:49.821460       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0318 13:11:04.391819    2404 command_runner.go:130] ! I0318 13:09:51.622752       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0318 13:11:04.391897    2404 command_runner.go:130] ! I0318 13:09:51.799195       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0318 13:11:04.391897    2404 command_runner.go:130] ! I0318 13:09:51.812022       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0318 13:11:04.391953    2404 command_runner.go:130] ! I0318 13:09:51.930541       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0318 13:11:04.391953    2404 command_runner.go:130] ! I0318 13:09:51.942099       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0318 13:11:04.400075    2404 logs.go:123] Gathering logs for kube-scheduler [66ee8be9fada] ...
	I0318 13:11:04.400075    2404 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66ee8be9fada"
	I0318 13:11:04.427558    2404 command_runner.go:130] ! I0318 13:09:45.699415       1 serving.go:348] Generated self-signed cert in-memory
	I0318 13:11:04.427558    2404 command_runner.go:130] ! W0318 13:09:48.342100       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	I0318 13:11:04.427558    2404 command_runner.go:130] ! W0318 13:09:48.342243       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	I0318 13:11:04.427558    2404 command_runner.go:130] ! W0318 13:09:48.342324       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	I0318 13:11:04.427558    2404 command_runner.go:130] ! W0318 13:09:48.342374       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0318 13:11:04.427558    2404 command_runner.go:130] ! I0318 13:09:48.402495       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.4"
	I0318 13:11:04.427558    2404 command_runner.go:130] ! I0318 13:09:48.402540       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 13:11:04.427558    2404 command_runner.go:130] ! I0318 13:09:48.407228       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0318 13:11:04.427558    2404 command_runner.go:130] ! I0318 13:09:48.409117       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0318 13:11:04.427558    2404 command_runner.go:130] ! I0318 13:09:48.410197       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0318 13:11:04.427558    2404 command_runner.go:130] ! I0318 13:09:48.410738       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0318 13:11:04.427558    2404 command_runner.go:130] ! I0318 13:09:48.510577       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0318 13:11:04.429032    2404 logs.go:123] Gathering logs for Docker ...
	I0318 13:11:04.429032    2404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 13:11:04.462635    2404 command_runner.go:130] > Mar 18 13:08:19 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0318 13:11:04.463174    2404 command_runner.go:130] > Mar 18 13:08:19 minikube cri-dockerd[222]: time="2024-03-18T13:08:19Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0318 13:11:04.463174    2404 command_runner.go:130] > Mar 18 13:08:19 minikube cri-dockerd[222]: time="2024-03-18T13:08:19Z" level=info msg="Start docker client with request timeout 0s"
	I0318 13:11:04.463174    2404 command_runner.go:130] > Mar 18 13:08:19 minikube cri-dockerd[222]: time="2024-03-18T13:08:19Z" level=fatal msg="failed to get docker version: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0318 13:11:04.463174    2404 command_runner.go:130] > Mar 18 13:08:19 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0318 13:11:04.463174    2404 command_runner.go:130] > Mar 18 13:08:19 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0318 13:11:04.463174    2404 command_runner.go:130] > Mar 18 13:08:19 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0318 13:11:04.463174    2404 command_runner.go:130] > Mar 18 13:08:22 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 1.
	I0318 13:11:04.463299    2404 command_runner.go:130] > Mar 18 13:08:22 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0318 13:11:04.463299    2404 command_runner.go:130] > Mar 18 13:08:22 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0318 13:11:04.463299    2404 command_runner.go:130] > Mar 18 13:08:22 minikube cri-dockerd[412]: time="2024-03-18T13:08:22Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0318 13:11:04.463299    2404 command_runner.go:130] > Mar 18 13:08:22 minikube cri-dockerd[412]: time="2024-03-18T13:08:22Z" level=info msg="Start docker client with request timeout 0s"
	I0318 13:11:04.463299    2404 command_runner.go:130] > Mar 18 13:08:22 minikube cri-dockerd[412]: time="2024-03-18T13:08:22Z" level=fatal msg="failed to get docker version: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0318 13:11:04.463299    2404 command_runner.go:130] > Mar 18 13:08:22 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0318 13:11:04.463423    2404 command_runner.go:130] > Mar 18 13:08:22 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0318 13:11:04.463423    2404 command_runner.go:130] > Mar 18 13:08:22 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0318 13:11:04.463423    2404 command_runner.go:130] > Mar 18 13:08:24 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 2.
	I0318 13:11:04.463499    2404 command_runner.go:130] > Mar 18 13:08:24 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0318 13:11:04.463499    2404 command_runner.go:130] > Mar 18 13:08:24 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0318 13:11:04.463499    2404 command_runner.go:130] > Mar 18 13:08:24 minikube cri-dockerd[434]: time="2024-03-18T13:08:24Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0318 13:11:04.463499    2404 command_runner.go:130] > Mar 18 13:08:24 minikube cri-dockerd[434]: time="2024-03-18T13:08:24Z" level=info msg="Start docker client with request timeout 0s"
	I0318 13:11:04.463499    2404 command_runner.go:130] > Mar 18 13:08:24 minikube cri-dockerd[434]: time="2024-03-18T13:08:24Z" level=fatal msg="failed to get docker version: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0318 13:11:04.463499    2404 command_runner.go:130] > Mar 18 13:08:24 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0318 13:11:04.463499    2404 command_runner.go:130] > Mar 18 13:08:24 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0318 13:11:04.463641    2404 command_runner.go:130] > Mar 18 13:08:24 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0318 13:11:04.463641    2404 command_runner.go:130] > Mar 18 13:08:26 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 3.
	I0318 13:11:04.463641    2404 command_runner.go:130] > Mar 18 13:08:26 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0318 13:11:04.463641    2404 command_runner.go:130] > Mar 18 13:08:26 minikube systemd[1]: cri-docker.service: Start request repeated too quickly.
	I0318 13:11:04.463641    2404 command_runner.go:130] > Mar 18 13:08:26 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0318 13:11:04.463641    2404 command_runner.go:130] > Mar 18 13:08:26 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0318 13:11:04.463641    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 systemd[1]: Starting Docker Application Container Engine...
	I0318 13:11:04.463641    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[662]: time="2024-03-18T13:09:08.926008208Z" level=info msg="Starting up"
	I0318 13:11:04.463641    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[662]: time="2024-03-18T13:09:08.927042019Z" level=info msg="containerd not running, starting managed containerd"
	I0318 13:11:04.463641    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[662]: time="2024-03-18T13:09:08.928263831Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=668
	I0318 13:11:04.463641    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.958180831Z" level=info msg="starting containerd" revision=dcf2847247e18caba8dce86522029642f60fe96b version=v1.7.14
	I0318 13:11:04.463641    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.981644866Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0318 13:11:04.463641    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.981729667Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0318 13:11:04.463641    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.981890169Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0318 13:11:04.463641    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.982007470Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0318 13:11:04.463641    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.982683977Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0318 13:11:04.463641    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.982866878Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0318 13:11:04.463641    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.983040880Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0318 13:11:04.463641    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.983180882Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0318 13:11:04.464178    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.983201082Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0318 13:11:04.464178    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.983210682Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0318 13:11:04.464178    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.983772288Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0318 13:11:04.464178    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.984603896Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0318 13:11:04.464178    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.987157222Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0318 13:11:04.464178    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.987245222Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0318 13:11:04.464178    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.987380024Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0318 13:11:04.464419    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.987459025Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0318 13:11:04.464419    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.988076231Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0318 13:11:04.464419    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.988215332Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0318 13:11:04.464553    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.988231932Z" level=info msg="metadata content store policy set" policy=shared
	I0318 13:11:04.464553    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.994386894Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0318 13:11:04.464553    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.994536096Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0318 13:11:04.464553    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.994574296Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0318 13:11:04.464706    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.994587696Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0318 13:11:04.464706    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.994605296Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0318 13:11:04.464706    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.994669597Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0318 13:11:04.464706    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.995239203Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0318 13:11:04.464799    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.995378304Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0318 13:11:04.464799    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.995441205Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0318 13:11:04.464799    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.995564406Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0318 13:11:04.464799    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.995751508Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0318 13:11:04.464799    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.995819808Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0318 13:11:04.464973    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.995841009Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0318 13:11:04.464973    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.995857509Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0318 13:11:04.464973    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.995870509Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0318 13:11:04.464973    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.995903509Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0318 13:11:04.465099    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.995925809Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0318 13:11:04.465099    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.995942710Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0318 13:11:04.465129    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.995963610Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0318 13:11:04.465193    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.995980410Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0318 13:11:04.465193    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.996091811Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0318 13:11:04.465193    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.996121511Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0318 13:11:04.465193    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.996134612Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0318 13:11:04.465257    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.996151212Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0318 13:11:04.465297    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.996165012Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0318 13:11:04.465332    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.996179412Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0318 13:11:04.465377    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.996194912Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0318 13:11:04.465377    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.996291913Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0318 13:11:04.465415    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.996404914Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0318 13:11:04.465415    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.996427114Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0318 13:11:04.465415    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.996445915Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0318 13:11:04.465511    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.996468515Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0318 13:11:04.465511    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.996497915Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0318 13:11:04.465511    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.996538416Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0318 13:11:04.465562    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.996560016Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0318 13:11:04.465562    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.997036721Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0318 13:11:04.465610    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.997287923Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	I0318 13:11:04.465610    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.997398924Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0318 13:11:04.465610    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.997518125Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	I0318 13:11:04.465689    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.998045931Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0318 13:11:04.465725    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.998612736Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0318 13:11:04.465755    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.998643637Z" level=info msg="NRI interface is disabled by configuration."
	I0318 13:11:04.465755    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.999395544Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0318 13:11:04.465820    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.999606346Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0318 13:11:04.465861    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.999683147Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0318 13:11:04.465861    2404 command_runner.go:130] > Mar 18 13:09:08 multinode-894400 dockerd[668]: time="2024-03-18T13:09:08.999765648Z" level=info msg="containerd successfully booted in 0.044672s"
	I0318 13:11:04.465861    2404 command_runner.go:130] > Mar 18 13:09:09 multinode-894400 dockerd[662]: time="2024-03-18T13:09:09.982989696Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0318 13:11:04.465897    2404 command_runner.go:130] > Mar 18 13:09:10 multinode-894400 dockerd[662]: time="2024-03-18T13:09:10.138351976Z" level=info msg="Loading containers: start."
	I0318 13:11:04.465944    2404 command_runner.go:130] > Mar 18 13:09:10 multinode-894400 dockerd[662]: time="2024-03-18T13:09:10.545129368Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0318 13:11:04.465944    2404 command_runner.go:130] > Mar 18 13:09:10 multinode-894400 dockerd[662]: time="2024-03-18T13:09:10.626119356Z" level=info msg="Loading containers: done."
	I0318 13:11:04.465944    2404 command_runner.go:130] > Mar 18 13:09:10 multinode-894400 dockerd[662]: time="2024-03-18T13:09:10.653541890Z" level=info msg="Docker daemon" commit=061aa95 containerd-snapshotter=false storage-driver=overlay2 version=25.0.4
	I0318 13:11:04.465944    2404 command_runner.go:130] > Mar 18 13:09:10 multinode-894400 dockerd[662]: time="2024-03-18T13:09:10.654242899Z" level=info msg="Daemon has completed initialization"
	I0318 13:11:04.465944    2404 command_runner.go:130] > Mar 18 13:09:10 multinode-894400 dockerd[662]: time="2024-03-18T13:09:10.702026381Z" level=info msg="API listen on /var/run/docker.sock"
	I0318 13:11:04.465944    2404 command_runner.go:130] > Mar 18 13:09:10 multinode-894400 systemd[1]: Started Docker Application Container Engine.
	I0318 13:11:04.465944    2404 command_runner.go:130] > Mar 18 13:09:10 multinode-894400 dockerd[662]: time="2024-03-18T13:09:10.704980317Z" level=info msg="API listen on [::]:2376"
	I0318 13:11:04.465944    2404 command_runner.go:130] > Mar 18 13:09:35 multinode-894400 systemd[1]: Stopping Docker Application Container Engine...
	I0318 13:11:04.465944    2404 command_runner.go:130] > Mar 18 13:09:35 multinode-894400 dockerd[662]: time="2024-03-18T13:09:35.118112316Z" level=info msg="Processing signal 'terminated'"
	I0318 13:11:04.465944    2404 command_runner.go:130] > Mar 18 13:09:35 multinode-894400 dockerd[662]: time="2024-03-18T13:09:35.120561724Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0318 13:11:04.465944    2404 command_runner.go:130] > Mar 18 13:09:35 multinode-894400 dockerd[662]: time="2024-03-18T13:09:35.120708425Z" level=info msg="Daemon shutdown complete"
	I0318 13:11:04.465944    2404 command_runner.go:130] > Mar 18 13:09:35 multinode-894400 dockerd[662]: time="2024-03-18T13:09:35.120817525Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0318 13:11:04.465944    2404 command_runner.go:130] > Mar 18 13:09:35 multinode-894400 dockerd[662]: time="2024-03-18T13:09:35.120965826Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0318 13:11:04.465944    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 systemd[1]: docker.service: Deactivated successfully.
	I0318 13:11:04.465944    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 systemd[1]: Stopped Docker Application Container Engine.
	I0318 13:11:04.465944    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 systemd[1]: Starting Docker Application Container Engine...
	I0318 13:11:04.465944    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1052]: time="2024-03-18T13:09:36.188961030Z" level=info msg="Starting up"
	I0318 13:11:04.465944    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1052]: time="2024-03-18T13:09:36.190214934Z" level=info msg="containerd not running, starting managed containerd"
	I0318 13:11:04.465944    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1052]: time="2024-03-18T13:09:36.191301438Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1058
	I0318 13:11:04.465944    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.220111635Z" level=info msg="starting containerd" revision=dcf2847247e18caba8dce86522029642f60fe96b version=v1.7.14
	I0318 13:11:04.465944    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.244480717Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0318 13:11:04.465944    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.244510717Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0318 13:11:04.465944    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.244539917Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0318 13:11:04.465944    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.244552117Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0318 13:11:04.465944    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.244588817Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0318 13:11:04.465944    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.244601217Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0318 13:11:04.466475    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.244707818Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0318 13:11:04.466475    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.244791318Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0318 13:11:04.466475    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.244809418Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0318 13:11:04.466475    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.244818018Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0318 13:11:04.466475    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.244838218Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0318 13:11:04.466475    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.244975219Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0318 13:11:04.466612    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.248195830Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0318 13:11:04.466612    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.248302930Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0318 13:11:04.466659    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.248446530Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0318 13:11:04.466699    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.248548631Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0318 13:11:04.466751    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.248576331Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0318 13:11:04.466751    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.248593831Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0318 13:11:04.466751    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.248604331Z" level=info msg="metadata content store policy set" policy=shared
	I0318 13:11:04.466751    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.249888435Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0318 13:11:04.466829    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.249971436Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0318 13:11:04.466829    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.250624738Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0318 13:11:04.466829    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.250745538Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0318 13:11:04.466829    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.250859739Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0318 13:11:04.466829    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.251093339Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0318 13:11:04.466829    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.252590644Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0318 13:11:04.466942    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.252685145Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0318 13:11:04.466942    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.252703545Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0318 13:11:04.466942    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.252722945Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0318 13:11:04.466942    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.252736845Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0318 13:11:04.467049    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.252749745Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0318 13:11:04.467049    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.252793045Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0318 13:11:04.467049    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.252998846Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0318 13:11:04.467049    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.253020946Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0318 13:11:04.467049    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.253065546Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0318 13:11:04.467151    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.253080846Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0318 13:11:04.467151    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.253090746Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0318 13:11:04.467151    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.253177146Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0318 13:11:04.467151    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.253201547Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0318 13:11:04.467246    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.253215147Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0318 13:11:04.467246    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.253229847Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0318 13:11:04.467246    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.253243047Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0318 13:11:04.467348    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.253257847Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0318 13:11:04.467348    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.253270347Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0318 13:11:04.467448    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.253284147Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0318 13:11:04.467448    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.253297547Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0318 13:11:04.467448    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.253313047Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0318 13:11:04.467448    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.253331047Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0318 13:11:04.467563    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.253344647Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0318 13:11:04.467563    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.253357947Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0318 13:11:04.467563    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.253374747Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0318 13:11:04.467563    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.253395147Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0318 13:11:04.467768    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.253407847Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0318 13:11:04.467768    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.253420947Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0318 13:11:04.467768    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.253503448Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0318 13:11:04.467881    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.253519848Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	I0318 13:11:04.467881    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.253532848Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0318 13:11:04.467881    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.253542748Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	I0318 13:11:04.467881    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.253613548Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0318 13:11:04.467881    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.253652648Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0318 13:11:04.467881    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.253668048Z" level=info msg="NRI interface is disabled by configuration."
	I0318 13:11:04.467881    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.254026949Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0318 13:11:04.468107    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.254474051Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0318 13:11:04.468107    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.254684152Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0318 13:11:04.468107    2404 command_runner.go:130] > Mar 18 13:09:36 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:36.254775452Z" level=info msg="containerd successfully booted in 0.035926s"
	I0318 13:11:04.468107    2404 command_runner.go:130] > Mar 18 13:09:37 multinode-894400 dockerd[1052]: time="2024-03-18T13:09:37.234846559Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0318 13:11:04.468107    2404 command_runner.go:130] > Mar 18 13:09:37 multinode-894400 dockerd[1052]: time="2024-03-18T13:09:37.265734263Z" level=info msg="Loading containers: start."
	I0318 13:11:04.468107    2404 command_runner.go:130] > Mar 18 13:09:37 multinode-894400 dockerd[1052]: time="2024-03-18T13:09:37.543045299Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0318 13:11:04.468107    2404 command_runner.go:130] > Mar 18 13:09:37 multinode-894400 dockerd[1052]: time="2024-03-18T13:09:37.620368360Z" level=info msg="Loading containers: done."
	I0318 13:11:04.468107    2404 command_runner.go:130] > Mar 18 13:09:37 multinode-894400 dockerd[1052]: time="2024-03-18T13:09:37.642056833Z" level=info msg="Docker daemon" commit=061aa95 containerd-snapshotter=false storage-driver=overlay2 version=25.0.4
	I0318 13:11:04.468107    2404 command_runner.go:130] > Mar 18 13:09:37 multinode-894400 dockerd[1052]: time="2024-03-18T13:09:37.642227734Z" level=info msg="Daemon has completed initialization"
	I0318 13:11:04.468107    2404 command_runner.go:130] > Mar 18 13:09:37 multinode-894400 dockerd[1052]: time="2024-03-18T13:09:37.686175082Z" level=info msg="API listen on /var/run/docker.sock"
	I0318 13:11:04.468107    2404 command_runner.go:130] > Mar 18 13:09:37 multinode-894400 systemd[1]: Started Docker Application Container Engine.
	I0318 13:11:04.468107    2404 command_runner.go:130] > Mar 18 13:09:37 multinode-894400 dockerd[1052]: time="2024-03-18T13:09:37.687135485Z" level=info msg="API listen on [::]:2376"
	I0318 13:11:04.468107    2404 command_runner.go:130] > Mar 18 13:09:38 multinode-894400 systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0318 13:11:04.468107    2404 command_runner.go:130] > Mar 18 13:09:38 multinode-894400 cri-dockerd[1278]: time="2024-03-18T13:09:38Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0318 13:11:04.468107    2404 command_runner.go:130] > Mar 18 13:09:38 multinode-894400 cri-dockerd[1278]: time="2024-03-18T13:09:38Z" level=info msg="Start docker client with request timeout 0s"
	I0318 13:11:04.468107    2404 command_runner.go:130] > Mar 18 13:09:38 multinode-894400 cri-dockerd[1278]: time="2024-03-18T13:09:38Z" level=info msg="Hairpin mode is set to hairpin-veth"
	I0318 13:11:04.468107    2404 command_runner.go:130] > Mar 18 13:09:38 multinode-894400 cri-dockerd[1278]: time="2024-03-18T13:09:38Z" level=info msg="Loaded network plugin cni"
	I0318 13:11:04.468107    2404 command_runner.go:130] > Mar 18 13:09:38 multinode-894400 cri-dockerd[1278]: time="2024-03-18T13:09:38Z" level=info msg="Docker cri networking managed by network plugin cni"
	I0318 13:11:04.468107    2404 command_runner.go:130] > Mar 18 13:09:38 multinode-894400 cri-dockerd[1278]: time="2024-03-18T13:09:38Z" level=info msg="Docker Info: &{ID:5695bce5-a75b-48a7-87b1-d9b6b787473a Containers:18 ContainersRunning:0 ContainersPaused:0 ContainersStopped:18 Images:10 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:[] Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:[] Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6tables:true Debug:false NFd:26 OomKillDisable:false NGoroutines:52 SystemTime:2024-03-18T13:09:38.671342607Z LoggingDriver:json-file CgroupDriver:cgroupfs CgroupVersion:2 NEventsListener:0 KernelVersion:5.10.207 OperatingSystem:Buildroot 2023.02.9 OSVersion:2023.02.9 OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:0xc00034fe30 NCPU:2 MemTotal:2216210432 GenericResources:[] DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:multinode-894400 Labels:[provider=hyperv] ExperimentalBuild:false ServerVersion:25.0.4 ClusterStore: ClusterAdvertise: Runtimes:map[io.containerd.runc.v2:{Path:runc Args:[] Shim:<nil>} runc:{Path:runc Args:[] Shim:<nil>}] DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:[] Nodes:0 Managers:0 Cluster:<nil> Warnings:[]} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dcf2847247e18caba8dce86522029642f60fe96b Expected:dcf2847247e18caba8dce86522029642f60fe96b} RuncCommit:{ID:51d5e94601ceffbbd85688df1c928ecccbfa4685 Expected:51d5e94601ceffbbd85688df1c928ecccbfa4685} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=builtin name=cgroupns] ProductLicense:Community Engine DefaultAddressPools:[] Warnings:[]}"
	I0318 13:11:04.468107    2404 command_runner.go:130] > Mar 18 13:09:38 multinode-894400 cri-dockerd[1278]: time="2024-03-18T13:09:38Z" level=info msg="Setting cgroupDriver cgroupfs"
	I0318 13:11:04.468107    2404 command_runner.go:130] > Mar 18 13:09:38 multinode-894400 cri-dockerd[1278]: time="2024-03-18T13:09:38Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	I0318 13:11:04.468107    2404 command_runner.go:130] > Mar 18 13:09:38 multinode-894400 cri-dockerd[1278]: time="2024-03-18T13:09:38Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	I0318 13:11:04.468107    2404 command_runner.go:130] > Mar 18 13:09:38 multinode-894400 cri-dockerd[1278]: time="2024-03-18T13:09:38Z" level=info msg="Start cri-dockerd grpc backend"
	I0318 13:11:04.468107    2404 command_runner.go:130] > Mar 18 13:09:38 multinode-894400 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	I0318 13:11:04.468637    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 cri-dockerd[1278]: time="2024-03-18T13:09:43Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-5dd5756b68-456tm_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"d001e299e996b74f090c73f4e98605ef0d323a96826d23424e724ca8e7fe466a\""
	I0318 13:11:04.468637    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 cri-dockerd[1278]: time="2024-03-18T13:09:43Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"busybox-5b5d89c9d6-c2997_default\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"a23c1189be7c39f22c8a59ba41b198519894c9783ecad8dcda79fb8a76f05254\""
	I0318 13:11:04.468637    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:43.791205184Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 13:11:04.468637    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:43.791356085Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 13:11:04.468637    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:43.791396985Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:11:04.468637    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:43.791577685Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:11:04.468637    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:43.838312843Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 13:11:04.468637    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:43.838494344Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 13:11:04.468817    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:43.838510044Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:11:04.468817    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:43.838727044Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:11:04.468817    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:43.951016023Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 13:11:04.468817    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:43.951141424Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 13:11:04.468817    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:43.951152624Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:11:04.469040    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:43.951369125Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:11:04.469083    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 cri-dockerd[1278]: time="2024-03-18T13:09:43Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/066206d4c52cb784fe7c2001b5e196c6e3521560c412808e8d9ddf742aa008e4/resolv.conf as [nameserver 172.30.128.1]"
	I0318 13:11:04.469083    2404 command_runner.go:130] > Mar 18 13:09:44 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:44.020194457Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 13:11:04.469083    2404 command_runner.go:130] > Mar 18 13:09:44 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:44.020684858Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 13:11:04.469083    2404 command_runner.go:130] > Mar 18 13:09:44 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:44.023241167Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:11:04.469159    2404 command_runner.go:130] > Mar 18 13:09:44 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:44.023675469Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:11:04.469159    2404 command_runner.go:130] > Mar 18 13:09:44 multinode-894400 cri-dockerd[1278]: time="2024-03-18T13:09:44Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/bc7236a19957e321c1961c944824f2b4624bd7a289ab4ecefe33a08d4af88e2b/resolv.conf as [nameserver 172.30.128.1]"
	I0318 13:11:04.469201    2404 command_runner.go:130] > Mar 18 13:09:44 multinode-894400 cri-dockerd[1278]: time="2024-03-18T13:09:44Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/6fb3325d3c1005ffbbbfe7b136924ed5ff0c71db51f79a50f7179c108c238d47/resolv.conf as [nameserver 172.30.128.1]"
	I0318 13:11:04.469201    2404 command_runner.go:130] > Mar 18 13:09:44 multinode-894400 cri-dockerd[1278]: time="2024-03-18T13:09:44Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/354f3c44a34fcbd9ca7826066f2b4c40b3a6551aa632389815c5d24e85e0472b/resolv.conf as [nameserver 172.30.128.1]"
	I0318 13:11:04.469201    2404 command_runner.go:130] > Mar 18 13:09:44 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:44.396374926Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 13:11:04.469201    2404 command_runner.go:130] > Mar 18 13:09:44 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:44.396436126Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 13:11:04.469276    2404 command_runner.go:130] > Mar 18 13:09:44 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:44.396447326Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:11:04.469276    2404 command_runner.go:130] > Mar 18 13:09:44 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:44.396626927Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:11:04.469276    2404 command_runner.go:130] > Mar 18 13:09:44 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:44.467642467Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 13:11:04.469276    2404 command_runner.go:130] > Mar 18 13:09:44 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:44.467879868Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 13:11:04.469276    2404 command_runner.go:130] > Mar 18 13:09:44 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:44.468180469Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:11:04.469276    2404 command_runner.go:130] > Mar 18 13:09:44 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:44.468559970Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:11:04.469276    2404 command_runner.go:130] > Mar 18 13:09:44 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:44.476573097Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 13:11:04.469276    2404 command_runner.go:130] > Mar 18 13:09:44 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:44.476618697Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 13:11:04.469276    2404 command_runner.go:130] > Mar 18 13:09:44 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:44.476631197Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:11:04.469276    2404 command_runner.go:130] > Mar 18 13:09:44 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:44.476702797Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:11:04.469276    2404 command_runner.go:130] > Mar 18 13:09:44 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:44.482324416Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 13:11:04.469276    2404 command_runner.go:130] > Mar 18 13:09:44 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:44.482501517Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 13:11:04.469276    2404 command_runner.go:130] > Mar 18 13:09:44 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:44.482648417Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:11:04.469276    2404 command_runner.go:130] > Mar 18 13:09:44 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:44.482918618Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:11:04.469276    2404 command_runner.go:130] > Mar 18 13:09:48 multinode-894400 cri-dockerd[1278]: time="2024-03-18T13:09:48Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	I0318 13:11:04.469276    2404 command_runner.go:130] > Mar 18 13:09:49 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:49.545677603Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 13:11:04.469276    2404 command_runner.go:130] > Mar 18 13:09:49 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:49.548609313Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 13:11:04.469276    2404 command_runner.go:130] > Mar 18 13:09:49 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:49.548646013Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:11:04.469276    2404 command_runner.go:130] > Mar 18 13:09:49 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:49.549168715Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:11:04.469276    2404 command_runner.go:130] > Mar 18 13:09:49 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:49.592129660Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 13:11:04.469276    2404 command_runner.go:130] > Mar 18 13:09:49 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:49.592185160Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 13:11:04.469276    2404 command_runner.go:130] > Mar 18 13:09:49 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:49.592195760Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:11:04.469276    2404 command_runner.go:130] > Mar 18 13:09:49 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:49.592280460Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:11:04.469276    2404 command_runner.go:130] > Mar 18 13:09:49 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:49.615117337Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 13:11:04.470237    2404 command_runner.go:130] > Mar 18 13:09:49 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:49.615393238Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 13:11:04.470286    2404 command_runner.go:130] > Mar 18 13:09:49 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:49.615610139Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:11:04.470337    2404 command_runner.go:130] > Mar 18 13:09:49 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:49.621669759Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:11:04.470337    2404 command_runner.go:130] > Mar 18 13:09:49 multinode-894400 cri-dockerd[1278]: time="2024-03-18T13:09:49Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a9f21749669fe37a2d5bee9e7f45bf84eca6ae55870de8ac18bc774db7f81643/resolv.conf as [nameserver 172.30.128.1]"
	I0318 13:11:04.470420    2404 command_runner.go:130] > Mar 18 13:09:49 multinode-894400 cri-dockerd[1278]: time="2024-03-18T13:09:49Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/41035eff3b7db97e56ba25be141bf644acea4ddcb9d92ba3b421e6fa855a09af/resolv.conf as [nameserver 172.30.128.1]"
	I0318 13:11:04.470458    2404 command_runner.go:130] > Mar 18 13:09:49 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:49.995795822Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 13:11:04.470513    2404 command_runner.go:130] > Mar 18 13:09:49 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:49.995895422Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 13:11:04.470513    2404 command_runner.go:130] > Mar 18 13:09:49 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:49.995916522Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:11:04.470560    2404 command_runner.go:130] > Mar 18 13:09:49 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:49.996021523Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:11:04.470560    2404 command_runner.go:130] > Mar 18 13:09:50 multinode-894400 cri-dockerd[1278]: time="2024-03-18T13:09:50Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/86d74dec812cfd4842de7473d45ff320bfb2283d09d2d05eee29e45cbde9f5e9/resolv.conf as [nameserver 172.30.128.1]"
	I0318 13:11:04.470617    2404 command_runner.go:130] > Mar 18 13:09:50 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:50.171141514Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 13:11:04.470617    2404 command_runner.go:130] > Mar 18 13:09:50 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:50.171335814Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 13:11:04.470694    2404 command_runner.go:130] > Mar 18 13:09:50 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:50.171461415Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:11:04.470730    2404 command_runner.go:130] > Mar 18 13:09:50 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:50.171764216Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:11:04.470760    2404 command_runner.go:130] > Mar 18 13:09:50 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:50.391481057Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 13:11:04.470760    2404 command_runner.go:130] > Mar 18 13:09:50 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:50.391826158Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 13:11:04.470806    2404 command_runner.go:130] > Mar 18 13:09:50 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:50.391990059Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:11:04.470845    2404 command_runner.go:130] > Mar 18 13:09:50 multinode-894400 dockerd[1058]: time="2024-03-18T13:09:50.393600364Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:11:04.470845    2404 command_runner.go:130] > Mar 18 13:10:20 multinode-894400 dockerd[1052]: time="2024-03-18T13:10:20.550892922Z" level=info msg="ignoring event" container=46c0cf90d385f0a29de6e795bc03f2d1837ea1819ef2389992a26aaf77e63264 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0318 13:11:04.470845    2404 command_runner.go:130] > Mar 18 13:10:20 multinode-894400 dockerd[1058]: time="2024-03-18T13:10:20.551487227Z" level=info msg="shim disconnected" id=46c0cf90d385f0a29de6e795bc03f2d1837ea1819ef2389992a26aaf77e63264 namespace=moby
	I0318 13:11:04.470920    2404 command_runner.go:130] > Mar 18 13:10:20 multinode-894400 dockerd[1058]: time="2024-03-18T13:10:20.551627628Z" level=warning msg="cleaning up after shim disconnected" id=46c0cf90d385f0a29de6e795bc03f2d1837ea1819ef2389992a26aaf77e63264 namespace=moby
	I0318 13:11:04.470956    2404 command_runner.go:130] > Mar 18 13:10:20 multinode-894400 dockerd[1058]: time="2024-03-18T13:10:20.551639828Z" level=info msg="cleaning up dead shim" namespace=moby
	I0318 13:11:04.470956    2404 command_runner.go:130] > Mar 18 13:10:35 multinode-894400 dockerd[1058]: time="2024-03-18T13:10:35.200900512Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 13:11:04.470994    2404 command_runner.go:130] > Mar 18 13:10:35 multinode-894400 dockerd[1058]: time="2024-03-18T13:10:35.202882722Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 13:11:04.471041    2404 command_runner.go:130] > Mar 18 13:10:35 multinode-894400 dockerd[1058]: time="2024-03-18T13:10:35.203198024Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:11:04.471041    2404 command_runner.go:130] > Mar 18 13:10:35 multinode-894400 dockerd[1058]: time="2024-03-18T13:10:35.203763327Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:11:04.471041    2404 command_runner.go:130] > Mar 18 13:10:53 multinode-894400 dockerd[1058]: time="2024-03-18T13:10:53.250783392Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 13:11:04.471041    2404 command_runner.go:130] > Mar 18 13:10:53 multinode-894400 dockerd[1058]: time="2024-03-18T13:10:53.252016097Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 13:11:04.471041    2404 command_runner.go:130] > Mar 18 13:10:53 multinode-894400 dockerd[1058]: time="2024-03-18T13:10:53.252234698Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:11:04.471041    2404 command_runner.go:130] > Mar 18 13:10:53 multinode-894400 dockerd[1058]: time="2024-03-18T13:10:53.252566299Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:11:04.471041    2404 command_runner.go:130] > Mar 18 13:10:53 multinode-894400 dockerd[1058]: time="2024-03-18T13:10:53.259013124Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 13:11:04.471041    2404 command_runner.go:130] > Mar 18 13:10:53 multinode-894400 dockerd[1058]: time="2024-03-18T13:10:53.259187125Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 13:11:04.471041    2404 command_runner.go:130] > Mar 18 13:10:53 multinode-894400 dockerd[1058]: time="2024-03-18T13:10:53.259204725Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:11:04.471041    2404 command_runner.go:130] > Mar 18 13:10:53 multinode-894400 dockerd[1058]: time="2024-03-18T13:10:53.259319625Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:11:04.471041    2404 command_runner.go:130] > Mar 18 13:10:53 multinode-894400 cri-dockerd[1278]: time="2024-03-18T13:10:53Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/97583cc14f115cf8a4e90889b5f2beda90a81f97fd592e5e5acff8d35e305a59/resolv.conf as [nameserver 172.30.128.1]"
	I0318 13:11:04.471041    2404 command_runner.go:130] > Mar 18 13:10:53 multinode-894400 cri-dockerd[1278]: time="2024-03-18T13:10:53Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/e20878b8092c291820adeb66f1b491dcef85c0699c57800cced7d3530d2a07fb/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	I0318 13:11:04.471041    2404 command_runner.go:130] > Mar 18 13:10:53 multinode-894400 dockerd[1058]: time="2024-03-18T13:10:53.818847676Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 13:11:04.471041    2404 command_runner.go:130] > Mar 18 13:10:53 multinode-894400 dockerd[1058]: time="2024-03-18T13:10:53.818997976Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 13:11:04.471041    2404 command_runner.go:130] > Mar 18 13:10:53 multinode-894400 dockerd[1058]: time="2024-03-18T13:10:53.819021476Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:11:04.471041    2404 command_runner.go:130] > Mar 18 13:10:53 multinode-894400 dockerd[1058]: time="2024-03-18T13:10:53.819463578Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:11:04.471041    2404 command_runner.go:130] > Mar 18 13:10:53 multinode-894400 dockerd[1058]: time="2024-03-18T13:10:53.825706506Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0318 13:11:04.471041    2404 command_runner.go:130] > Mar 18 13:10:53 multinode-894400 dockerd[1058]: time="2024-03-18T13:10:53.825766006Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0318 13:11:04.471041    2404 command_runner.go:130] > Mar 18 13:10:53 multinode-894400 dockerd[1058]: time="2024-03-18T13:10:53.825780706Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:11:04.471041    2404 command_runner.go:130] > Mar 18 13:10:53 multinode-894400 dockerd[1058]: time="2024-03-18T13:10:53.825864707Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0318 13:11:04.471041    2404 command_runner.go:130] > Mar 18 13:10:56 multinode-894400 dockerd[1052]: 2024/03/18 13:10:56 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0318 13:11:04.471041    2404 command_runner.go:130] > Mar 18 13:10:56 multinode-894400 dockerd[1052]: 2024/03/18 13:10:56 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0318 13:11:04.471041    2404 command_runner.go:130] > Mar 18 13:10:57 multinode-894400 dockerd[1052]: 2024/03/18 13:10:57 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0318 13:11:04.471041    2404 command_runner.go:130] > Mar 18 13:10:57 multinode-894400 dockerd[1052]: 2024/03/18 13:10:57 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0318 13:11:04.471041    2404 command_runner.go:130] > Mar 18 13:10:57 multinode-894400 dockerd[1052]: 2024/03/18 13:10:57 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0318 13:11:04.471581    2404 command_runner.go:130] > Mar 18 13:10:57 multinode-894400 dockerd[1052]: 2024/03/18 13:10:57 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0318 13:11:04.471581    2404 command_runner.go:130] > Mar 18 13:10:57 multinode-894400 dockerd[1052]: 2024/03/18 13:10:57 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0318 13:11:04.471581    2404 command_runner.go:130] > Mar 18 13:10:57 multinode-894400 dockerd[1052]: 2024/03/18 13:10:57 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0318 13:11:04.471581    2404 command_runner.go:130] > Mar 18 13:10:57 multinode-894400 dockerd[1052]: 2024/03/18 13:10:57 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0318 13:11:04.471754    2404 command_runner.go:130] > Mar 18 13:10:57 multinode-894400 dockerd[1052]: 2024/03/18 13:10:57 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0318 13:11:04.471754    2404 command_runner.go:130] > Mar 18 13:10:57 multinode-894400 dockerd[1052]: 2024/03/18 13:10:57 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0318 13:11:04.471820    2404 command_runner.go:130] > Mar 18 13:10:57 multinode-894400 dockerd[1052]: 2024/03/18 13:10:57 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0318 13:11:04.471820    2404 command_runner.go:130] > Mar 18 13:11:00 multinode-894400 dockerd[1052]: 2024/03/18 13:11:00 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0318 13:11:04.471956    2404 command_runner.go:130] > Mar 18 13:11:00 multinode-894400 dockerd[1052]: 2024/03/18 13:11:00 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0318 13:11:04.471956    2404 command_runner.go:130] > Mar 18 13:11:00 multinode-894400 dockerd[1052]: 2024/03/18 13:11:00 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0318 13:11:04.471998    2404 command_runner.go:130] > Mar 18 13:11:00 multinode-894400 dockerd[1052]: 2024/03/18 13:11:00 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0318 13:11:04.472059    2404 command_runner.go:130] > Mar 18 13:11:00 multinode-894400 dockerd[1052]: 2024/03/18 13:11:00 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0318 13:11:04.472059    2404 command_runner.go:130] > Mar 18 13:11:01 multinode-894400 dockerd[1052]: 2024/03/18 13:11:01 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0318 13:11:04.472059    2404 command_runner.go:130] > Mar 18 13:11:01 multinode-894400 dockerd[1052]: 2024/03/18 13:11:01 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0318 13:11:04.472059    2404 command_runner.go:130] > Mar 18 13:11:01 multinode-894400 dockerd[1052]: 2024/03/18 13:11:01 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0318 13:11:04.472059    2404 command_runner.go:130] > Mar 18 13:11:01 multinode-894400 dockerd[1052]: 2024/03/18 13:11:01 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0318 13:11:04.472059    2404 command_runner.go:130] > Mar 18 13:11:01 multinode-894400 dockerd[1052]: 2024/03/18 13:11:01 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0318 13:11:04.472059    2404 command_runner.go:130] > Mar 18 13:11:01 multinode-894400 dockerd[1052]: 2024/03/18 13:11:01 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0318 13:11:04.472059    2404 command_runner.go:130] > Mar 18 13:11:01 multinode-894400 dockerd[1052]: 2024/03/18 13:11:01 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0318 13:11:04.472059    2404 command_runner.go:130] > Mar 18 13:11:04 multinode-894400 dockerd[1052]: 2024/03/18 13:11:04 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0318 13:11:04.472059    2404 command_runner.go:130] > Mar 18 13:11:04 multinode-894400 dockerd[1052]: 2024/03/18 13:11:04 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0318 13:11:04.500630    2404 logs.go:123] Gathering logs for kube-proxy [9335855aab63] ...
	I0318 13:11:04.500630    2404 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9335855aab63"
	I0318 13:11:04.529018    2404 command_runner.go:130] ! I0318 12:47:42.888603       1 server_others.go:69] "Using iptables proxy"
	I0318 13:11:04.529018    2404 command_runner.go:130] ! I0318 12:47:42.909658       1 node.go:141] Successfully retrieved node IP: 172.30.129.141
	I0318 13:11:04.529018    2404 command_runner.go:130] ! I0318 12:47:42.965774       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0318 13:11:04.529018    2404 command_runner.go:130] ! I0318 12:47:42.965824       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0318 13:11:04.529018    2404 command_runner.go:130] ! I0318 12:47:42.983172       1 server_others.go:152] "Using iptables Proxier"
	I0318 13:11:04.529387    2404 command_runner.go:130] ! I0318 12:47:42.983221       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0318 13:11:04.529387    2404 command_runner.go:130] ! I0318 12:47:42.983471       1 server.go:846] "Version info" version="v1.28.4"
	I0318 13:11:04.529387    2404 command_runner.go:130] ! I0318 12:47:42.983484       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 13:11:04.529387    2404 command_runner.go:130] ! I0318 12:47:42.987719       1 config.go:188] "Starting service config controller"
	I0318 13:11:04.529387    2404 command_runner.go:130] ! I0318 12:47:42.987733       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0318 13:11:04.529387    2404 command_runner.go:130] ! I0318 12:47:42.987775       1 config.go:97] "Starting endpoint slice config controller"
	I0318 13:11:04.529517    2404 command_runner.go:130] ! I0318 12:47:42.987781       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0318 13:11:04.529517    2404 command_runner.go:130] ! I0318 12:47:42.988298       1 config.go:315] "Starting node config controller"
	I0318 13:11:04.529600    2404 command_runner.go:130] ! I0318 12:47:42.988306       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0318 13:11:04.529600    2404 command_runner.go:130] ! I0318 12:47:43.088485       1 shared_informer.go:318] Caches are synced for service config
	I0318 13:11:04.529600    2404 command_runner.go:130] ! I0318 12:47:43.088594       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0318 13:11:04.529635    2404 command_runner.go:130] ! I0318 12:47:43.088517       1 shared_informer.go:318] Caches are synced for node config
	I0318 13:11:04.532124    2404 logs.go:123] Gathering logs for kube-controller-manager [4ad6784a187d] ...
	I0318 13:11:04.532176    2404 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ad6784a187d"
	I0318 13:11:04.565867    2404 command_runner.go:130] ! I0318 13:09:46.053304       1 serving.go:348] Generated self-signed cert in-memory
	I0318 13:11:04.565867    2404 command_runner.go:130] ! I0318 13:09:46.598188       1 controllermanager.go:189] "Starting" version="v1.28.4"
	I0318 13:11:04.565867    2404 command_runner.go:130] ! I0318 13:09:46.598275       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 13:11:04.565867    2404 command_runner.go:130] ! I0318 13:09:46.600550       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0318 13:11:04.565867    2404 command_runner.go:130] ! I0318 13:09:46.600856       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0318 13:11:04.565867    2404 command_runner.go:130] ! I0318 13:09:46.601228       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0318 13:11:04.565867    2404 command_runner.go:130] ! I0318 13:09:46.601416       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0318 13:11:04.565867    2404 command_runner.go:130] ! I0318 13:09:50.365580       1 shared_informer.go:311] Waiting for caches to sync for tokens
	I0318 13:11:04.565867    2404 command_runner.go:130] ! I0318 13:09:50.380467       1 controllermanager.go:642] "Started controller" controller="certificatesigningrequest-approving-controller"
	I0318 13:11:04.565867    2404 command_runner.go:130] ! I0318 13:09:50.380609       1 certificate_controller.go:115] "Starting certificate controller" name="csrapproving"
	I0318 13:11:04.565867    2404 command_runner.go:130] ! I0318 13:09:50.380622       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrapproving
	I0318 13:11:04.565867    2404 command_runner.go:130] ! I0318 13:09:50.396606       1 controllermanager.go:642] "Started controller" controller="certificatesigningrequest-cleaner-controller"
	I0318 13:11:04.565867    2404 command_runner.go:130] ! I0318 13:09:50.396766       1 cleaner.go:83] "Starting CSR cleaner controller"
	I0318 13:11:04.565867    2404 command_runner.go:130] ! I0318 13:09:50.466364       1 shared_informer.go:318] Caches are synced for tokens
	I0318 13:11:04.565867    2404 command_runner.go:130] ! I0318 13:10:00.425018       1 range_allocator.go:111] "No Secondary Service CIDR provided. Skipping filtering out secondary service addresses"
	I0318 13:11:04.565867    2404 command_runner.go:130] ! I0318 13:10:00.425185       1 controllermanager.go:642] "Started controller" controller="node-ipam-controller"
	I0318 13:11:04.565867    2404 command_runner.go:130] ! I0318 13:10:00.425608       1 node_ipam_controller.go:162] "Starting ipam controller"
	I0318 13:11:04.565867    2404 command_runner.go:130] ! I0318 13:10:00.425649       1 shared_informer.go:311] Waiting for caches to sync for node
	I0318 13:11:04.565867    2404 command_runner.go:130] ! I0318 13:10:00.429368       1 controllermanager.go:642] "Started controller" controller="root-ca-certificate-publisher-controller"
	I0318 13:11:04.566410    2404 command_runner.go:130] ! I0318 13:10:00.429570       1 publisher.go:102] "Starting root CA cert publisher controller"
	I0318 13:11:04.566410    2404 command_runner.go:130] ! I0318 13:10:00.429653       1 shared_informer.go:311] Waiting for caches to sync for crt configmap
	I0318 13:11:04.566410    2404 command_runner.go:130] ! I0318 13:10:00.432615       1 controllermanager.go:642] "Started controller" controller="endpointslice-mirroring-controller"
	I0318 13:11:04.566504    2404 command_runner.go:130] ! I0318 13:10:00.435149       1 endpointslicemirroring_controller.go:223] "Starting EndpointSliceMirroring controller"
	I0318 13:11:04.566504    2404 command_runner.go:130] ! I0318 13:10:00.435476       1 shared_informer.go:311] Waiting for caches to sync for endpoint_slice_mirroring
	I0318 13:11:04.566504    2404 command_runner.go:130] ! I0318 13:10:00.435957       1 controllermanager.go:642] "Started controller" controller="ttl-controller"
	I0318 13:11:04.566504    2404 command_runner.go:130] ! I0318 13:10:00.436324       1 ttl_controller.go:124] "Starting TTL controller"
	I0318 13:11:04.566504    2404 command_runner.go:130] ! I0318 13:10:00.436534       1 shared_informer.go:311] Waiting for caches to sync for TTL
	I0318 13:11:04.566504    2404 command_runner.go:130] ! E0318 13:10:00.440226       1 core.go:92] "Failed to start service controller" err="WARNING: no cloud provider provided, services of type LoadBalancer will fail"
	I0318 13:11:04.566504    2404 command_runner.go:130] ! I0318 13:10:00.440586       1 controllermanager.go:620] "Warning: skipping controller" controller="service-lb-controller"
	I0318 13:11:04.566504    2404 command_runner.go:130] ! E0318 13:10:00.443615       1 core.go:213] "Failed to start cloud node lifecycle controller" err="no cloud provider provided"
	I0318 13:11:04.566504    2404 command_runner.go:130] ! I0318 13:10:00.443912       1 controllermanager.go:620] "Warning: skipping controller" controller="cloud-node-lifecycle-controller"
	I0318 13:11:04.566504    2404 command_runner.go:130] ! I0318 13:10:00.446716       1 controllermanager.go:642] "Started controller" controller="ttl-after-finished-controller"
	I0318 13:11:04.566504    2404 command_runner.go:130] ! I0318 13:10:00.446764       1 ttlafterfinished_controller.go:109] "Starting TTL after finished controller"
	I0318 13:11:04.566504    2404 command_runner.go:130] ! I0318 13:10:00.447388       1 shared_informer.go:311] Waiting for caches to sync for TTL after finished
	I0318 13:11:04.566504    2404 command_runner.go:130] ! I0318 13:10:00.450136       1 controllermanager.go:642] "Started controller" controller="replicationcontroller-controller"
	I0318 13:11:04.566504    2404 command_runner.go:130] ! I0318 13:10:00.450514       1 replica_set.go:214] "Starting controller" name="replicationcontroller"
	I0318 13:11:04.566504    2404 command_runner.go:130] ! I0318 13:10:00.450816       1 shared_informer.go:311] Waiting for caches to sync for ReplicationController
	I0318 13:11:04.566504    2404 command_runner.go:130] ! I0318 13:10:00.482128       1 controllermanager.go:642] "Started controller" controller="namespace-controller"
	I0318 13:11:04.566504    2404 command_runner.go:130] ! I0318 13:10:00.482431       1 namespace_controller.go:197] "Starting namespace controller"
	I0318 13:11:04.566504    2404 command_runner.go:130] ! I0318 13:10:00.482564       1 shared_informer.go:311] Waiting for caches to sync for namespace
	I0318 13:11:04.566504    2404 command_runner.go:130] ! I0318 13:10:00.485138       1 controllermanager.go:642] "Started controller" controller="token-cleaner-controller"
	I0318 13:11:04.566504    2404 command_runner.go:130] ! I0318 13:10:00.485477       1 tokencleaner.go:112] "Starting token cleaner controller"
	I0318 13:11:04.566504    2404 command_runner.go:130] ! I0318 13:10:00.485637       1 shared_informer.go:311] Waiting for caches to sync for token_cleaner
	I0318 13:11:04.566504    2404 command_runner.go:130] ! I0318 13:10:00.485765       1 shared_informer.go:318] Caches are synced for token_cleaner
	I0318 13:11:04.566504    2404 command_runner.go:130] ! I0318 13:10:00.487736       1 controllermanager.go:642] "Started controller" controller="pod-garbage-collector-controller"
	I0318 13:11:04.566504    2404 command_runner.go:130] ! I0318 13:10:00.488836       1 gc_controller.go:101] "Starting GC controller"
	I0318 13:11:04.566504    2404 command_runner.go:130] ! I0318 13:10:00.489018       1 shared_informer.go:311] Waiting for caches to sync for GC
	I0318 13:11:04.566504    2404 command_runner.go:130] ! I0318 13:10:00.490586       1 controllermanager.go:642] "Started controller" controller="cronjob-controller"
	I0318 13:11:04.566504    2404 command_runner.go:130] ! I0318 13:10:00.491164       1 cronjob_controllerv2.go:139] "Starting cronjob controller v2"
	I0318 13:11:04.566504    2404 command_runner.go:130] ! I0318 13:10:00.491311       1 shared_informer.go:311] Waiting for caches to sync for cronjob
	I0318 13:11:04.566504    2404 command_runner.go:130] ! I0318 13:10:00.494562       1 controllermanager.go:642] "Started controller" controller="persistentvolume-binder-controller"
	I0318 13:11:04.566504    2404 command_runner.go:130] ! I0318 13:10:00.495002       1 pv_controller_base.go:319] "Starting persistent volume controller"
	I0318 13:11:04.566504    2404 command_runner.go:130] ! I0318 13:10:00.495133       1 shared_informer.go:311] Waiting for caches to sync for persistent volume
	I0318 13:11:04.566504    2404 command_runner.go:130] ! I0318 13:10:00.497694       1 controllermanager.go:642] "Started controller" controller="persistentvolume-expander-controller"
	I0318 13:11:04.566504    2404 command_runner.go:130] ! I0318 13:10:00.497986       1 expand_controller.go:328] "Starting expand controller"
	I0318 13:11:04.566504    2404 command_runner.go:130] ! I0318 13:10:00.498025       1 shared_informer.go:311] Waiting for caches to sync for expand
	I0318 13:11:04.566504    2404 command_runner.go:130] ! I0318 13:10:00.500933       1 controllermanager.go:642] "Started controller" controller="clusterrole-aggregation-controller"
	I0318 13:11:04.566504    2404 command_runner.go:130] ! I0318 13:10:00.502880       1 clusterroleaggregation_controller.go:189] "Starting ClusterRoleAggregator controller"
	I0318 13:11:04.567065    2404 command_runner.go:130] ! I0318 13:10:00.503102       1 shared_informer.go:311] Waiting for caches to sync for ClusterRoleAggregator
	I0318 13:11:04.567272    2404 command_runner.go:130] ! I0318 13:10:00.506760       1 controllermanager.go:642] "Started controller" controller="disruption-controller"
	I0318 13:11:04.567272    2404 command_runner.go:130] ! I0318 13:10:00.507227       1 disruption.go:433] "Sending events to api server."
	I0318 13:11:04.567272    2404 command_runner.go:130] ! I0318 13:10:00.507302       1 disruption.go:444] "Starting disruption controller"
	I0318 13:11:04.567347    2404 command_runner.go:130] ! I0318 13:10:00.507366       1 shared_informer.go:311] Waiting for caches to sync for disruption
	I0318 13:11:04.567347    2404 command_runner.go:130] ! I0318 13:10:00.509815       1 controllermanager.go:642] "Started controller" controller="deployment-controller"
	I0318 13:11:04.567347    2404 command_runner.go:130] ! I0318 13:10:00.510402       1 deployment_controller.go:168] "Starting controller" controller="deployment"
	I0318 13:11:04.567401    2404 command_runner.go:130] ! I0318 13:10:00.510478       1 shared_informer.go:311] Waiting for caches to sync for deployment
	I0318 13:11:04.567401    2404 command_runner.go:130] ! I0318 13:10:00.514582       1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-kubelet-serving"
	I0318 13:11:04.567439    2404 command_runner.go:130] ! I0318 13:10:00.514842       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I0318 13:11:04.567462    2404 command_runner.go:130] ! I0318 13:10:00.514832       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0318 13:11:04.567462    2404 command_runner.go:130] ! I0318 13:10:00.517859       1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-kubelet-client"
	I0318 13:11:04.567462    2404 command_runner.go:130] ! I0318 13:10:00.518134       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I0318 13:11:04.567462    2404 command_runner.go:130] ! I0318 13:10:00.518434       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0318 13:11:04.567462    2404 command_runner.go:130] ! I0318 13:10:00.519400       1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-kube-apiserver-client"
	I0318 13:11:04.567579    2404 command_runner.go:130] ! I0318 13:10:00.519576       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I0318 13:11:04.567579    2404 command_runner.go:130] ! I0318 13:10:00.519729       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0318 13:11:04.567579    2404 command_runner.go:130] ! I0318 13:10:00.519883       1 controllermanager.go:642] "Started controller" controller="certificatesigningrequest-signing-controller"
	I0318 13:11:04.567579    2404 command_runner.go:130] ! I0318 13:10:00.519902       1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-legacy-unknown"
	I0318 13:11:04.567579    2404 command_runner.go:130] ! I0318 13:10:00.520909       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0318 13:11:04.567579    2404 command_runner.go:130] ! I0318 13:10:00.519914       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0318 13:11:04.567710    2404 command_runner.go:130] ! I0318 13:10:00.524690       1 controllermanager.go:642] "Started controller" controller="persistentvolume-attach-detach-controller"
	I0318 13:11:04.567710    2404 command_runner.go:130] ! I0318 13:10:00.524967       1 attach_detach_controller.go:337] "Starting attach detach controller"
	I0318 13:11:04.567710    2404 command_runner.go:130] ! I0318 13:10:00.525267       1 shared_informer.go:311] Waiting for caches to sync for attach detach
	I0318 13:11:04.567710    2404 command_runner.go:130] ! I0318 13:10:00.528248       1 controllermanager.go:642] "Started controller" controller="serviceaccount-controller"
	I0318 13:11:04.567710    2404 command_runner.go:130] ! I0318 13:10:00.528509       1 serviceaccounts_controller.go:111] "Starting service account controller"
	I0318 13:11:04.567710    2404 command_runner.go:130] ! I0318 13:10:00.528721       1 shared_informer.go:311] Waiting for caches to sync for service account
	I0318 13:11:04.567710    2404 command_runner.go:130] ! I0318 13:10:00.532254       1 controllermanager.go:642] "Started controller" controller="daemonset-controller"
	I0318 13:11:04.567710    2404 command_runner.go:130] ! I0318 13:10:00.532687       1 daemon_controller.go:291] "Starting daemon sets controller"
	I0318 13:11:04.567827    2404 command_runner.go:130] ! I0318 13:10:00.532717       1 shared_informer.go:311] Waiting for caches to sync for daemon sets
	I0318 13:11:04.567827    2404 command_runner.go:130] ! I0318 13:10:00.544900       1 controllermanager.go:642] "Started controller" controller="horizontal-pod-autoscaler-controller"
	I0318 13:11:04.567827    2404 command_runner.go:130] ! I0318 13:10:00.545135       1 horizontal.go:200] "Starting HPA controller"
	I0318 13:11:04.567827    2404 command_runner.go:130] ! I0318 13:10:00.545195       1 shared_informer.go:311] Waiting for caches to sync for HPA
	I0318 13:11:04.567827    2404 command_runner.go:130] ! I0318 13:10:00.547641       1 controllermanager.go:642] "Started controller" controller="bootstrap-signer-controller"
	I0318 13:11:04.567827    2404 command_runner.go:130] ! I0318 13:10:00.548078       1 shared_informer.go:311] Waiting for caches to sync for bootstrap_signer
	I0318 13:11:04.567827    2404 command_runner.go:130] ! I0318 13:10:00.550784       1 node_lifecycle_controller.go:431] "Controller will reconcile labels"
	I0318 13:11:04.567827    2404 command_runner.go:130] ! I0318 13:10:00.551368       1 controllermanager.go:642] "Started controller" controller="node-lifecycle-controller"
	I0318 13:11:04.568044    2404 command_runner.go:130] ! I0318 13:10:00.551557       1 core.go:228] "Warning: configure-cloud-routes is set, but no cloud provider specified. Will not configure cloud provider routes."
	I0318 13:11:04.568044    2404 command_runner.go:130] ! I0318 13:10:00.551931       1 controllermanager.go:620] "Warning: skipping controller" controller="node-route-controller"
	I0318 13:11:04.568044    2404 command_runner.go:130] ! I0318 13:10:00.551452       1 node_lifecycle_controller.go:465] "Sending events to api server"
	I0318 13:11:04.568044    2404 command_runner.go:130] ! I0318 13:10:00.553190       1 node_lifecycle_controller.go:476] "Starting node controller"
	I0318 13:11:04.568044    2404 command_runner.go:130] ! I0318 13:10:00.553856       1 shared_informer.go:311] Waiting for caches to sync for taint
	I0318 13:11:04.568044    2404 command_runner.go:130] ! I0318 13:10:00.554970       1 controllermanager.go:642] "Started controller" controller="persistentvolumeclaim-protection-controller"
	I0318 13:11:04.568044    2404 command_runner.go:130] ! I0318 13:10:00.555558       1 pvc_protection_controller.go:102] "Starting PVC protection controller"
	I0318 13:11:04.568044    2404 command_runner.go:130] ! I0318 13:10:00.555718       1 shared_informer.go:311] Waiting for caches to sync for PVC protection
	I0318 13:11:04.568044    2404 command_runner.go:130] ! I0318 13:10:00.558545       1 controllermanager.go:642] "Started controller" controller="endpointslice-controller"
	I0318 13:11:04.568044    2404 command_runner.go:130] ! I0318 13:10:00.558805       1 endpointslice_controller.go:264] "Starting endpoint slice controller"
	I0318 13:11:04.568044    2404 command_runner.go:130] ! I0318 13:10:00.558956       1 shared_informer.go:311] Waiting for caches to sync for endpoint_slice
	I0318 13:11:04.568206    2404 command_runner.go:130] ! W0318 13:10:00.765746       1 shared_informer.go:593] resyncPeriod 13h51m37.636447347s is smaller than resyncCheckPeriod 22h45m34.293132222s and the informer has already started. Changing it to 22h45m34.293132222s
	I0318 13:11:04.568206    2404 command_runner.go:130] ! I0318 13:10:00.765905       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="serviceaccounts"
	I0318 13:11:04.568206    2404 command_runner.go:130] ! I0318 13:10:00.766015       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="cronjobs.batch"
	I0318 13:11:04.568206    2404 command_runner.go:130] ! I0318 13:10:00.766141       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="daemonsets.apps"
	I0318 13:11:04.568206    2404 command_runner.go:130] ! I0318 13:10:00.766231       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="jobs.batch"
	I0318 13:11:04.568206    2404 command_runner.go:130] ! I0318 13:10:00.767946       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="networkpolicies.networking.k8s.io"
	I0318 13:11:04.568206    2404 command_runner.go:130] ! I0318 13:10:00.768138       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="poddisruptionbudgets.policy"
	I0318 13:11:04.568337    2404 command_runner.go:130] ! I0318 13:10:00.768175       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="csistoragecapacities.storage.k8s.io"
	I0318 13:11:04.568337    2404 command_runner.go:130] ! I0318 13:10:00.768271       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="limitranges"
	I0318 13:11:04.568337    2404 command_runner.go:130] ! I0318 13:10:00.768411       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="controllerrevisions.apps"
	I0318 13:11:04.568337    2404 command_runner.go:130] ! I0318 13:10:00.768529       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="horizontalpodautoscalers.autoscaling"
	I0318 13:11:04.568337    2404 command_runner.go:130] ! I0318 13:10:00.768565       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="ingresses.networking.k8s.io"
	I0318 13:11:04.568337    2404 command_runner.go:130] ! I0318 13:10:00.768633       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="endpointslices.discovery.k8s.io"
	I0318 13:11:04.568481    2404 command_runner.go:130] ! W0318 13:10:00.768841       1 shared_informer.go:593] resyncPeriod 17h39m7.901162259s is smaller than resyncCheckPeriod 22h45m34.293132222s and the informer has already started. Changing it to 22h45m34.293132222s
	I0318 13:11:04.568481    2404 command_runner.go:130] ! I0318 13:10:00.769020       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="podtemplates"
	I0318 13:11:04.568481    2404 command_runner.go:130] ! I0318 13:10:00.769077       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="replicasets.apps"
	I0318 13:11:04.568481    2404 command_runner.go:130] ! I0318 13:10:00.769115       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="deployments.apps"
	I0318 13:11:04.568481    2404 command_runner.go:130] ! I0318 13:10:00.769206       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="rolebindings.rbac.authorization.k8s.io"
	I0318 13:11:04.568481    2404 command_runner.go:130] ! I0318 13:10:00.769280       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="leases.coordination.k8s.io"
	I0318 13:11:04.568586    2404 command_runner.go:130] ! I0318 13:10:00.769427       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="endpoints"
	I0318 13:11:04.568586    2404 command_runner.go:130] ! I0318 13:10:00.769509       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="statefulsets.apps"
	I0318 13:11:04.568586    2404 command_runner.go:130] ! I0318 13:10:00.769668       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="roles.rbac.authorization.k8s.io"
	I0318 13:11:04.568586    2404 command_runner.go:130] ! I0318 13:10:00.769816       1 resource_quota_controller.go:294] "Starting resource quota controller"
	I0318 13:11:04.568586    2404 command_runner.go:130] ! I0318 13:10:00.769832       1 shared_informer.go:311] Waiting for caches to sync for resource quota
	I0318 13:11:04.568586    2404 command_runner.go:130] ! I0318 13:10:00.769855       1 resource_quota_monitor.go:305] "QuotaMonitor running"
	I0318 13:11:04.568586    2404 command_runner.go:130] ! I0318 13:10:00.769714       1 controllermanager.go:642] "Started controller" controller="resourcequota-controller"
	I0318 13:11:04.568790    2404 command_runner.go:130] ! I0318 13:10:00.906184       1 controllermanager.go:642] "Started controller" controller="garbage-collector-controller"
	I0318 13:11:04.568790    2404 command_runner.go:130] ! I0318 13:10:00.906404       1 garbagecollector.go:155] "Starting controller" controller="garbagecollector"
	I0318 13:11:04.568790    2404 command_runner.go:130] ! I0318 13:10:00.906702       1 shared_informer.go:311] Waiting for caches to sync for garbage collector
	I0318 13:11:04.568790    2404 command_runner.go:130] ! I0318 13:10:00.906740       1 graph_builder.go:294] "Running" component="GraphBuilder"
	I0318 13:11:04.568853    2404 command_runner.go:130] ! I0318 13:10:00.956245       1 controllermanager.go:642] "Started controller" controller="job-controller"
	I0318 13:11:04.568853    2404 command_runner.go:130] ! I0318 13:10:00.956457       1 job_controller.go:226] "Starting job controller"
	I0318 13:11:04.568897    2404 command_runner.go:130] ! I0318 13:10:00.956765       1 shared_informer.go:311] Waiting for caches to sync for job
	I0318 13:11:04.568897    2404 command_runner.go:130] ! I0318 13:10:01.056144       1 controllermanager.go:642] "Started controller" controller="persistentvolume-protection-controller"
	I0318 13:11:04.568897    2404 command_runner.go:130] ! I0318 13:10:01.056251       1 pv_protection_controller.go:78] "Starting PV protection controller"
	I0318 13:11:04.568897    2404 command_runner.go:130] ! I0318 13:10:01.056576       1 shared_informer.go:311] Waiting for caches to sync for PV protection
	I0318 13:11:04.568897    2404 command_runner.go:130] ! I0318 13:10:01.156303       1 controllermanager.go:642] "Started controller" controller="ephemeral-volume-controller"
	I0318 13:11:04.569008    2404 command_runner.go:130] ! I0318 13:10:01.156762       1 controller.go:169] "Starting ephemeral volume controller"
	I0318 13:11:04.569008    2404 command_runner.go:130] ! I0318 13:10:01.156852       1 shared_informer.go:311] Waiting for caches to sync for ephemeral
	I0318 13:11:04.569008    2404 command_runner.go:130] ! I0318 13:10:01.205282       1 controllermanager.go:642] "Started controller" controller="endpoints-controller"
	I0318 13:11:04.569008    2404 command_runner.go:130] ! I0318 13:10:01.205353       1 endpoints_controller.go:174] "Starting endpoint controller"
	I0318 13:11:04.569008    2404 command_runner.go:130] ! I0318 13:10:01.205368       1 shared_informer.go:311] Waiting for caches to sync for endpoint
	I0318 13:11:04.569080    2404 command_runner.go:130] ! I0318 13:10:01.256513       1 controllermanager.go:642] "Started controller" controller="statefulset-controller"
	I0318 13:11:04.569080    2404 command_runner.go:130] ! I0318 13:10:01.256828       1 stateful_set.go:161] "Starting stateful set controller"
	I0318 13:11:04.569080    2404 command_runner.go:130] ! I0318 13:10:01.256867       1 shared_informer.go:311] Waiting for caches to sync for stateful set
	I0318 13:11:04.569080    2404 command_runner.go:130] ! I0318 13:10:01.306581       1 controllermanager.go:642] "Started controller" controller="replicaset-controller"
	I0318 13:11:04.569080    2404 command_runner.go:130] ! I0318 13:10:01.306969       1 replica_set.go:214] "Starting controller" name="replicaset"
	I0318 13:11:04.569080    2404 command_runner.go:130] ! I0318 13:10:01.307156       1 shared_informer.go:311] Waiting for caches to sync for ReplicaSet
	I0318 13:11:04.569189    2404 command_runner.go:130] ! I0318 13:10:01.317298       1 shared_informer.go:311] Waiting for caches to sync for resource quota
	I0318 13:11:04.569189    2404 command_runner.go:130] ! I0318 13:10:01.349149       1 shared_informer.go:318] Caches are synced for TTL after finished
	I0318 13:11:04.569189    2404 command_runner.go:130] ! I0318 13:10:01.369957       1 shared_informer.go:311] Waiting for caches to sync for garbage collector
	I0318 13:11:04.569189    2404 command_runner.go:130] ! I0318 13:10:01.371629       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-894400\" does not exist"
	I0318 13:11:04.569189    2404 command_runner.go:130] ! I0318 13:10:01.371840       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-894400-m02\" does not exist"
	I0318 13:11:04.569189    2404 command_runner.go:130] ! I0318 13:10:01.372556       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-894400-m02"
	I0318 13:11:04.569303    2404 command_runner.go:130] ! I0318 13:10:01.372879       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-894400-m03\" does not exist"
	I0318 13:11:04.569303    2404 command_runner.go:130] ! I0318 13:10:01.373004       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-894400-m02"
	I0318 13:11:04.569303    2404 command_runner.go:130] ! I0318 13:10:01.380690       1 shared_informer.go:318] Caches are synced for certificate-csrapproving
	I0318 13:11:04.569303    2404 command_runner.go:130] ! I0318 13:10:01.383858       1 shared_informer.go:318] Caches are synced for namespace
	I0318 13:11:04.569303    2404 command_runner.go:130] ! I0318 13:10:01.390400       1 shared_informer.go:318] Caches are synced for GC
	I0318 13:11:04.569303    2404 command_runner.go:130] ! I0318 13:10:01.391669       1 shared_informer.go:318] Caches are synced for cronjob
	I0318 13:11:04.569303    2404 command_runner.go:130] ! I0318 13:10:01.398208       1 shared_informer.go:318] Caches are synced for expand
	I0318 13:11:04.569303    2404 command_runner.go:130] ! I0318 13:10:01.403691       1 shared_informer.go:318] Caches are synced for ClusterRoleAggregator
	I0318 13:11:04.569435    2404 command_runner.go:130] ! I0318 13:10:01.406154       1 shared_informer.go:318] Caches are synced for endpoint
	I0318 13:11:04.569435    2404 command_runner.go:130] ! I0318 13:10:01.407387       1 shared_informer.go:318] Caches are synced for ReplicaSet
	I0318 13:11:04.569435    2404 command_runner.go:130] ! I0318 13:10:01.407463       1 shared_informer.go:318] Caches are synced for disruption
	I0318 13:11:04.569435    2404 command_runner.go:130] ! I0318 13:10:01.411470       1 shared_informer.go:318] Caches are synced for deployment
	I0318 13:11:04.569435    2404 command_runner.go:130] ! I0318 13:10:01.415591       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-serving
	I0318 13:11:04.569435    2404 command_runner.go:130] ! I0318 13:10:01.419985       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0318 13:11:04.569435    2404 command_runner.go:130] ! I0318 13:10:01.420028       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-client
	I0318 13:11:04.569553    2404 command_runner.go:130] ! I0318 13:10:01.422567       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-legacy-unknown
	I0318 13:11:04.569553    2404 command_runner.go:130] ! I0318 13:10:01.426386       1 shared_informer.go:318] Caches are synced for node
	I0318 13:11:04.569553    2404 command_runner.go:130] ! I0318 13:10:01.426502       1 range_allocator.go:174] "Sending events to api server"
	I0318 13:11:04.569553    2404 command_runner.go:130] ! I0318 13:10:01.426637       1 range_allocator.go:178] "Starting range CIDR allocator"
	I0318 13:11:04.569553    2404 command_runner.go:130] ! I0318 13:10:01.426705       1 shared_informer.go:311] Waiting for caches to sync for cidrallocator
	I0318 13:11:04.569553    2404 command_runner.go:130] ! I0318 13:10:01.426892       1 shared_informer.go:318] Caches are synced for cidrallocator
	I0318 13:11:04.569553    2404 command_runner.go:130] ! I0318 13:10:01.426546       1 shared_informer.go:318] Caches are synced for attach detach
	I0318 13:11:04.569553    2404 command_runner.go:130] ! I0318 13:10:01.429986       1 shared_informer.go:318] Caches are synced for service account
	I0318 13:11:04.569677    2404 command_runner.go:130] ! I0318 13:10:01.430014       1 shared_informer.go:318] Caches are synced for crt configmap
	I0318 13:11:04.569677    2404 command_runner.go:130] ! I0318 13:10:01.433506       1 shared_informer.go:318] Caches are synced for daemon sets
	I0318 13:11:04.569677    2404 command_runner.go:130] ! I0318 13:10:01.437710       1 shared_informer.go:318] Caches are synced for TTL
	I0318 13:11:04.569677    2404 command_runner.go:130] ! I0318 13:10:01.445429       1 shared_informer.go:318] Caches are synced for HPA
	I0318 13:11:04.569677    2404 command_runner.go:130] ! I0318 13:10:01.448863       1 shared_informer.go:318] Caches are synced for bootstrap_signer
	I0318 13:11:04.569677    2404 command_runner.go:130] ! I0318 13:10:01.451599       1 shared_informer.go:318] Caches are synced for ReplicationController
	I0318 13:11:04.569677    2404 command_runner.go:130] ! I0318 13:10:01.454157       1 shared_informer.go:318] Caches are synced for taint
	I0318 13:11:04.569677    2404 command_runner.go:130] ! I0318 13:10:01.454304       1 node_lifecycle_controller.go:1225] "Initializing eviction metric for zone" zone=""
	I0318 13:11:04.569677    2404 command_runner.go:130] ! I0318 13:10:01.454496       1 taint_manager.go:205] "Starting NoExecuteTaintManager"
	I0318 13:11:04.569797    2404 command_runner.go:130] ! I0318 13:10:01.454532       1 taint_manager.go:210] "Sending events to api server"
	I0318 13:11:04.569797    2404 command_runner.go:130] ! I0318 13:10:01.455374       1 event.go:307] "Event occurred" object="multinode-894400" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-894400 event: Registered Node multinode-894400 in Controller"
	I0318 13:11:04.569797    2404 command_runner.go:130] ! I0318 13:10:01.455390       1 event.go:307] "Event occurred" object="multinode-894400-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-894400-m02 event: Registered Node multinode-894400-m02 in Controller"
	I0318 13:11:04.569797    2404 command_runner.go:130] ! I0318 13:10:01.455400       1 event.go:307] "Event occurred" object="multinode-894400-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-894400-m03 event: Registered Node multinode-894400-m03 in Controller"
	I0318 13:11:04.569797    2404 command_runner.go:130] ! I0318 13:10:01.456700       1 shared_informer.go:318] Caches are synced for PV protection
	I0318 13:11:04.569896    2404 command_runner.go:130] ! I0318 13:10:01.456719       1 shared_informer.go:318] Caches are synced for PVC protection
	I0318 13:11:04.569896    2404 command_runner.go:130] ! I0318 13:10:01.457835       1 shared_informer.go:318] Caches are synced for stateful set
	I0318 13:11:04.569896    2404 command_runner.go:130] ! I0318 13:10:01.457861       1 shared_informer.go:318] Caches are synced for job
	I0318 13:11:04.569896    2404 command_runner.go:130] ! I0318 13:10:01.458132       1 shared_informer.go:318] Caches are synced for ephemeral
	I0318 13:11:04.569896    2404 command_runner.go:130] ! I0318 13:10:01.499926       1 shared_informer.go:318] Caches are synced for persistent volume
	I0318 13:11:04.569896    2404 command_runner.go:130] ! I0318 13:10:01.502022       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-894400"
	I0318 13:11:04.569896    2404 command_runner.go:130] ! I0318 13:10:01.502582       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-894400-m02"
	I0318 13:11:04.569896    2404 command_runner.go:130] ! I0318 13:10:01.502665       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-894400-m03"
	I0318 13:11:04.570013    2404 command_runner.go:130] ! I0318 13:10:01.505439       1 node_lifecycle_controller.go:1071] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I0318 13:11:04.570013    2404 command_runner.go:130] ! I0318 13:10:01.518153       1 shared_informer.go:318] Caches are synced for resource quota
	I0318 13:11:04.570013    2404 command_runner.go:130] ! I0318 13:10:01.524442       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="116.887006ms"
	I0318 13:11:04.570013    2404 command_runner.go:130] ! I0318 13:10:01.526447       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="62.302µs"
	I0318 13:11:04.570013    2404 command_runner.go:130] ! I0318 13:10:01.532190       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="124.57225ms"
	I0318 13:11:04.570013    2404 command_runner.go:130] ! I0318 13:10:01.532535       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="61.501µs"
	I0318 13:11:04.570013    2404 command_runner.go:130] ! I0318 13:10:01.536870       1 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring
	I0318 13:11:04.570114    2404 command_runner.go:130] ! I0318 13:10:01.559571       1 shared_informer.go:318] Caches are synced for endpoint_slice
	I0318 13:11:04.570114    2404 command_runner.go:130] ! I0318 13:10:01.576497       1 shared_informer.go:318] Caches are synced for resource quota
	I0318 13:11:04.570114    2404 command_runner.go:130] ! I0318 13:10:01.970420       1 shared_informer.go:318] Caches are synced for garbage collector
	I0318 13:11:04.570114    2404 command_runner.go:130] ! I0318 13:10:02.008120       1 shared_informer.go:318] Caches are synced for garbage collector
	I0318 13:11:04.570114    2404 command_runner.go:130] ! I0318 13:10:02.008146       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0318 13:11:04.570114    2404 command_runner.go:130] ! I0318 13:10:23.798396       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-894400-m02"
	I0318 13:11:04.570114    2404 command_runner.go:130] ! I0318 13:10:26.538088       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68-456tm" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod kube-system/coredns-5dd5756b68-456tm"
	I0318 13:11:04.570217    2404 command_runner.go:130] ! I0318 13:10:26.538124       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6-c2997" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-5b5d89c9d6-c2997"
	I0318 13:11:04.570217    2404 command_runner.go:130] ! I0318 13:10:26.538134       1 event.go:307] "Event occurred" object="kube-system/storage-provisioner" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod kube-system/storage-provisioner"
	I0318 13:11:04.570217    2404 command_runner.go:130] ! I0318 13:10:41.556645       1 event.go:307] "Event occurred" object="multinode-894400-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-894400-m02 status is now: NodeNotReady"
	I0318 13:11:04.570217    2404 command_runner.go:130] ! I0318 13:10:41.569274       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6-8btgf" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0318 13:11:04.570316    2404 command_runner.go:130] ! I0318 13:10:41.592766       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="22.447202ms"
	I0318 13:11:04.570316    2404 command_runner.go:130] ! I0318 13:10:41.593427       1 event.go:307] "Event occurred" object="kube-system/kindnet-k5lpg" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0318 13:11:04.570316    2404 command_runner.go:130] ! I0318 13:10:41.595199       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="39.101µs"
	I0318 13:11:04.570316    2404 command_runner.go:130] ! I0318 13:10:41.617007       1 event.go:307] "Event occurred" object="kube-system/kube-proxy-8bdmn" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0318 13:11:04.570316    2404 command_runner.go:130] ! I0318 13:10:54.102255       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="18.438427ms"
	I0318 13:11:04.570316    2404 command_runner.go:130] ! I0318 13:10:54.102713       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="266.302µs"
	I0318 13:11:04.570420    2404 command_runner.go:130] ! I0318 13:10:54.115993       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="210.701µs"
	I0318 13:11:04.570420    2404 command_runner.go:130] ! I0318 13:10:55.131550       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="19.807636ms"
	I0318 13:11:04.570420    2404 command_runner.go:130] ! I0318 13:10:55.131763       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="44.301µs"
	I0318 13:11:04.584621    2404 logs.go:123] Gathering logs for dmesg ...
	I0318 13:11:04.584621    2404 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:11:04.604641    2404 command_runner.go:130] > [Mar18 13:08] You have booted with nomodeset. This means your GPU drivers are DISABLED
	I0318 13:11:04.604782    2404 command_runner.go:130] > [  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	I0318 13:11:04.604782    2404 command_runner.go:130] > [  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	I0318 13:11:04.604782    2404 command_runner.go:130] > [  +0.127438] RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
	I0318 13:11:04.604855    2404 command_runner.go:130] > [  +0.022457] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	I0318 13:11:04.604855    2404 command_runner.go:130] > [  +0.000000] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	I0318 13:11:04.604855    2404 command_runner.go:130] > [  +0.000000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	I0318 13:11:04.604855    2404 command_runner.go:130] > [  +0.054196] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	I0318 13:11:04.604937    2404 command_runner.go:130] > [  +0.018424] * Found PM-Timer Bug on the chipset. Due to workarounds for a bug,
	I0318 13:11:04.604937    2404 command_runner.go:130] >               * this clock source is slow. Consider trying other clock sources
	I0318 13:11:04.604937    2404 command_runner.go:130] > [  +4.800453] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	I0318 13:11:04.604937    2404 command_runner.go:130] > [  +1.267636] psmouse serio1: trackpoint: failed to get extended button data, assuming 3 buttons
	I0318 13:11:04.605006    2404 command_runner.go:130] > [  +1.056053] systemd-fstab-generator[113]: Ignoring "noauto" option for root device
	I0318 13:11:04.605006    2404 command_runner.go:130] > [  +6.778211] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	I0318 13:11:04.605006    2404 command_runner.go:130] > [  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	I0318 13:11:04.605006    2404 command_runner.go:130] > [  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	I0318 13:11:04.605006    2404 command_runner.go:130] > [Mar18 13:09] systemd-fstab-generator[643]: Ignoring "noauto" option for root device
	I0318 13:11:04.605073    2404 command_runner.go:130] > [  +0.160643] systemd-fstab-generator[654]: Ignoring "noauto" option for root device
	I0318 13:11:04.605073    2404 command_runner.go:130] > [ +25.236158] systemd-fstab-generator[979]: Ignoring "noauto" option for root device
	I0318 13:11:04.605073    2404 command_runner.go:130] > [  +0.093711] kauditd_printk_skb: 73 callbacks suppressed
	I0318 13:11:04.605073    2404 command_runner.go:130] > [  +0.488652] systemd-fstab-generator[1018]: Ignoring "noauto" option for root device
	I0318 13:11:04.605073    2404 command_runner.go:130] > [  +0.198307] systemd-fstab-generator[1030]: Ignoring "noauto" option for root device
	I0318 13:11:04.605191    2404 command_runner.go:130] > [  +0.213157] systemd-fstab-generator[1044]: Ignoring "noauto" option for root device
	I0318 13:11:04.605222    2404 command_runner.go:130] > [  +2.866452] systemd-fstab-generator[1231]: Ignoring "noauto" option for root device
	I0318 13:11:04.605222    2404 command_runner.go:130] > [  +0.191537] systemd-fstab-generator[1243]: Ignoring "noauto" option for root device
	I0318 13:11:04.605222    2404 command_runner.go:130] > [  +0.163904] systemd-fstab-generator[1255]: Ignoring "noauto" option for root device
	I0318 13:11:04.605222    2404 command_runner.go:130] > [  +0.280650] systemd-fstab-generator[1270]: Ignoring "noauto" option for root device
	I0318 13:11:04.605283    2404 command_runner.go:130] > [  +0.822319] systemd-fstab-generator[1393]: Ignoring "noauto" option for root device
	I0318 13:11:04.605283    2404 command_runner.go:130] > [  +0.094744] kauditd_printk_skb: 205 callbacks suppressed
	I0318 13:11:04.605283    2404 command_runner.go:130] > [  +3.177820] systemd-fstab-generator[1525]: Ignoring "noauto" option for root device
	I0318 13:11:04.605283    2404 command_runner.go:130] > [  +1.898187] kauditd_printk_skb: 64 callbacks suppressed
	I0318 13:11:04.605283    2404 command_runner.go:130] > [  +5.227041] kauditd_printk_skb: 10 callbacks suppressed
	I0318 13:11:04.605283    2404 command_runner.go:130] > [  +4.065141] systemd-fstab-generator[3089]: Ignoring "noauto" option for root device
	I0318 13:11:04.605283    2404 command_runner.go:130] > [Mar18 13:10] kauditd_printk_skb: 70 callbacks suppressed
	I0318 13:11:04.607197    2404 logs.go:123] Gathering logs for coredns [3c3bc988c74c] ...
	I0318 13:11:04.607197    2404 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c3bc988c74c"
	I0318 13:11:04.642071    2404 command_runner.go:130] > .:53
	I0318 13:11:04.642071    2404 command_runner.go:130] > [INFO] plugin/reload: Running configuration SHA512 = fa2b120b3007aba3ef70c07d73473d14986404bfc5683cceeb89e8950d5ace894afca7d43365324975f99d1b2da3d6473c722fe82795d39a4ee8b4b969b7aa71
	I0318 13:11:04.642071    2404 command_runner.go:130] > CoreDNS-1.10.1
	I0318 13:11:04.642071    2404 command_runner.go:130] > linux/amd64, go1.20, 055b2c3
	I0318 13:11:04.642246    2404 command_runner.go:130] > [INFO] 127.0.0.1:47251 - 801 "HINFO IN 2968659138506762197.6766024496084331989. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.051583557s
	I0318 13:11:04.642355    2404 logs.go:123] Gathering logs for coredns [693a64f7472f] ...
	I0318 13:11:04.642355    2404 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 693a64f7472f"
	I0318 13:11:04.670277    2404 command_runner.go:130] > .:53
	I0318 13:11:04.670277    2404 command_runner.go:130] > [INFO] plugin/reload: Running configuration SHA512 = fa2b120b3007aba3ef70c07d73473d14986404bfc5683cceeb89e8950d5ace894afca7d43365324975f99d1b2da3d6473c722fe82795d39a4ee8b4b969b7aa71
	I0318 13:11:04.670277    2404 command_runner.go:130] > CoreDNS-1.10.1
	I0318 13:11:04.670277    2404 command_runner.go:130] > linux/amd64, go1.20, 055b2c3
	I0318 13:11:04.670510    2404 command_runner.go:130] > [INFO] 127.0.0.1:33426 - 38858 "HINFO IN 7345450223813584863.4065419873971828575. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.030234917s
	I0318 13:11:04.670510    2404 command_runner.go:130] > [INFO] 10.244.1.2:56777 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000311303s
	I0318 13:11:04.670510    2404 command_runner.go:130] > [INFO] 10.244.1.2:58024 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.098073876s
	I0318 13:11:04.670510    2404 command_runner.go:130] > [INFO] 10.244.1.2:57941 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd 60 0.154978742s
	I0318 13:11:04.670510    2404 command_runner.go:130] > [INFO] 10.244.1.2:42576 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 1.156414777s
	I0318 13:11:04.670510    2404 command_runner.go:130] > [INFO] 10.244.0.3:43391 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000152802s
	I0318 13:11:04.670510    2404 command_runner.go:130] > [INFO] 10.244.0.3:52523 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 140 0.000121101s
	I0318 13:11:04.670510    2404 command_runner.go:130] > [INFO] 10.244.0.3:36187 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd 60 0.000058401s
	I0318 13:11:04.670510    2404 command_runner.go:130] > [INFO] 10.244.0.3:33451 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,aa,rd,ra 140 0.000055s
	I0318 13:11:04.670654    2404 command_runner.go:130] > [INFO] 10.244.1.2:42180 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000097901s
	I0318 13:11:04.670654    2404 command_runner.go:130] > [INFO] 10.244.1.2:60616 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.142731308s
	I0318 13:11:04.670654    2404 command_runner.go:130] > [INFO] 10.244.1.2:45190 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000152502s
	I0318 13:11:04.670654    2404 command_runner.go:130] > [INFO] 10.244.1.2:55984 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000150102s
	I0318 13:11:04.670654    2404 command_runner.go:130] > [INFO] 10.244.1.2:47725 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.037970075s
	I0318 13:11:04.670654    2404 command_runner.go:130] > [INFO] 10.244.1.2:55620 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000104901s
	I0318 13:11:04.670654    2404 command_runner.go:130] > [INFO] 10.244.1.2:60349 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000189802s
	I0318 13:11:04.670731    2404 command_runner.go:130] > [INFO] 10.244.1.2:44081 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000089501s
	I0318 13:11:04.670731    2404 command_runner.go:130] > [INFO] 10.244.0.3:52580 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000182502s
	I0318 13:11:04.670731    2404 command_runner.go:130] > [INFO] 10.244.0.3:60982 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.0000727s
	I0318 13:11:04.670731    2404 command_runner.go:130] > [INFO] 10.244.0.3:53685 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000081s
	I0318 13:11:04.670731    2404 command_runner.go:130] > [INFO] 10.244.0.3:38117 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000127701s
	I0318 13:11:04.670731    2404 command_runner.go:130] > [INFO] 10.244.0.3:38455 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000117101s
	I0318 13:11:04.670731    2404 command_runner.go:130] > [INFO] 10.244.0.3:50629 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000121702s
	I0318 13:11:04.670731    2404 command_runner.go:130] > [INFO] 10.244.0.3:33301 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0000487s
	I0318 13:11:04.670731    2404 command_runner.go:130] > [INFO] 10.244.0.3:38091 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000138402s
	I0318 13:11:04.670731    2404 command_runner.go:130] > [INFO] 10.244.1.2:43364 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000192902s
	I0318 13:11:04.670731    2404 command_runner.go:130] > [INFO] 10.244.1.2:42609 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000060701s
	I0318 13:11:04.670731    2404 command_runner.go:130] > [INFO] 10.244.1.2:36443 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000051301s
	I0318 13:11:04.670731    2404 command_runner.go:130] > [INFO] 10.244.1.2:56414 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0000526s
	I0318 13:11:04.670731    2404 command_runner.go:130] > [INFO] 10.244.0.3:50774 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000137201s
	I0318 13:11:04.670731    2404 command_runner.go:130] > [INFO] 10.244.0.3:43237 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000196902s
	I0318 13:11:04.670731    2404 command_runner.go:130] > [INFO] 10.244.0.3:38831 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000059901s
	I0318 13:11:04.670731    2404 command_runner.go:130] > [INFO] 10.244.0.3:56163 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000122801s
	I0318 13:11:04.670731    2404 command_runner.go:130] > [INFO] 10.244.1.2:58305 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000209602s
	I0318 13:11:04.670731    2404 command_runner.go:130] > [INFO] 10.244.1.2:58291 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000151202s
	I0318 13:11:04.670731    2404 command_runner.go:130] > [INFO] 10.244.1.2:33227 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000184302s
	I0318 13:11:04.670731    2404 command_runner.go:130] > [INFO] 10.244.1.2:58179 - 5 "PTR IN 1.128.30.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000152102s
	I0318 13:11:04.670731    2404 command_runner.go:130] > [INFO] 10.244.0.3:46943 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000104101s
	I0318 13:11:04.670731    2404 command_runner.go:130] > [INFO] 10.244.0.3:58018 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000107001s
	I0318 13:11:04.670731    2404 command_runner.go:130] > [INFO] 10.244.0.3:35353 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000119601s
	I0318 13:11:04.670731    2404 command_runner.go:130] > [INFO] 10.244.0.3:58763 - 5 "PTR IN 1.128.30.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000075701s
	I0318 13:11:04.670731    2404 command_runner.go:130] > [INFO] SIGTERM: Shutting down servers then terminating
	I0318 13:11:04.670731    2404 command_runner.go:130] > [INFO] plugin/health: Going into lameduck mode for 5s
	I0318 13:11:04.673698    2404 logs.go:123] Gathering logs for kube-proxy [163ccabc3882] ...
	I0318 13:11:04.673698    2404 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 163ccabc3882"
	I0318 13:11:04.703136    2404 command_runner.go:130] ! I0318 13:09:50.786718       1 server_others.go:69] "Using iptables proxy"
	I0318 13:11:04.703136    2404 command_runner.go:130] ! I0318 13:09:50.833991       1 node.go:141] Successfully retrieved node IP: 172.30.130.156
	I0318 13:11:04.703136    2404 command_runner.go:130] ! I0318 13:09:50.913665       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0318 13:11:04.703136    2404 command_runner.go:130] ! I0318 13:09:50.913704       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0318 13:11:04.703136    2404 command_runner.go:130] ! I0318 13:09:50.924640       1 server_others.go:152] "Using iptables Proxier"
	I0318 13:11:04.703136    2404 command_runner.go:130] ! I0318 13:09:50.925588       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0318 13:11:04.703136    2404 command_runner.go:130] ! I0318 13:09:50.926722       1 server.go:846] "Version info" version="v1.28.4"
	I0318 13:11:04.703136    2404 command_runner.go:130] ! I0318 13:09:50.926981       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 13:11:04.703136    2404 command_runner.go:130] ! I0318 13:09:50.938764       1 config.go:188] "Starting service config controller"
	I0318 13:11:04.703136    2404 command_runner.go:130] ! I0318 13:09:50.949206       1 config.go:97] "Starting endpoint slice config controller"
	I0318 13:11:04.703136    2404 command_runner.go:130] ! I0318 13:09:50.949220       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0318 13:11:04.703136    2404 command_runner.go:130] ! I0318 13:09:50.953299       1 config.go:315] "Starting node config controller"
	I0318 13:11:04.703136    2404 command_runner.go:130] ! I0318 13:09:50.979020       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0318 13:11:04.703136    2404 command_runner.go:130] ! I0318 13:09:50.990249       1 shared_informer.go:318] Caches are synced for node config
	I0318 13:11:04.703136    2404 command_runner.go:130] ! I0318 13:09:50.958488       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0318 13:11:04.703136    2404 command_runner.go:130] ! I0318 13:09:50.996356       1 shared_informer.go:318] Caches are synced for service config
	I0318 13:11:04.703136    2404 command_runner.go:130] ! I0318 13:09:51.051947       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0318 13:11:04.705804    2404 logs.go:123] Gathering logs for kindnet [c8e5ec25e910] ...
	I0318 13:11:04.705804    2404 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8e5ec25e910"
	I0318 13:11:04.733290    2404 command_runner.go:130] ! I0318 13:09:50.858529       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0318 13:11:04.733290    2404 command_runner.go:130] ! I0318 13:09:50.859271       1 main.go:107] hostIP = 172.30.130.156
	I0318 13:11:04.733655    2404 command_runner.go:130] ! podIP = 172.30.130.156
	I0318 13:11:04.733655    2404 command_runner.go:130] ! I0318 13:09:50.860380       1 main.go:116] setting mtu 1500 for CNI 
	I0318 13:11:04.733655    2404 command_runner.go:130] ! I0318 13:09:50.930132       1 main.go:146] kindnetd IP family: "ipv4"
	I0318 13:11:04.733655    2404 command_runner.go:130] ! I0318 13:09:50.933463       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0318 13:11:04.733725    2404 command_runner.go:130] ! I0318 13:10:21.283853       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: i/o timeout
	I0318 13:11:04.733725    2404 command_runner.go:130] ! I0318 13:10:21.335833       1 main.go:223] Handling node with IPs: map[172.30.130.156:{}]
	I0318 13:11:04.733725    2404 command_runner.go:130] ! I0318 13:10:21.335942       1 main.go:227] handling current node
	I0318 13:11:04.733725    2404 command_runner.go:130] ! I0318 13:10:21.336264       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:04.733725    2404 command_runner.go:130] ! I0318 13:10:21.336361       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:04.733725    2404 command_runner.go:130] ! I0318 13:10:21.336527       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 172.30.140.66 Flags: [] Table: 0} 
	I0318 13:11:04.733725    2404 command_runner.go:130] ! I0318 13:10:21.336670       1 main.go:223] Handling node with IPs: map[172.30.137.140:{}]
	I0318 13:11:04.733725    2404 command_runner.go:130] ! I0318 13:10:21.336680       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.3.0/24] 
	I0318 13:11:04.733725    2404 command_runner.go:130] ! I0318 13:10:21.336727       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 172.30.137.140 Flags: [] Table: 0} 
	I0318 13:11:04.733725    2404 command_runner.go:130] ! I0318 13:10:31.343996       1 main.go:223] Handling node with IPs: map[172.30.130.156:{}]
	I0318 13:11:04.733725    2404 command_runner.go:130] ! I0318 13:10:31.344324       1 main.go:227] handling current node
	I0318 13:11:04.733725    2404 command_runner.go:130] ! I0318 13:10:31.344341       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:04.733725    2404 command_runner.go:130] ! I0318 13:10:31.344682       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:04.733725    2404 command_runner.go:130] ! I0318 13:10:31.345062       1 main.go:223] Handling node with IPs: map[172.30.137.140:{}]
	I0318 13:11:04.733725    2404 command_runner.go:130] ! I0318 13:10:31.345087       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.3.0/24] 
	I0318 13:11:04.733725    2404 command_runner.go:130] ! I0318 13:10:41.357494       1 main.go:223] Handling node with IPs: map[172.30.130.156:{}]
	I0318 13:11:04.733725    2404 command_runner.go:130] ! I0318 13:10:41.357586       1 main.go:227] handling current node
	I0318 13:11:04.733725    2404 command_runner.go:130] ! I0318 13:10:41.357599       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:04.733725    2404 command_runner.go:130] ! I0318 13:10:41.357606       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:04.733725    2404 command_runner.go:130] ! I0318 13:10:41.357708       1 main.go:223] Handling node with IPs: map[172.30.137.140:{}]
	I0318 13:11:04.733725    2404 command_runner.go:130] ! I0318 13:10:41.357932       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.3.0/24] 
	I0318 13:11:04.733725    2404 command_runner.go:130] ! I0318 13:10:51.367560       1 main.go:223] Handling node with IPs: map[172.30.130.156:{}]
	I0318 13:11:04.733725    2404 command_runner.go:130] ! I0318 13:10:51.367661       1 main.go:227] handling current node
	I0318 13:11:04.733725    2404 command_runner.go:130] ! I0318 13:10:51.367675       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:04.733725    2404 command_runner.go:130] ! I0318 13:10:51.367684       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:04.733725    2404 command_runner.go:130] ! I0318 13:10:51.367956       1 main.go:223] Handling node with IPs: map[172.30.137.140:{}]
	I0318 13:11:04.733725    2404 command_runner.go:130] ! I0318 13:10:51.368281       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.3.0/24] 
	I0318 13:11:04.733725    2404 command_runner.go:130] ! I0318 13:11:01.381870       1 main.go:223] Handling node with IPs: map[172.30.130.156:{}]
	I0318 13:11:04.733725    2404 command_runner.go:130] ! I0318 13:11:01.381898       1 main.go:227] handling current node
	I0318 13:11:04.733725    2404 command_runner.go:130] ! I0318 13:11:01.381909       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:04.733725    2404 command_runner.go:130] ! I0318 13:11:01.381915       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:04.734264    2404 command_runner.go:130] ! I0318 13:11:01.382152       1 main.go:223] Handling node with IPs: map[172.30.137.140:{}]
	I0318 13:11:04.734264    2404 command_runner.go:130] ! I0318 13:11:01.382182       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.3.0/24] 
	I0318 13:11:04.736821    2404 logs.go:123] Gathering logs for kubelet ...
	I0318 13:11:04.736821    2404 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:11:04.768714    2404 command_runner.go:130] > Mar 18 13:09:39 multinode-894400 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0318 13:11:04.768714    2404 command_runner.go:130] > Mar 18 13:09:39 multinode-894400 kubelet[1399]: I0318 13:09:39.912330    1399 server.go:467] "Kubelet version" kubeletVersion="v1.28.4"
	I0318 13:11:04.768784    2404 command_runner.go:130] > Mar 18 13:09:39 multinode-894400 kubelet[1399]: I0318 13:09:39.913472    1399 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 13:11:04.768784    2404 command_runner.go:130] > Mar 18 13:09:39 multinode-894400 kubelet[1399]: I0318 13:09:39.914280    1399 server.go:895] "Client rotation is on, will bootstrap in background"
	I0318 13:11:04.768784    2404 command_runner.go:130] > Mar 18 13:09:39 multinode-894400 kubelet[1399]: E0318 13:09:39.914469    1399 run.go:74] "command failed" err="failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory"
	I0318 13:11:04.768784    2404 command_runner.go:130] > Mar 18 13:09:39 multinode-894400 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	I0318 13:11:04.768784    2404 command_runner.go:130] > Mar 18 13:09:39 multinode-894400 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	I0318 13:11:04.768784    2404 command_runner.go:130] > Mar 18 13:09:40 multinode-894400 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
	I0318 13:11:04.768784    2404 command_runner.go:130] > Mar 18 13:09:40 multinode-894400 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I0318 13:11:04.768784    2404 command_runner.go:130] > Mar 18 13:09:40 multinode-894400 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0318 13:11:04.768784    2404 command_runner.go:130] > Mar 18 13:09:40 multinode-894400 kubelet[1455]: I0318 13:09:40.661100    1455 server.go:467] "Kubelet version" kubeletVersion="v1.28.4"
	I0318 13:11:04.768784    2404 command_runner.go:130] > Mar 18 13:09:40 multinode-894400 kubelet[1455]: I0318 13:09:40.661586    1455 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 13:11:04.768784    2404 command_runner.go:130] > Mar 18 13:09:40 multinode-894400 kubelet[1455]: I0318 13:09:40.662255    1455 server.go:895] "Client rotation is on, will bootstrap in background"
	I0318 13:11:04.768784    2404 command_runner.go:130] > Mar 18 13:09:40 multinode-894400 kubelet[1455]: E0318 13:09:40.662383    1455 run.go:74] "command failed" err="failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory"
	I0318 13:11:04.768784    2404 command_runner.go:130] > Mar 18 13:09:40 multinode-894400 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	I0318 13:11:04.768784    2404 command_runner.go:130] > Mar 18 13:09:40 multinode-894400 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	I0318 13:11:04.768784    2404 command_runner.go:130] > Mar 18 13:09:40 multinode-894400 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I0318 13:11:04.768784    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0318 13:11:04.768784    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: I0318 13:09:42.774439    1532 server.go:467] "Kubelet version" kubeletVersion="v1.28.4"
	I0318 13:11:04.768784    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: I0318 13:09:42.775083    1532 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 13:11:04.768784    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: I0318 13:09:42.775946    1532 server.go:895] "Client rotation is on, will bootstrap in background"
	I0318 13:11:04.768784    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: I0318 13:09:42.785429    1532 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	I0318 13:11:04.768784    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: I0318 13:09:42.801370    1532 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0318 13:11:04.768784    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: I0318 13:09:42.849790    1532 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
	I0318 13:11:04.768784    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: I0318 13:09:42.851652    1532 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
	I0318 13:11:04.769314    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: I0318 13:09:42.851916    1532 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
	I0318 13:11:04.769353    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: I0318 13:09:42.851957    1532 topology_manager.go:138] "Creating topology manager with none policy"
	I0318 13:11:04.769353    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: I0318 13:09:42.851967    1532 container_manager_linux.go:301] "Creating device plugin manager"
	I0318 13:11:04.769353    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: I0318 13:09:42.853347    1532 state_mem.go:36] "Initialized new in-memory state store"
	I0318 13:11:04.769446    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: I0318 13:09:42.855331    1532 kubelet.go:393] "Attempting to sync node with API server"
	I0318 13:11:04.769446    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: I0318 13:09:42.855456    1532 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests"
	I0318 13:11:04.769446    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: I0318 13:09:42.856520    1532 kubelet.go:309] "Adding apiserver pod source"
	I0318 13:11:04.769446    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: I0318 13:09:42.856554    1532 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
	I0318 13:11:04.769446    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: W0318 13:09:42.859153    1532 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)multinode-894400&limit=500&resourceVersion=0": dial tcp 172.30.130.156:8443: connect: connection refused
	I0318 13:11:04.769446    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: E0318 13:09:42.859647    1532 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)multinode-894400&limit=500&resourceVersion=0": dial tcp 172.30.130.156:8443: connect: connection refused
	I0318 13:11:04.769446    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: W0318 13:09:42.860993    1532 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.30.130.156:8443: connect: connection refused
	I0318 13:11:04.769446    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: E0318 13:09:42.861168    1532 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.30.130.156:8443: connect: connection refused
	I0318 13:11:04.769446    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: I0318 13:09:42.872782    1532 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="docker" version="25.0.4" apiVersion="v1"
	I0318 13:11:04.769446    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: W0318 13:09:42.875640    1532 probe.go:268] Flexvolume plugin directory at /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
	I0318 13:11:04.769446    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: I0318 13:09:42.876823    1532 server.go:1232] "Started kubelet"
	I0318 13:11:04.769446    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: I0318 13:09:42.878282    1532 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
	I0318 13:11:04.769446    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: I0318 13:09:42.879215    1532 server.go:462] "Adding debug handlers to kubelet server"
	I0318 13:11:04.769446    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: I0318 13:09:42.882881    1532 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10
	I0318 13:11:04.769446    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: I0318 13:09:42.883660    1532 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
	I0318 13:11:04.769446    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: I0318 13:09:42.878365    1532 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
	I0318 13:11:04.769446    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: E0318 13:09:42.886734    1532 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"multinode-894400.17bddddee5b23bca", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"multinode-894400", UID:"multinode-894400", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"multinode-894400"}, FirstTimestamp:time.Date(2024, time.March, 18, 13, 9, 42, 876797898, time.Local), LastTimestamp:time.Date(2024, time.March, 18, 13, 9, 42, 876797898, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"multinode-894400"}': 'Post "https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events": dial tcp 172.30.130.156:8443: connect: connection refused'(may retry after sleeping)
	I0318 13:11:04.769446    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: I0318 13:09:42.886969    1532 volume_manager.go:291] "Starting Kubelet Volume Manager"
	I0318 13:11:04.769446    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: I0318 13:09:42.887086    1532 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
	I0318 13:11:04.769446    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: W0318 13:09:42.907405    1532 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.30.130.156:8443: connect: connection refused
	I0318 13:11:04.769987    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: E0318 13:09:42.907883    1532 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.30.130.156:8443: connect: connection refused
	I0318 13:11:04.770027    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: E0318 13:09:42.910785    1532 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-894400?timeout=10s\": dial tcp 172.30.130.156:8443: connect: connection refused" interval="200ms"
	I0318 13:11:04.770027    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: I0318 13:09:42.959085    1532 reconciler_new.go:29] "Reconciler: start to sync state"
	I0318 13:11:04.770027    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: I0318 13:09:42.981490    1532 cpu_manager.go:214] "Starting CPU manager" policy="none"
	I0318 13:11:04.770144    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: I0318 13:09:42.981531    1532 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
	I0318 13:11:04.770164    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: I0318 13:09:42.981561    1532 state_mem.go:36] "Initialized new in-memory state store"
	I0318 13:11:04.770164    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: I0318 13:09:42.982644    1532 state_mem.go:88] "Updated default CPUSet" cpuSet=""
	I0318 13:11:04.770225    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: I0318 13:09:42.982700    1532 state_mem.go:96] "Updated CPUSet assignments" assignments={}
	I0318 13:11:04.770247    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: I0318 13:09:42.982728    1532 policy_none.go:49] "None policy: Start"
	I0318 13:11:04.770325    2404 command_runner.go:130] > Mar 18 13:09:42 multinode-894400 kubelet[1532]: I0318 13:09:42.989705    1532 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
	I0318 13:11:04.770325    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: I0318 13:09:43.002857    1532 memory_manager.go:169] "Starting memorymanager" policy="None"
	I0318 13:11:04.770325    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: I0318 13:09:43.003620    1532 state_mem.go:35] "Initializing new in-memory state store"
	I0318 13:11:04.770325    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: I0318 13:09:43.004623    1532 state_mem.go:75] "Updated machine memory state"
	I0318 13:11:04.770409    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: I0318 13:09:43.006120    1532 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
	I0318 13:11:04.770409    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: I0318 13:09:43.007397    1532 status_manager.go:217] "Starting to sync pod status with apiserver"
	I0318 13:11:04.770409    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: I0318 13:09:43.008604    1532 kubelet.go:2303] "Starting kubelet main sync loop"
	I0318 13:11:04.770516    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: E0318 13:09:43.008971    1532 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
	I0318 13:11:04.770516    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: I0318 13:09:43.016115    1532 kubelet_node_status.go:70] "Attempting to register node" node="multinode-894400"
	I0318 13:11:04.770516    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: E0318 13:09:43.018685    1532 iptables.go:575] "Could not set up iptables canary" err=<
	I0318 13:11:04.770579    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	I0318 13:11:04.770603    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	I0318 13:11:04.770603    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]:         Perhaps ip6tables or your kernel needs to be upgraded.
	I0318 13:11:04.770603    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	I0318 13:11:04.770677    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: I0318 13:09:43.021241    1532 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
	I0318 13:11:04.770677    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: W0318 13:09:43.022840    1532 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.30.130.156:8443: connect: connection refused
	I0318 13:11:04.770677    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: E0318 13:09:43.022916    1532 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.30.130.156:8443: connect: connection refused
	I0318 13:11:04.770677    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: E0318 13:09:43.022979    1532 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.30.130.156:8443: connect: connection refused" node="multinode-894400"
	I0318 13:11:04.770677    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: I0318 13:09:43.023116    1532 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
	I0318 13:11:04.770677    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: E0318 13:09:43.041923    1532 eviction_manager.go:258] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"multinode-894400\" not found"
	I0318 13:11:04.770677    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: E0318 13:09:43.112352    1532 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-894400?timeout=10s\": dial tcp 172.30.130.156:8443: connect: connection refused" interval="400ms"
	I0318 13:11:04.770677    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: I0318 13:09:43.113553    1532 topology_manager.go:215] "Topology Admit Handler" podUID="1c745e9b917877b1ff3c90ed02e9a79a" podNamespace="kube-system" podName="kube-scheduler-multinode-894400"
	I0318 13:11:04.770677    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: I0318 13:09:43.126661    1532 topology_manager.go:215] "Topology Admit Handler" podUID="6096c2227c4230453f65f86ebdcd0d95" podNamespace="kube-system" podName="kube-apiserver-multinode-894400"
	I0318 13:11:04.770677    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: I0318 13:09:43.137838    1532 topology_manager.go:215] "Topology Admit Handler" podUID="d340aced56ba169ecac1e3ac58ad57fe" podNamespace="kube-system" podName="kube-controller-manager-multinode-894400"
	I0318 13:11:04.770677    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: I0318 13:09:43.154701    1532 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5485f509825d9272a84959cbcfbb4f0187be886867ba7bac76fa00a35e34bdd1"
	I0318 13:11:04.770677    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: I0318 13:09:43.154826    1532 topology_manager.go:215] "Topology Admit Handler" podUID="743a549b698f93b8586a236f83c90556" podNamespace="kube-system" podName="etcd-multinode-894400"
	I0318 13:11:04.771201    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: I0318 13:09:43.171660    1532 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d001e299e996b74f090c73f4e98605ef0d323a96826d23424e724ca8e7fe466a"
	I0318 13:11:04.771201    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: I0318 13:09:43.171681    1532 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="60e9cd749c8f67d0bc24596b26b654cf85a82055f89e14c4a14a4e9342f5fc9f"
	I0318 13:11:04.771201    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: I0318 13:09:43.171704    1532 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="acffce2e73842c3e46177a77ddd5a8d308b51daf062cac439cc487cc863c4226"
	I0318 13:11:04.771276    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: I0318 13:09:43.171714    1532 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="265b39e386cfa82eef9715aba314fbf8a9292776816cf86ed4099004698cb320"
	I0318 13:11:04.771276    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: I0318 13:09:43.171723    1532 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="220884cbf1f5b852987c5a28277a4914502f0623413c284054afa92791494c50"
	I0318 13:11:04.771276    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: I0318 13:09:43.171731    1532 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a47b1fb60692cee0c4ed89ecc511fa046c0873051f7daf026f1c5c6a3dfd7352"
	I0318 13:11:04.771276    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: I0318 13:09:43.172283    1532 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="82710777e700c4f2e71da911834959efc480f8ba2a526049f0f6c238947c5146"
	I0318 13:11:04.771276    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: I0318 13:09:43.186382    1532 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a23c1189be7c39f22c8a59ba41b198519894c9783ecad8dcda79fb8a76f05254"
	I0318 13:11:04.771276    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: I0318 13:09:43.231617    1532 kubelet_node_status.go:70] "Attempting to register node" node="multinode-894400"
	I0318 13:11:04.771276    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: E0318 13:09:43.233479    1532 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.30.130.156:8443: connect: connection refused" node="multinode-894400"
	I0318 13:11:04.771276    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: I0318 13:09:43.267903    1532 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1c745e9b917877b1ff3c90ed02e9a79a-kubeconfig\") pod \"kube-scheduler-multinode-894400\" (UID: \"1c745e9b917877b1ff3c90ed02e9a79a\") " pod="kube-system/kube-scheduler-multinode-894400"
	I0318 13:11:04.771276    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: I0318 13:09:43.268106    1532 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6096c2227c4230453f65f86ebdcd0d95-ca-certs\") pod \"kube-apiserver-multinode-894400\" (UID: \"6096c2227c4230453f65f86ebdcd0d95\") " pod="kube-system/kube-apiserver-multinode-894400"
	I0318 13:11:04.771276    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: I0318 13:09:43.268214    1532 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d340aced56ba169ecac1e3ac58ad57fe-ca-certs\") pod \"kube-controller-manager-multinode-894400\" (UID: \"d340aced56ba169ecac1e3ac58ad57fe\") " pod="kube-system/kube-controller-manager-multinode-894400"
	I0318 13:11:04.771276    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: I0318 13:09:43.268242    1532 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d340aced56ba169ecac1e3ac58ad57fe-kubeconfig\") pod \"kube-controller-manager-multinode-894400\" (UID: \"d340aced56ba169ecac1e3ac58ad57fe\") " pod="kube-system/kube-controller-manager-multinode-894400"
	I0318 13:11:04.771276    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: I0318 13:09:43.268269    1532 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d340aced56ba169ecac1e3ac58ad57fe-usr-share-ca-certificates\") pod \"kube-controller-manager-multinode-894400\" (UID: \"d340aced56ba169ecac1e3ac58ad57fe\") " pod="kube-system/kube-controller-manager-multinode-894400"
	I0318 13:11:04.771276    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: I0318 13:09:43.268295    1532 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/743a549b698f93b8586a236f83c90556-etcd-certs\") pod \"etcd-multinode-894400\" (UID: \"743a549b698f93b8586a236f83c90556\") " pod="kube-system/etcd-multinode-894400"
	I0318 13:11:04.771276    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: I0318 13:09:43.268330    1532 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/743a549b698f93b8586a236f83c90556-etcd-data\") pod \"etcd-multinode-894400\" (UID: \"743a549b698f93b8586a236f83c90556\") " pod="kube-system/etcd-multinode-894400"
	I0318 13:11:04.771276    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: I0318 13:09:43.268361    1532 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6096c2227c4230453f65f86ebdcd0d95-k8s-certs\") pod \"kube-apiserver-multinode-894400\" (UID: \"6096c2227c4230453f65f86ebdcd0d95\") " pod="kube-system/kube-apiserver-multinode-894400"
	I0318 13:11:04.771276    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: I0318 13:09:43.268423    1532 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6096c2227c4230453f65f86ebdcd0d95-usr-share-ca-certificates\") pod \"kube-apiserver-multinode-894400\" (UID: \"6096c2227c4230453f65f86ebdcd0d95\") " pod="kube-system/kube-apiserver-multinode-894400"
	I0318 13:11:04.771276    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: I0318 13:09:43.268445    1532 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d340aced56ba169ecac1e3ac58ad57fe-flexvolume-dir\") pod \"kube-controller-manager-multinode-894400\" (UID: \"d340aced56ba169ecac1e3ac58ad57fe\") " pod="kube-system/kube-controller-manager-multinode-894400"
	I0318 13:11:04.771797    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: I0318 13:09:43.268537    1532 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d340aced56ba169ecac1e3ac58ad57fe-k8s-certs\") pod \"kube-controller-manager-multinode-894400\" (UID: \"d340aced56ba169ecac1e3ac58ad57fe\") " pod="kube-system/kube-controller-manager-multinode-894400"
	I0318 13:11:04.771797    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: E0318 13:09:43.513563    1532 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-894400?timeout=10s\": dial tcp 172.30.130.156:8443: connect: connection refused" interval="800ms"
	I0318 13:11:04.771797    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: I0318 13:09:43.656950    1532 kubelet_node_status.go:70] "Attempting to register node" node="multinode-894400"
	I0318 13:11:04.771797    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: E0318 13:09:43.658595    1532 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.30.130.156:8443: connect: connection refused" node="multinode-894400"
	I0318 13:11:04.771943    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: W0318 13:09:43.917173    1532 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.30.130.156:8443: connect: connection refused
	I0318 13:11:04.771985    2404 command_runner.go:130] > Mar 18 13:09:43 multinode-894400 kubelet[1532]: E0318 13:09:43.917511    1532 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.30.130.156:8443: connect: connection refused
	I0318 13:11:04.772016    2404 command_runner.go:130] > Mar 18 13:09:44 multinode-894400 kubelet[1532]: W0318 13:09:44.022640    1532 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.30.130.156:8443: connect: connection refused
	I0318 13:11:04.772016    2404 command_runner.go:130] > Mar 18 13:09:44 multinode-894400 kubelet[1532]: E0318 13:09:44.022973    1532 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.30.130.156:8443: connect: connection refused
	I0318 13:11:04.772079    2404 command_runner.go:130] > Mar 18 13:09:44 multinode-894400 kubelet[1532]: W0318 13:09:44.114653    1532 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)multinode-894400&limit=500&resourceVersion=0": dial tcp 172.30.130.156:8443: connect: connection refused
	I0318 13:11:04.772079    2404 command_runner.go:130] > Mar 18 13:09:44 multinode-894400 kubelet[1532]: E0318 13:09:44.114784    1532 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)multinode-894400&limit=500&resourceVersion=0": dial tcp 172.30.130.156:8443: connect: connection refused
	I0318 13:11:04.772155    2404 command_runner.go:130] > Mar 18 13:09:44 multinode-894400 kubelet[1532]: I0318 13:09:44.229821    1532 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="354f3c44a34fcbd9ca7826066f2b4c40b3a6551aa632389815c5d24e85e0472b"
	I0318 13:11:04.772155    2404 command_runner.go:130] > Mar 18 13:09:44 multinode-894400 kubelet[1532]: E0318 13:09:44.315351    1532 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-894400?timeout=10s\": dial tcp 172.30.130.156:8443: connect: connection refused" interval="1.6s"
	I0318 13:11:04.772155    2404 command_runner.go:130] > Mar 18 13:09:44 multinode-894400 kubelet[1532]: W0318 13:09:44.368370    1532 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.30.130.156:8443: connect: connection refused
	I0318 13:11:04.772155    2404 command_runner.go:130] > Mar 18 13:09:44 multinode-894400 kubelet[1532]: E0318 13:09:44.368575    1532 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.30.130.156:8443: connect: connection refused
	I0318 13:11:04.772155    2404 command_runner.go:130] > Mar 18 13:09:44 multinode-894400 kubelet[1532]: I0318 13:09:44.495686    1532 kubelet_node_status.go:70] "Attempting to register node" node="multinode-894400"
	I0318 13:11:04.772155    2404 command_runner.go:130] > Mar 18 13:09:44 multinode-894400 kubelet[1532]: E0318 13:09:44.496847    1532 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.30.130.156:8443: connect: connection refused" node="multinode-894400"
	I0318 13:11:04.772155    2404 command_runner.go:130] > Mar 18 13:09:46 multinode-894400 kubelet[1532]: I0318 13:09:46.112867    1532 kubelet_node_status.go:70] "Attempting to register node" node="multinode-894400"
	I0318 13:11:04.772155    2404 command_runner.go:130] > Mar 18 13:09:48 multinode-894400 kubelet[1532]: I0318 13:09:48.454296    1532 kubelet_node_status.go:108] "Node was previously registered" node="multinode-894400"
	I0318 13:11:04.772155    2404 command_runner.go:130] > Mar 18 13:09:48 multinode-894400 kubelet[1532]: I0318 13:09:48.454504    1532 kubelet_node_status.go:73] "Successfully registered node" node="multinode-894400"
	I0318 13:11:04.772155    2404 command_runner.go:130] > Mar 18 13:09:48 multinode-894400 kubelet[1532]: I0318 13:09:48.466215    1532 kuberuntime_manager.go:1528] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	I0318 13:11:04.772155    2404 command_runner.go:130] > Mar 18 13:09:48 multinode-894400 kubelet[1532]: I0318 13:09:48.467399    1532 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	I0318 13:11:04.772155    2404 command_runner.go:130] > Mar 18 13:09:48 multinode-894400 kubelet[1532]: I0318 13:09:48.481710    1532 setters.go:552] "Node became not ready" node="multinode-894400" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-03-18T13:09:48Z","lastTransitionTime":"2024-03-18T13:09:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"}
	I0318 13:11:04.772155    2404 command_runner.go:130] > Mar 18 13:09:48 multinode-894400 kubelet[1532]: I0318 13:09:48.865400    1532 apiserver.go:52] "Watching apiserver"
	I0318 13:11:04.772155    2404 command_runner.go:130] > Mar 18 13:09:48 multinode-894400 kubelet[1532]: I0318 13:09:48.872433    1532 topology_manager.go:215] "Topology Admit Handler" podUID="0afe25f8-cbd6-412b-8698-7b547d1d49ca" podNamespace="kube-system" podName="kube-proxy-mc5tv"
	I0318 13:11:04.772155    2404 command_runner.go:130] > Mar 18 13:09:48 multinode-894400 kubelet[1532]: I0318 13:09:48.872584    1532 topology_manager.go:215] "Topology Admit Handler" podUID="0161d239-2d85-4246-b2fa-6c7374f2ecd6" podNamespace="kube-system" podName="kindnet-hhsxh"
	I0318 13:11:04.772155    2404 command_runner.go:130] > Mar 18 13:09:48 multinode-894400 kubelet[1532]: I0318 13:09:48.872794    1532 topology_manager.go:215] "Topology Admit Handler" podUID="1a018c55-846b-4dc2-992c-dc8fd82a6c67" podNamespace="kube-system" podName="coredns-5dd5756b68-456tm"
	I0318 13:11:04.772155    2404 command_runner.go:130] > Mar 18 13:09:48 multinode-894400 kubelet[1532]: I0318 13:09:48.872862    1532 topology_manager.go:215] "Topology Admit Handler" podUID="219bafbc-d807-44cf-9927-e4957f36ad70" podNamespace="kube-system" podName="storage-provisioner"
	I0318 13:11:04.772155    2404 command_runner.go:130] > Mar 18 13:09:48 multinode-894400 kubelet[1532]: I0318 13:09:48.872944    1532 topology_manager.go:215] "Topology Admit Handler" podUID="171cbfa4-4415-4169-b25d-ff5905fd513f" podNamespace="default" podName="busybox-5b5d89c9d6-c2997"
	I0318 13:11:04.772155    2404 command_runner.go:130] > Mar 18 13:09:48 multinode-894400 kubelet[1532]: E0318 13:09:48.873248    1532 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-c2997" podUID="171cbfa4-4415-4169-b25d-ff5905fd513f"
	I0318 13:11:04.772155    2404 command_runner.go:130] > Mar 18 13:09:48 multinode-894400 kubelet[1532]: I0318 13:09:48.873593    1532 kubelet.go:1872] "Trying to delete pod" pod="kube-system/kube-apiserver-multinode-894400" podUID="62aca0ea-36b0-4841-9616-61448f45e04a"
	I0318 13:11:04.772155    2404 command_runner.go:130] > Mar 18 13:09:48 multinode-894400 kubelet[1532]: I0318 13:09:48.873861    1532 kubelet.go:1872] "Trying to delete pod" pod="kube-system/etcd-multinode-894400" podUID="672a85d9-7526-4870-a33a-eac509ef3c3f"
	I0318 13:11:04.772155    2404 command_runner.go:130] > Mar 18 13:09:48 multinode-894400 kubelet[1532]: E0318 13:09:48.876751    1532 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-456tm" podUID="1a018c55-846b-4dc2-992c-dc8fd82a6c67"
	I0318 13:11:04.772686    2404 command_runner.go:130] > Mar 18 13:09:48 multinode-894400 kubelet[1532]: I0318 13:09:48.889248    1532 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
	I0318 13:11:04.772686    2404 command_runner.go:130] > Mar 18 13:09:48 multinode-894400 kubelet[1532]: I0318 13:09:48.964782    1532 kubelet.go:1877] "Deleted mirror pod because it is outdated" pod="kube-system/kube-apiserver-multinode-894400"
	I0318 13:11:04.772813    2404 command_runner.go:130] > Mar 18 13:09:48 multinode-894400 kubelet[1532]: I0318 13:09:48.965861    1532 kubelet.go:1877] "Deleted mirror pod because it is outdated" pod="kube-system/etcd-multinode-894400"
	I0318 13:11:04.772813    2404 command_runner.go:130] > Mar 18 13:09:48 multinode-894400 kubelet[1532]: I0318 13:09:48.966709    1532 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0afe25f8-cbd6-412b-8698-7b547d1d49ca-lib-modules\") pod \"kube-proxy-mc5tv\" (UID: \"0afe25f8-cbd6-412b-8698-7b547d1d49ca\") " pod="kube-system/kube-proxy-mc5tv"
	I0318 13:11:04.772813    2404 command_runner.go:130] > Mar 18 13:09:48 multinode-894400 kubelet[1532]: I0318 13:09:48.966761    1532 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/219bafbc-d807-44cf-9927-e4957f36ad70-tmp\") pod \"storage-provisioner\" (UID: \"219bafbc-d807-44cf-9927-e4957f36ad70\") " pod="kube-system/storage-provisioner"
	I0318 13:11:04.772813    2404 command_runner.go:130] > Mar 18 13:09:48 multinode-894400 kubelet[1532]: I0318 13:09:48.966802    1532 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/0161d239-2d85-4246-b2fa-6c7374f2ecd6-cni-cfg\") pod \"kindnet-hhsxh\" (UID: \"0161d239-2d85-4246-b2fa-6c7374f2ecd6\") " pod="kube-system/kindnet-hhsxh"
	I0318 13:11:04.772813    2404 command_runner.go:130] > Mar 18 13:09:48 multinode-894400 kubelet[1532]: I0318 13:09:48.966847    1532 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0afe25f8-cbd6-412b-8698-7b547d1d49ca-xtables-lock\") pod \"kube-proxy-mc5tv\" (UID: \"0afe25f8-cbd6-412b-8698-7b547d1d49ca\") " pod="kube-system/kube-proxy-mc5tv"
	I0318 13:11:04.772813    2404 command_runner.go:130] > Mar 18 13:09:48 multinode-894400 kubelet[1532]: I0318 13:09:48.966908    1532 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0161d239-2d85-4246-b2fa-6c7374f2ecd6-xtables-lock\") pod \"kindnet-hhsxh\" (UID: \"0161d239-2d85-4246-b2fa-6c7374f2ecd6\") " pod="kube-system/kindnet-hhsxh"
	I0318 13:11:04.772813    2404 command_runner.go:130] > Mar 18 13:09:48 multinode-894400 kubelet[1532]: I0318 13:09:48.966943    1532 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0161d239-2d85-4246-b2fa-6c7374f2ecd6-lib-modules\") pod \"kindnet-hhsxh\" (UID: \"0161d239-2d85-4246-b2fa-6c7374f2ecd6\") " pod="kube-system/kindnet-hhsxh"
	I0318 13:11:04.772813    2404 command_runner.go:130] > Mar 18 13:09:48 multinode-894400 kubelet[1532]: E0318 13:09:48.968339    1532 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0318 13:11:04.772813    2404 command_runner.go:130] > Mar 18 13:09:48 multinode-894400 kubelet[1532]: E0318 13:09:48.968477    1532 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1a018c55-846b-4dc2-992c-dc8fd82a6c67-config-volume podName:1a018c55-846b-4dc2-992c-dc8fd82a6c67 nodeName:}" failed. No retries permitted until 2024-03-18 13:09:49.468437755 +0000 UTC m=+6.779274091 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/1a018c55-846b-4dc2-992c-dc8fd82a6c67-config-volume") pod "coredns-5dd5756b68-456tm" (UID: "1a018c55-846b-4dc2-992c-dc8fd82a6c67") : object "kube-system"/"coredns" not registered
	I0318 13:11:04.772813    2404 command_runner.go:130] > Mar 18 13:09:49 multinode-894400 kubelet[1532]: E0318 13:09:49.000742    1532 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0318 13:11:04.772813    2404 command_runner.go:130] > Mar 18 13:09:49 multinode-894400 kubelet[1532]: E0318 13:09:49.000961    1532 projected.go:198] Error preparing data for projected volume kube-api-access-jwqqg for pod default/busybox-5b5d89c9d6-c2997: object "default"/"kube-root-ca.crt" not registered
	I0318 13:11:04.772813    2404 command_runner.go:130] > Mar 18 13:09:49 multinode-894400 kubelet[1532]: E0318 13:09:49.001575    1532 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/171cbfa4-4415-4169-b25d-ff5905fd513f-kube-api-access-jwqqg podName:171cbfa4-4415-4169-b25d-ff5905fd513f nodeName:}" failed. No retries permitted until 2024-03-18 13:09:49.501554367 +0000 UTC m=+6.812390603 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-jwqqg" (UniqueName: "kubernetes.io/projected/171cbfa4-4415-4169-b25d-ff5905fd513f-kube-api-access-jwqqg") pod "busybox-5b5d89c9d6-c2997" (UID: "171cbfa4-4415-4169-b25d-ff5905fd513f") : object "default"/"kube-root-ca.crt" not registered
	I0318 13:11:04.773338    2404 command_runner.go:130] > Mar 18 13:09:49 multinode-894400 kubelet[1532]: I0318 13:09:49.048369    1532 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="c396fd459c503d2e9464c73cc841d3d8" path="/var/lib/kubelet/pods/c396fd459c503d2e9464c73cc841d3d8/volumes"
	I0318 13:11:04.773338    2404 command_runner.go:130] > Mar 18 13:09:49 multinode-894400 kubelet[1532]: I0318 13:09:49.051334    1532 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="decc1d942b4d81359bb79c0349ffe9bb" path="/var/lib/kubelet/pods/decc1d942b4d81359bb79c0349ffe9bb/volumes"
	I0318 13:11:04.773338    2404 command_runner.go:130] > Mar 18 13:09:49 multinode-894400 kubelet[1532]: I0318 13:09:49.248524    1532 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-multinode-894400" podStartSLOduration=0.2483832 podCreationTimestamp="2024-03-18 13:09:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-03-18 13:09:49.21292898 +0000 UTC m=+6.523765316" watchObservedRunningTime="2024-03-18 13:09:49.2483832 +0000 UTC m=+6.559219436"
	I0318 13:11:04.773468    2404 command_runner.go:130] > Mar 18 13:09:49 multinode-894400 kubelet[1532]: I0318 13:09:49.285710    1532 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/etcd-multinode-894400" podStartSLOduration=0.285684326 podCreationTimestamp="2024-03-18 13:09:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-03-18 13:09:49.252285313 +0000 UTC m=+6.563121649" watchObservedRunningTime="2024-03-18 13:09:49.285684326 +0000 UTC m=+6.596520662"
	I0318 13:11:04.773505    2404 command_runner.go:130] > Mar 18 13:09:49 multinode-894400 kubelet[1532]: E0318 13:09:49.471617    1532 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0318 13:11:04.773564    2404 command_runner.go:130] > Mar 18 13:09:49 multinode-894400 kubelet[1532]: E0318 13:09:49.472236    1532 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1a018c55-846b-4dc2-992c-dc8fd82a6c67-config-volume podName:1a018c55-846b-4dc2-992c-dc8fd82a6c67 nodeName:}" failed. No retries permitted until 2024-03-18 13:09:50.471713653 +0000 UTC m=+7.782549889 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/1a018c55-846b-4dc2-992c-dc8fd82a6c67-config-volume") pod "coredns-5dd5756b68-456tm" (UID: "1a018c55-846b-4dc2-992c-dc8fd82a6c67") : object "kube-system"/"coredns" not registered
	I0318 13:11:04.773564    2404 command_runner.go:130] > Mar 18 13:09:49 multinode-894400 kubelet[1532]: E0318 13:09:49.573240    1532 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0318 13:11:04.773564    2404 command_runner.go:130] > Mar 18 13:09:49 multinode-894400 kubelet[1532]: E0318 13:09:49.573347    1532 projected.go:198] Error preparing data for projected volume kube-api-access-jwqqg for pod default/busybox-5b5d89c9d6-c2997: object "default"/"kube-root-ca.crt" not registered
	I0318 13:11:04.773564    2404 command_runner.go:130] > Mar 18 13:09:49 multinode-894400 kubelet[1532]: E0318 13:09:49.573459    1532 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/171cbfa4-4415-4169-b25d-ff5905fd513f-kube-api-access-jwqqg podName:171cbfa4-4415-4169-b25d-ff5905fd513f nodeName:}" failed. No retries permitted until 2024-03-18 13:09:50.573441997 +0000 UTC m=+7.884278233 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-jwqqg" (UniqueName: "kubernetes.io/projected/171cbfa4-4415-4169-b25d-ff5905fd513f-kube-api-access-jwqqg") pod "busybox-5b5d89c9d6-c2997" (UID: "171cbfa4-4415-4169-b25d-ff5905fd513f") : object "default"/"kube-root-ca.crt" not registered
	I0318 13:11:04.773564    2404 command_runner.go:130] > Mar 18 13:09:49 multinode-894400 kubelet[1532]: I0318 13:09:49.813611    1532 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="41035eff3b7db97e56ba25be141bf644acea4ddcb9d92ba3b421e6fa855a09af"
	I0318 13:11:04.773564    2404 command_runner.go:130] > Mar 18 13:09:50 multinode-894400 kubelet[1532]: I0318 13:09:50.142572    1532 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="86d74dec812cfd4842de7473d45ff320bfb2283d09d2d05eee29e45cbde9f5e9"
	I0318 13:11:04.773564    2404 command_runner.go:130] > Mar 18 13:09:50 multinode-894400 kubelet[1532]: I0318 13:09:50.219092    1532 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a9f21749669fe37a2d5bee9e7f45bf84eca6ae55870de8ac18bc774db7f81643"
	I0318 13:11:04.773564    2404 command_runner.go:130] > Mar 18 13:09:50 multinode-894400 kubelet[1532]: E0318 13:09:50.481085    1532 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0318 13:11:04.773564    2404 command_runner.go:130] > Mar 18 13:09:50 multinode-894400 kubelet[1532]: E0318 13:09:50.481271    1532 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1a018c55-846b-4dc2-992c-dc8fd82a6c67-config-volume podName:1a018c55-846b-4dc2-992c-dc8fd82a6c67 nodeName:}" failed. No retries permitted until 2024-03-18 13:09:52.48125246 +0000 UTC m=+9.792088696 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/1a018c55-846b-4dc2-992c-dc8fd82a6c67-config-volume") pod "coredns-5dd5756b68-456tm" (UID: "1a018c55-846b-4dc2-992c-dc8fd82a6c67") : object "kube-system"/"coredns" not registered
	I0318 13:11:04.773564    2404 command_runner.go:130] > Mar 18 13:09:50 multinode-894400 kubelet[1532]: E0318 13:09:50.581790    1532 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0318 13:11:04.773564    2404 command_runner.go:130] > Mar 18 13:09:50 multinode-894400 kubelet[1532]: E0318 13:09:50.581835    1532 projected.go:198] Error preparing data for projected volume kube-api-access-jwqqg for pod default/busybox-5b5d89c9d6-c2997: object "default"/"kube-root-ca.crt" not registered
	I0318 13:11:04.773564    2404 command_runner.go:130] > Mar 18 13:09:50 multinode-894400 kubelet[1532]: E0318 13:09:50.581885    1532 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/171cbfa4-4415-4169-b25d-ff5905fd513f-kube-api-access-jwqqg podName:171cbfa4-4415-4169-b25d-ff5905fd513f nodeName:}" failed. No retries permitted until 2024-03-18 13:09:52.5818703 +0000 UTC m=+9.892706536 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-jwqqg" (UniqueName: "kubernetes.io/projected/171cbfa4-4415-4169-b25d-ff5905fd513f-kube-api-access-jwqqg") pod "busybox-5b5d89c9d6-c2997" (UID: "171cbfa4-4415-4169-b25d-ff5905fd513f") : object "default"/"kube-root-ca.crt" not registered
	I0318 13:11:04.773564    2404 command_runner.go:130] > Mar 18 13:09:51 multinode-894400 kubelet[1532]: E0318 13:09:51.011273    1532 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-456tm" podUID="1a018c55-846b-4dc2-992c-dc8fd82a6c67"
	I0318 13:11:04.773564    2404 command_runner.go:130] > Mar 18 13:09:51 multinode-894400 kubelet[1532]: E0318 13:09:51.012015    1532 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-c2997" podUID="171cbfa4-4415-4169-b25d-ff5905fd513f"
	I0318 13:11:04.773564    2404 command_runner.go:130] > Mar 18 13:09:52 multinode-894400 kubelet[1532]: E0318 13:09:52.499973    1532 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0318 13:11:04.773564    2404 command_runner.go:130] > Mar 18 13:09:52 multinode-894400 kubelet[1532]: E0318 13:09:52.500149    1532 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1a018c55-846b-4dc2-992c-dc8fd82a6c67-config-volume podName:1a018c55-846b-4dc2-992c-dc8fd82a6c67 nodeName:}" failed. No retries permitted until 2024-03-18 13:09:56.500131973 +0000 UTC m=+13.810968209 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/1a018c55-846b-4dc2-992c-dc8fd82a6c67-config-volume") pod "coredns-5dd5756b68-456tm" (UID: "1a018c55-846b-4dc2-992c-dc8fd82a6c67") : object "kube-system"/"coredns" not registered
	I0318 13:11:04.774087    2404 command_runner.go:130] > Mar 18 13:09:52 multinode-894400 kubelet[1532]: E0318 13:09:52.601982    1532 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0318 13:11:04.774087    2404 command_runner.go:130] > Mar 18 13:09:52 multinode-894400 kubelet[1532]: E0318 13:09:52.602006    1532 projected.go:198] Error preparing data for projected volume kube-api-access-jwqqg for pod default/busybox-5b5d89c9d6-c2997: object "default"/"kube-root-ca.crt" not registered
	I0318 13:11:04.774087    2404 command_runner.go:130] > Mar 18 13:09:52 multinode-894400 kubelet[1532]: E0318 13:09:52.602087    1532 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/171cbfa4-4415-4169-b25d-ff5905fd513f-kube-api-access-jwqqg podName:171cbfa4-4415-4169-b25d-ff5905fd513f nodeName:}" failed. No retries permitted until 2024-03-18 13:09:56.602073317 +0000 UTC m=+13.912909553 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-jwqqg" (UniqueName: "kubernetes.io/projected/171cbfa4-4415-4169-b25d-ff5905fd513f-kube-api-access-jwqqg") pod "busybox-5b5d89c9d6-c2997" (UID: "171cbfa4-4415-4169-b25d-ff5905fd513f") : object "default"/"kube-root-ca.crt" not registered
	I0318 13:11:04.774198    2404 command_runner.go:130] > Mar 18 13:09:53 multinode-894400 kubelet[1532]: E0318 13:09:53.009672    1532 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-456tm" podUID="1a018c55-846b-4dc2-992c-dc8fd82a6c67"
	I0318 13:11:04.774231    2404 command_runner.go:130] > Mar 18 13:09:53 multinode-894400 kubelet[1532]: E0318 13:09:53.010317    1532 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-c2997" podUID="171cbfa4-4415-4169-b25d-ff5905fd513f"
	I0318 13:11:04.774291    2404 command_runner.go:130] > Mar 18 13:09:55 multinode-894400 kubelet[1532]: E0318 13:09:55.010917    1532 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-456tm" podUID="1a018c55-846b-4dc2-992c-dc8fd82a6c67"
	I0318 13:11:04.774291    2404 command_runner.go:130] > Mar 18 13:09:55 multinode-894400 kubelet[1532]: E0318 13:09:55.011786    1532 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-c2997" podUID="171cbfa4-4415-4169-b25d-ff5905fd513f"
	I0318 13:11:04.774291    2404 command_runner.go:130] > Mar 18 13:09:56 multinode-894400 kubelet[1532]: E0318 13:09:56.539408    1532 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0318 13:11:04.774291    2404 command_runner.go:130] > Mar 18 13:09:56 multinode-894400 kubelet[1532]: E0318 13:09:56.539534    1532 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1a018c55-846b-4dc2-992c-dc8fd82a6c67-config-volume podName:1a018c55-846b-4dc2-992c-dc8fd82a6c67 nodeName:}" failed. No retries permitted until 2024-03-18 13:10:04.539515204 +0000 UTC m=+21.850351440 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/1a018c55-846b-4dc2-992c-dc8fd82a6c67-config-volume") pod "coredns-5dd5756b68-456tm" (UID: "1a018c55-846b-4dc2-992c-dc8fd82a6c67") : object "kube-system"/"coredns" not registered
	I0318 13:11:04.774291    2404 command_runner.go:130] > Mar 18 13:09:56 multinode-894400 kubelet[1532]: E0318 13:09:56.639919    1532 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0318 13:11:04.774291    2404 command_runner.go:130] > Mar 18 13:09:56 multinode-894400 kubelet[1532]: E0318 13:09:56.639948    1532 projected.go:198] Error preparing data for projected volume kube-api-access-jwqqg for pod default/busybox-5b5d89c9d6-c2997: object "default"/"kube-root-ca.crt" not registered
	I0318 13:11:04.774291    2404 command_runner.go:130] > Mar 18 13:09:56 multinode-894400 kubelet[1532]: E0318 13:09:56.639998    1532 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/171cbfa4-4415-4169-b25d-ff5905fd513f-kube-api-access-jwqqg podName:171cbfa4-4415-4169-b25d-ff5905fd513f nodeName:}" failed. No retries permitted until 2024-03-18 13:10:04.639981843 +0000 UTC m=+21.950818079 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-jwqqg" (UniqueName: "kubernetes.io/projected/171cbfa4-4415-4169-b25d-ff5905fd513f-kube-api-access-jwqqg") pod "busybox-5b5d89c9d6-c2997" (UID: "171cbfa4-4415-4169-b25d-ff5905fd513f") : object "default"/"kube-root-ca.crt" not registered
	I0318 13:11:04.774291    2404 command_runner.go:130] > Mar 18 13:09:57 multinode-894400 kubelet[1532]: E0318 13:09:57.009521    1532 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-456tm" podUID="1a018c55-846b-4dc2-992c-dc8fd82a6c67"
	I0318 13:11:04.774291    2404 command_runner.go:130] > Mar 18 13:09:57 multinode-894400 kubelet[1532]: E0318 13:09:57.010257    1532 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-c2997" podUID="171cbfa4-4415-4169-b25d-ff5905fd513f"
	I0318 13:11:04.774291    2404 command_runner.go:130] > Mar 18 13:09:59 multinode-894400 kubelet[1532]: E0318 13:09:59.011021    1532 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-456tm" podUID="1a018c55-846b-4dc2-992c-dc8fd82a6c67"
	I0318 13:11:04.774291    2404 command_runner.go:130] > Mar 18 13:09:59 multinode-894400 kubelet[1532]: E0318 13:09:59.011361    1532 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-c2997" podUID="171cbfa4-4415-4169-b25d-ff5905fd513f"
	I0318 13:11:04.774291    2404 command_runner.go:130] > Mar 18 13:10:01 multinode-894400 kubelet[1532]: E0318 13:10:01.009167    1532 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-456tm" podUID="1a018c55-846b-4dc2-992c-dc8fd82a6c67"
	I0318 13:11:04.774291    2404 command_runner.go:130] > Mar 18 13:10:01 multinode-894400 kubelet[1532]: E0318 13:10:01.009678    1532 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-c2997" podUID="171cbfa4-4415-4169-b25d-ff5905fd513f"
	I0318 13:11:04.774291    2404 command_runner.go:130] > Mar 18 13:10:03 multinode-894400 kubelet[1532]: E0318 13:10:03.010168    1532 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-456tm" podUID="1a018c55-846b-4dc2-992c-dc8fd82a6c67"
	I0318 13:11:04.774291    2404 command_runner.go:130] > Mar 18 13:10:03 multinode-894400 kubelet[1532]: E0318 13:10:03.011736    1532 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-c2997" podUID="171cbfa4-4415-4169-b25d-ff5905fd513f"
	I0318 13:11:04.774291    2404 command_runner.go:130] > Mar 18 13:10:04 multinode-894400 kubelet[1532]: E0318 13:10:04.603257    1532 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0318 13:11:04.774291    2404 command_runner.go:130] > Mar 18 13:10:04 multinode-894400 kubelet[1532]: E0318 13:10:04.603387    1532 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1a018c55-846b-4dc2-992c-dc8fd82a6c67-config-volume podName:1a018c55-846b-4dc2-992c-dc8fd82a6c67 nodeName:}" failed. No retries permitted until 2024-03-18 13:10:20.60337037 +0000 UTC m=+37.914206606 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/1a018c55-846b-4dc2-992c-dc8fd82a6c67-config-volume") pod "coredns-5dd5756b68-456tm" (UID: "1a018c55-846b-4dc2-992c-dc8fd82a6c67") : object "kube-system"/"coredns" not registered
	I0318 13:11:04.774825    2404 command_runner.go:130] > Mar 18 13:10:04 multinode-894400 kubelet[1532]: E0318 13:10:04.704132    1532 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0318 13:11:04.774825    2404 command_runner.go:130] > Mar 18 13:10:04 multinode-894400 kubelet[1532]: E0318 13:10:04.704169    1532 projected.go:198] Error preparing data for projected volume kube-api-access-jwqqg for pod default/busybox-5b5d89c9d6-c2997: object "default"/"kube-root-ca.crt" not registered
	I0318 13:11:04.774934    2404 command_runner.go:130] > Mar 18 13:10:04 multinode-894400 kubelet[1532]: E0318 13:10:04.704219    1532 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/171cbfa4-4415-4169-b25d-ff5905fd513f-kube-api-access-jwqqg podName:171cbfa4-4415-4169-b25d-ff5905fd513f nodeName:}" failed. No retries permitted until 2024-03-18 13:10:20.704204798 +0000 UTC m=+38.015041034 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-jwqqg" (UniqueName: "kubernetes.io/projected/171cbfa4-4415-4169-b25d-ff5905fd513f-kube-api-access-jwqqg") pod "busybox-5b5d89c9d6-c2997" (UID: "171cbfa4-4415-4169-b25d-ff5905fd513f") : object "default"/"kube-root-ca.crt" not registered
	I0318 13:11:04.774934    2404 command_runner.go:130] > Mar 18 13:10:05 multinode-894400 kubelet[1532]: E0318 13:10:05.009461    1532 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-456tm" podUID="1a018c55-846b-4dc2-992c-dc8fd82a6c67"
	I0318 13:11:04.774934    2404 command_runner.go:130] > Mar 18 13:10:05 multinode-894400 kubelet[1532]: E0318 13:10:05.010204    1532 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-c2997" podUID="171cbfa4-4415-4169-b25d-ff5905fd513f"
	I0318 13:11:04.774934    2404 command_runner.go:130] > Mar 18 13:10:07 multinode-894400 kubelet[1532]: E0318 13:10:07.009925    1532 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-456tm" podUID="1a018c55-846b-4dc2-992c-dc8fd82a6c67"
	I0318 13:11:04.774934    2404 command_runner.go:130] > Mar 18 13:10:07 multinode-894400 kubelet[1532]: E0318 13:10:07.010942    1532 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-c2997" podUID="171cbfa4-4415-4169-b25d-ff5905fd513f"
	I0318 13:11:04.774934    2404 command_runner.go:130] > Mar 18 13:10:09 multinode-894400 kubelet[1532]: E0318 13:10:09.010506    1532 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-456tm" podUID="1a018c55-846b-4dc2-992c-dc8fd82a6c67"
	I0318 13:11:04.774934    2404 command_runner.go:130] > Mar 18 13:10:09 multinode-894400 kubelet[1532]: E0318 13:10:09.011883    1532 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-c2997" podUID="171cbfa4-4415-4169-b25d-ff5905fd513f"
	I0318 13:11:04.774934    2404 command_runner.go:130] > Mar 18 13:10:11 multinode-894400 kubelet[1532]: E0318 13:10:11.009145    1532 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-c2997" podUID="171cbfa4-4415-4169-b25d-ff5905fd513f"
	I0318 13:11:04.774934    2404 command_runner.go:130] > Mar 18 13:10:11 multinode-894400 kubelet[1532]: E0318 13:10:11.011730    1532 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-456tm" podUID="1a018c55-846b-4dc2-992c-dc8fd82a6c67"
	I0318 13:11:04.774934    2404 command_runner.go:130] > Mar 18 13:10:13 multinode-894400 kubelet[1532]: E0318 13:10:13.010103    1532 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-456tm" podUID="1a018c55-846b-4dc2-992c-dc8fd82a6c67"
	I0318 13:11:04.774934    2404 command_runner.go:130] > Mar 18 13:10:13 multinode-894400 kubelet[1532]: E0318 13:10:13.010921    1532 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-c2997" podUID="171cbfa4-4415-4169-b25d-ff5905fd513f"
	I0318 13:11:04.774934    2404 command_runner.go:130] > Mar 18 13:10:15 multinode-894400 kubelet[1532]: E0318 13:10:15.009361    1532 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-456tm" podUID="1a018c55-846b-4dc2-992c-dc8fd82a6c67"
	I0318 13:11:04.775462    2404 command_runner.go:130] > Mar 18 13:10:15 multinode-894400 kubelet[1532]: E0318 13:10:15.010565    1532 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-c2997" podUID="171cbfa4-4415-4169-b25d-ff5905fd513f"
	I0318 13:11:04.775462    2404 command_runner.go:130] > Mar 18 13:10:17 multinode-894400 kubelet[1532]: E0318 13:10:17.009688    1532 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-456tm" podUID="1a018c55-846b-4dc2-992c-dc8fd82a6c67"
	I0318 13:11:04.775562    2404 command_runner.go:130] > Mar 18 13:10:17 multinode-894400 kubelet[1532]: E0318 13:10:17.010200    1532 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-c2997" podUID="171cbfa4-4415-4169-b25d-ff5905fd513f"
	I0318 13:11:04.775562    2404 command_runner.go:130] > Mar 18 13:10:19 multinode-894400 kubelet[1532]: E0318 13:10:19.010187    1532 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-c2997" podUID="171cbfa4-4415-4169-b25d-ff5905fd513f"
	I0318 13:11:04.775562    2404 command_runner.go:130] > Mar 18 13:10:19 multinode-894400 kubelet[1532]: E0318 13:10:19.010730    1532 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-456tm" podUID="1a018c55-846b-4dc2-992c-dc8fd82a6c67"
	I0318 13:11:04.775562    2404 command_runner.go:130] > Mar 18 13:10:20 multinode-894400 kubelet[1532]: E0318 13:10:20.639546    1532 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0318 13:11:04.775562    2404 command_runner.go:130] > Mar 18 13:10:20 multinode-894400 kubelet[1532]: E0318 13:10:20.639747    1532 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1a018c55-846b-4dc2-992c-dc8fd82a6c67-config-volume podName:1a018c55-846b-4dc2-992c-dc8fd82a6c67 nodeName:}" failed. No retries permitted until 2024-03-18 13:10:52.639723825 +0000 UTC m=+69.950560161 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/1a018c55-846b-4dc2-992c-dc8fd82a6c67-config-volume") pod "coredns-5dd5756b68-456tm" (UID: "1a018c55-846b-4dc2-992c-dc8fd82a6c67") : object "kube-system"/"coredns" not registered
	I0318 13:11:04.775562    2404 command_runner.go:130] > Mar 18 13:10:20 multinode-894400 kubelet[1532]: E0318 13:10:20.740353    1532 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0318 13:11:04.775562    2404 command_runner.go:130] > Mar 18 13:10:20 multinode-894400 kubelet[1532]: E0318 13:10:20.740517    1532 projected.go:198] Error preparing data for projected volume kube-api-access-jwqqg for pod default/busybox-5b5d89c9d6-c2997: object "default"/"kube-root-ca.crt" not registered
	I0318 13:11:04.775562    2404 command_runner.go:130] > Mar 18 13:10:20 multinode-894400 kubelet[1532]: E0318 13:10:20.740585    1532 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/171cbfa4-4415-4169-b25d-ff5905fd513f-kube-api-access-jwqqg podName:171cbfa4-4415-4169-b25d-ff5905fd513f nodeName:}" failed. No retries permitted until 2024-03-18 13:10:52.740566824 +0000 UTC m=+70.051403160 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-jwqqg" (UniqueName: "kubernetes.io/projected/171cbfa4-4415-4169-b25d-ff5905fd513f-kube-api-access-jwqqg") pod "busybox-5b5d89c9d6-c2997" (UID: "171cbfa4-4415-4169-b25d-ff5905fd513f") : object "default"/"kube-root-ca.crt" not registered
	I0318 13:11:04.775562    2404 command_runner.go:130] > Mar 18 13:10:21 multinode-894400 kubelet[1532]: E0318 13:10:21.010015    1532 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-456tm" podUID="1a018c55-846b-4dc2-992c-dc8fd82a6c67"
	I0318 13:11:04.775562    2404 command_runner.go:130] > Mar 18 13:10:21 multinode-894400 kubelet[1532]: E0318 13:10:21.011108    1532 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-c2997" podUID="171cbfa4-4415-4169-b25d-ff5905fd513f"
	I0318 13:11:04.775562    2404 command_runner.go:130] > Mar 18 13:10:21 multinode-894400 kubelet[1532]: I0318 13:10:21.647969    1532 scope.go:117] "RemoveContainer" containerID="a2c499223090cc38a7b425469621fb6c8dbc443ab7eb0d5841f1fdcea2922366"
	I0318 13:11:04.775562    2404 command_runner.go:130] > Mar 18 13:10:21 multinode-894400 kubelet[1532]: I0318 13:10:21.651387    1532 scope.go:117] "RemoveContainer" containerID="46c0cf90d385f0a29de6e795bc03f2d1837ea1819ef2389992a26aaf77e63264"
	I0318 13:11:04.775562    2404 command_runner.go:130] > Mar 18 13:10:21 multinode-894400 kubelet[1532]: E0318 13:10:21.652104    1532 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(219bafbc-d807-44cf-9927-e4957f36ad70)\"" pod="kube-system/storage-provisioner" podUID="219bafbc-d807-44cf-9927-e4957f36ad70"
	I0318 13:11:04.775562    2404 command_runner.go:130] > Mar 18 13:10:23 multinode-894400 kubelet[1532]: E0318 13:10:23.010116    1532 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-456tm" podUID="1a018c55-846b-4dc2-992c-dc8fd82a6c67"
	I0318 13:11:04.776085    2404 command_runner.go:130] > Mar 18 13:10:23 multinode-894400 kubelet[1532]: E0318 13:10:23.010816    1532 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5b5d89c9d6-c2997" podUID="171cbfa4-4415-4169-b25d-ff5905fd513f"
	I0318 13:11:04.776085    2404 command_runner.go:130] > Mar 18 13:10:23 multinode-894400 kubelet[1532]: I0318 13:10:23.777913    1532 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	I0318 13:11:04.776085    2404 command_runner.go:130] > Mar 18 13:10:35 multinode-894400 kubelet[1532]: I0318 13:10:35.009532    1532 scope.go:117] "RemoveContainer" containerID="46c0cf90d385f0a29de6e795bc03f2d1837ea1819ef2389992a26aaf77e63264"
	I0318 13:11:04.776085    2404 command_runner.go:130] > Mar 18 13:10:43 multinode-894400 kubelet[1532]: I0318 13:10:43.012571    1532 scope.go:117] "RemoveContainer" containerID="56d1819beb10ed198593d8a369f601faf82bf81ff1aecdbffe7114cd1265351b"
	I0318 13:11:04.776085    2404 command_runner.go:130] > Mar 18 13:10:43 multinode-894400 kubelet[1532]: E0318 13:10:43.030354    1532 iptables.go:575] "Could not set up iptables canary" err=<
	I0318 13:11:04.776085    2404 command_runner.go:130] > Mar 18 13:10:43 multinode-894400 kubelet[1532]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	I0318 13:11:04.776085    2404 command_runner.go:130] > Mar 18 13:10:43 multinode-894400 kubelet[1532]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	I0318 13:11:04.776207    2404 command_runner.go:130] > Mar 18 13:10:43 multinode-894400 kubelet[1532]:         Perhaps ip6tables or your kernel needs to be upgraded.
	I0318 13:11:04.776207    2404 command_runner.go:130] > Mar 18 13:10:43 multinode-894400 kubelet[1532]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	I0318 13:11:04.776207    2404 command_runner.go:130] > Mar 18 13:10:43 multinode-894400 kubelet[1532]: I0318 13:10:43.056417    1532 scope.go:117] "RemoveContainer" containerID="c51f768a2f642fdffc6de67f101be5abd8bbaec83ef13011b47efab5aad27134"
	I0318 13:11:04.814359    2404 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:11:04.814359    2404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 13:11:05.027394    2404 command_runner.go:130] > Name:               multinode-894400
	I0318 13:11:05.028373    2404 command_runner.go:130] > Roles:              control-plane
	I0318 13:11:05.028373    2404 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0318 13:11:05.028373    2404 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0318 13:11:05.028373    2404 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0318 13:11:05.028373    2404 command_runner.go:130] >                     kubernetes.io/hostname=multinode-894400
	I0318 13:11:05.028373    2404 command_runner.go:130] >                     kubernetes.io/os=linux
	I0318 13:11:05.028373    2404 command_runner.go:130] >                     minikube.k8s.io/commit=16bdcbec856cf730004e5bed78d1b7625f13388a
	I0318 13:11:05.028373    2404 command_runner.go:130] >                     minikube.k8s.io/name=multinode-894400
	I0318 13:11:05.028373    2404 command_runner.go:130] >                     minikube.k8s.io/primary=true
	I0318 13:11:05.028449    2404 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_03_18T12_47_29_0700
	I0318 13:11:05.028449    2404 command_runner.go:130] >                     minikube.k8s.io/version=v1.32.0
	I0318 13:11:05.028449    2404 command_runner.go:130] >                     node-role.kubernetes.io/control-plane=
	I0318 13:11:05.028489    2404 command_runner.go:130] >                     node.kubernetes.io/exclude-from-external-load-balancers=
	I0318 13:11:05.028489    2404 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0318 13:11:05.028489    2404 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0318 13:11:05.028489    2404 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0318 13:11:05.028489    2404 command_runner.go:130] > CreationTimestamp:  Mon, 18 Mar 2024 12:47:24 +0000
	I0318 13:11:05.028489    2404 command_runner.go:130] > Taints:             <none>
	I0318 13:11:05.028489    2404 command_runner.go:130] > Unschedulable:      false
	I0318 13:11:05.028489    2404 command_runner.go:130] > Lease:
	I0318 13:11:05.028489    2404 command_runner.go:130] >   HolderIdentity:  multinode-894400
	I0318 13:11:05.028489    2404 command_runner.go:130] >   AcquireTime:     <unset>
	I0318 13:11:05.028489    2404 command_runner.go:130] >   RenewTime:       Mon, 18 Mar 2024 13:11:00 +0000
	I0318 13:11:05.028489    2404 command_runner.go:130] > Conditions:
	I0318 13:11:05.028489    2404 command_runner.go:130] >   Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	I0318 13:11:05.028489    2404 command_runner.go:130] >   ----             ------  -----------------                 ------------------                ------                       -------
	I0318 13:11:05.028489    2404 command_runner.go:130] >   MemoryPressure   False   Mon, 18 Mar 2024 13:10:23 +0000   Mon, 18 Mar 2024 12:47:23 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	I0318 13:11:05.028489    2404 command_runner.go:130] >   DiskPressure     False   Mon, 18 Mar 2024 13:10:23 +0000   Mon, 18 Mar 2024 12:47:23 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	I0318 13:11:05.028489    2404 command_runner.go:130] >   PIDPressure      False   Mon, 18 Mar 2024 13:10:23 +0000   Mon, 18 Mar 2024 12:47:23 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	I0318 13:11:05.028489    2404 command_runner.go:130] >   Ready            True    Mon, 18 Mar 2024 13:10:23 +0000   Mon, 18 Mar 2024 13:10:23 +0000   KubeletReady                 kubelet is posting ready status
	I0318 13:11:05.028489    2404 command_runner.go:130] > Addresses:
	I0318 13:11:05.028489    2404 command_runner.go:130] >   InternalIP:  172.30.130.156
	I0318 13:11:05.028489    2404 command_runner.go:130] >   Hostname:    multinode-894400
	I0318 13:11:05.028489    2404 command_runner.go:130] > Capacity:
	I0318 13:11:05.028489    2404 command_runner.go:130] >   cpu:                2
	I0318 13:11:05.028489    2404 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0318 13:11:05.028489    2404 command_runner.go:130] >   hugepages-2Mi:      0
	I0318 13:11:05.028489    2404 command_runner.go:130] >   memory:             2164268Ki
	I0318 13:11:05.028489    2404 command_runner.go:130] >   pods:               110
	I0318 13:11:05.028489    2404 command_runner.go:130] > Allocatable:
	I0318 13:11:05.028489    2404 command_runner.go:130] >   cpu:                2
	I0318 13:11:05.028489    2404 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0318 13:11:05.028489    2404 command_runner.go:130] >   hugepages-2Mi:      0
	I0318 13:11:05.028489    2404 command_runner.go:130] >   memory:             2164268Ki
	I0318 13:11:05.028489    2404 command_runner.go:130] >   pods:               110
	I0318 13:11:05.028489    2404 command_runner.go:130] > System Info:
	I0318 13:11:05.028489    2404 command_runner.go:130] >   Machine ID:                 80e7b822d2e94d26a09acd4a1bac452b
	I0318 13:11:05.028489    2404 command_runner.go:130] >   System UUID:                5c78c013-e4e8-1041-99c8-95cd760ef34f
	I0318 13:11:05.028489    2404 command_runner.go:130] >   Boot ID:                    a334ae39-1c10-417c-93ad-d28546d7793f
	I0318 13:11:05.028489    2404 command_runner.go:130] >   Kernel Version:             5.10.207
	I0318 13:11:05.028489    2404 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0318 13:11:05.028489    2404 command_runner.go:130] >   Operating System:           linux
	I0318 13:11:05.028489    2404 command_runner.go:130] >   Architecture:               amd64
	I0318 13:11:05.028489    2404 command_runner.go:130] >   Container Runtime Version:  docker://25.0.4
	I0318 13:11:05.028489    2404 command_runner.go:130] >   Kubelet Version:            v1.28.4
	I0318 13:11:05.028489    2404 command_runner.go:130] >   Kube-Proxy Version:         v1.28.4
	I0318 13:11:05.028489    2404 command_runner.go:130] > PodCIDR:                      10.244.0.0/24
	I0318 13:11:05.029081    2404 command_runner.go:130] > PodCIDRs:                     10.244.0.0/24
	I0318 13:11:05.029081    2404 command_runner.go:130] > Non-terminated Pods:          (9 in total)
	I0318 13:11:05.029081    2404 command_runner.go:130] >   Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0318 13:11:05.029081    2404 command_runner.go:130] >   ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	I0318 13:11:05.029081    2404 command_runner.go:130] >   default                     busybox-5b5d89c9d6-c2997                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	I0318 13:11:05.029081    2404 command_runner.go:130] >   kube-system                 coredns-5dd5756b68-456tm                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     23m
	I0318 13:11:05.029181    2404 command_runner.go:130] >   kube-system                 etcd-multinode-894400                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         75s
	I0318 13:11:05.029181    2404 command_runner.go:130] >   kube-system                 kindnet-hhsxh                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      23m
	I0318 13:11:05.029181    2404 command_runner.go:130] >   kube-system                 kube-apiserver-multinode-894400             250m (12%)    0 (0%)      0 (0%)           0 (0%)         75s
	I0318 13:11:05.029181    2404 command_runner.go:130] >   kube-system                 kube-controller-manager-multinode-894400    200m (10%)    0 (0%)      0 (0%)           0 (0%)         23m
	I0318 13:11:05.029263    2404 command_runner.go:130] >   kube-system                 kube-proxy-mc5tv                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	I0318 13:11:05.029263    2404 command_runner.go:130] >   kube-system                 kube-scheduler-multinode-894400             100m (5%)     0 (0%)      0 (0%)           0 (0%)         23m
	I0318 13:11:05.029263    2404 command_runner.go:130] >   kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	I0318 13:11:05.029263    2404 command_runner.go:130] > Allocated resources:
	I0318 13:11:05.029263    2404 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0318 13:11:05.029263    2404 command_runner.go:130] >   Resource           Requests     Limits
	I0318 13:11:05.029348    2404 command_runner.go:130] >   --------           --------     ------
	I0318 13:11:05.029348    2404 command_runner.go:130] >   cpu                850m (42%)   100m (5%)
	I0318 13:11:05.029348    2404 command_runner.go:130] >   memory             220Mi (10%)  220Mi (10%)
	I0318 13:11:05.029348    2404 command_runner.go:130] >   ephemeral-storage  0 (0%)       0 (0%)
	I0318 13:11:05.029348    2404 command_runner.go:130] >   hugepages-2Mi      0 (0%)       0 (0%)
	I0318 13:11:05.029348    2404 command_runner.go:130] > Events:
	I0318 13:11:05.029348    2404 command_runner.go:130] >   Type    Reason                   Age                From             Message
	I0318 13:11:05.029348    2404 command_runner.go:130] >   ----    ------                   ----               ----             -------
	I0318 13:11:05.029348    2404 command_runner.go:130] >   Normal  Starting                 23m                kube-proxy       
	I0318 13:11:05.029423    2404 command_runner.go:130] >   Normal  Starting                 74s                kube-proxy       
	I0318 13:11:05.029423    2404 command_runner.go:130] >   Normal  NodeHasSufficientMemory  23m (x8 over 23m)  kubelet          Node multinode-894400 status is now: NodeHasSufficientMemory
	I0318 13:11:05.029423    2404 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    23m (x8 over 23m)  kubelet          Node multinode-894400 status is now: NodeHasNoDiskPressure
	I0318 13:11:05.029423    2404 command_runner.go:130] >   Normal  NodeHasSufficientPID     23m (x7 over 23m)  kubelet          Node multinode-894400 status is now: NodeHasSufficientPID
	I0318 13:11:05.029491    2404 command_runner.go:130] >   Normal  NodeAllocatableEnforced  23m                kubelet          Updated Node Allocatable limit across pods
	I0318 13:11:05.029491    2404 command_runner.go:130] >   Normal  NodeHasSufficientMemory  23m                kubelet          Node multinode-894400 status is now: NodeHasSufficientMemory
	I0318 13:11:05.029491    2404 command_runner.go:130] >   Normal  NodeAllocatableEnforced  23m                kubelet          Updated Node Allocatable limit across pods
	I0318 13:11:05.029491    2404 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    23m                kubelet          Node multinode-894400 status is now: NodeHasNoDiskPressure
	I0318 13:11:05.029558    2404 command_runner.go:130] >   Normal  NodeHasSufficientPID     23m                kubelet          Node multinode-894400 status is now: NodeHasSufficientPID
	I0318 13:11:05.029558    2404 command_runner.go:130] >   Normal  Starting                 23m                kubelet          Starting kubelet.
	I0318 13:11:05.029558    2404 command_runner.go:130] >   Normal  RegisteredNode           23m                node-controller  Node multinode-894400 event: Registered Node multinode-894400 in Controller
	I0318 13:11:05.029558    2404 command_runner.go:130] >   Normal  NodeReady                23m                kubelet          Node multinode-894400 status is now: NodeReady
	I0318 13:11:05.029558    2404 command_runner.go:130] >   Normal  Starting                 82s                kubelet          Starting kubelet.
	I0318 13:11:05.029625    2404 command_runner.go:130] >   Normal  NodeHasSufficientMemory  81s (x8 over 82s)  kubelet          Node multinode-894400 status is now: NodeHasSufficientMemory
	I0318 13:11:05.029625    2404 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    81s (x8 over 82s)  kubelet          Node multinode-894400 status is now: NodeHasNoDiskPressure
	I0318 13:11:05.029625    2404 command_runner.go:130] >   Normal  NodeHasSufficientPID     81s (x7 over 82s)  kubelet          Node multinode-894400 status is now: NodeHasSufficientPID
	I0318 13:11:05.029625    2404 command_runner.go:130] >   Normal  NodeAllocatableEnforced  81s                kubelet          Updated Node Allocatable limit across pods
	I0318 13:11:05.029709    2404 command_runner.go:130] >   Normal  RegisteredNode           63s                node-controller  Node multinode-894400 event: Registered Node multinode-894400 in Controller
	I0318 13:11:05.029832    2404 command_runner.go:130] > Name:               multinode-894400-m02
	I0318 13:11:05.029832    2404 command_runner.go:130] > Roles:              <none>
	I0318 13:11:05.029832    2404 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0318 13:11:05.029858    2404 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0318 13:11:05.029858    2404 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0318 13:11:05.029888    2404 command_runner.go:130] >                     kubernetes.io/hostname=multinode-894400-m02
	I0318 13:11:05.029888    2404 command_runner.go:130] >                     kubernetes.io/os=linux
	I0318 13:11:05.029888    2404 command_runner.go:130] >                     minikube.k8s.io/commit=16bdcbec856cf730004e5bed78d1b7625f13388a
	I0318 13:11:05.029888    2404 command_runner.go:130] >                     minikube.k8s.io/name=multinode-894400
	I0318 13:11:05.029888    2404 command_runner.go:130] >                     minikube.k8s.io/primary=false
	I0318 13:11:05.029888    2404 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_03_18T12_50_35_0700
	I0318 13:11:05.029888    2404 command_runner.go:130] >                     minikube.k8s.io/version=v1.32.0
	I0318 13:11:05.029888    2404 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0318 13:11:05.029985    2404 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0318 13:11:05.029985    2404 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0318 13:11:05.029985    2404 command_runner.go:130] > CreationTimestamp:  Mon, 18 Mar 2024 12:50:34 +0000
	I0318 13:11:05.029985    2404 command_runner.go:130] > Taints:             node.kubernetes.io/unreachable:NoExecute
	I0318 13:11:05.030096    2404 command_runner.go:130] >                     node.kubernetes.io/unreachable:NoSchedule
	I0318 13:11:05.030096    2404 command_runner.go:130] > Unschedulable:      false
	I0318 13:11:05.030096    2404 command_runner.go:130] > Lease:
	I0318 13:11:05.030096    2404 command_runner.go:130] >   HolderIdentity:  multinode-894400-m02
	I0318 13:11:05.030096    2404 command_runner.go:130] >   AcquireTime:     <unset>
	I0318 13:11:05.030096    2404 command_runner.go:130] >   RenewTime:       Mon, 18 Mar 2024 13:06:44 +0000
	I0318 13:11:05.030096    2404 command_runner.go:130] > Conditions:
	I0318 13:11:05.030096    2404 command_runner.go:130] >   Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	I0318 13:11:05.030096    2404 command_runner.go:130] >   ----             ------    -----------------                 ------------------                ------              -------
	I0318 13:11:05.030096    2404 command_runner.go:130] >   MemoryPressure   Unknown   Mon, 18 Mar 2024 13:01:49 +0000   Mon, 18 Mar 2024 13:10:41 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0318 13:11:05.030218    2404 command_runner.go:130] >   DiskPressure     Unknown   Mon, 18 Mar 2024 13:01:49 +0000   Mon, 18 Mar 2024 13:10:41 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0318 13:11:05.030218    2404 command_runner.go:130] >   PIDPressure      Unknown   Mon, 18 Mar 2024 13:01:49 +0000   Mon, 18 Mar 2024 13:10:41 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0318 13:11:05.030218    2404 command_runner.go:130] >   Ready            Unknown   Mon, 18 Mar 2024 13:01:49 +0000   Mon, 18 Mar 2024 13:10:41 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0318 13:11:05.030218    2404 command_runner.go:130] > Addresses:
	I0318 13:11:05.030218    2404 command_runner.go:130] >   InternalIP:  172.30.140.66
	I0318 13:11:05.030218    2404 command_runner.go:130] >   Hostname:    multinode-894400-m02
	I0318 13:11:05.030218    2404 command_runner.go:130] > Capacity:
	I0318 13:11:05.030218    2404 command_runner.go:130] >   cpu:                2
	I0318 13:11:05.030218    2404 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0318 13:11:05.030218    2404 command_runner.go:130] >   hugepages-2Mi:      0
	I0318 13:11:05.030218    2404 command_runner.go:130] >   memory:             2164268Ki
	I0318 13:11:05.030218    2404 command_runner.go:130] >   pods:               110
	I0318 13:11:05.030346    2404 command_runner.go:130] > Allocatable:
	I0318 13:11:05.030346    2404 command_runner.go:130] >   cpu:                2
	I0318 13:11:05.030346    2404 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0318 13:11:05.030346    2404 command_runner.go:130] >   hugepages-2Mi:      0
	I0318 13:11:05.030346    2404 command_runner.go:130] >   memory:             2164268Ki
	I0318 13:11:05.030396    2404 command_runner.go:130] >   pods:               110
	I0318 13:11:05.030396    2404 command_runner.go:130] > System Info:
	I0318 13:11:05.030396    2404 command_runner.go:130] >   Machine ID:                 209753fe156d43e08ee40e815598ed17
	I0318 13:11:05.030396    2404 command_runner.go:130] >   System UUID:                fa19d46a-a3a2-9249-8c21-1edbfcedff01
	I0318 13:11:05.030396    2404 command_runner.go:130] >   Boot ID:                    0e15b7cf-29d6-40f7-ad78-fb04b10bea99
	I0318 13:11:05.030396    2404 command_runner.go:130] >   Kernel Version:             5.10.207
	I0318 13:11:05.030396    2404 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0318 13:11:05.030396    2404 command_runner.go:130] >   Operating System:           linux
	I0318 13:11:05.030396    2404 command_runner.go:130] >   Architecture:               amd64
	I0318 13:11:05.030396    2404 command_runner.go:130] >   Container Runtime Version:  docker://25.0.4
	I0318 13:11:05.030396    2404 command_runner.go:130] >   Kubelet Version:            v1.28.4
	I0318 13:11:05.030495    2404 command_runner.go:130] >   Kube-Proxy Version:         v1.28.4
	I0318 13:11:05.030495    2404 command_runner.go:130] > PodCIDR:                      10.244.1.0/24
	I0318 13:11:05.030495    2404 command_runner.go:130] > PodCIDRs:                     10.244.1.0/24
	I0318 13:11:05.030495    2404 command_runner.go:130] > Non-terminated Pods:          (3 in total)
	I0318 13:11:05.030495    2404 command_runner.go:130] >   Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0318 13:11:05.030495    2404 command_runner.go:130] >   ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	I0318 13:11:05.030495    2404 command_runner.go:130] >   default                     busybox-5b5d89c9d6-8btgf    0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	I0318 13:11:05.030495    2404 command_runner.go:130] >   kube-system                 kindnet-k5lpg               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      20m
	I0318 13:11:05.030591    2404 command_runner.go:130] >   kube-system                 kube-proxy-8bdmn            0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	I0318 13:11:05.030591    2404 command_runner.go:130] > Allocated resources:
	I0318 13:11:05.030591    2404 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0318 13:11:05.030591    2404 command_runner.go:130] >   Resource           Requests   Limits
	I0318 13:11:05.030591    2404 command_runner.go:130] >   --------           --------   ------
	I0318 13:11:05.030591    2404 command_runner.go:130] >   cpu                100m (5%)  100m (5%)
	I0318 13:11:05.030591    2404 command_runner.go:130] >   memory             50Mi (2%)  50Mi (2%)
	I0318 13:11:05.030591    2404 command_runner.go:130] >   ephemeral-storage  0 (0%)     0 (0%)
	I0318 13:11:05.030679    2404 command_runner.go:130] >   hugepages-2Mi      0 (0%)     0 (0%)
	I0318 13:11:05.030679    2404 command_runner.go:130] > Events:
	I0318 13:11:05.030679    2404 command_runner.go:130] >   Type    Reason                   Age                From             Message
	I0318 13:11:05.030679    2404 command_runner.go:130] >   ----    ------                   ----               ----             -------
	I0318 13:11:05.030679    2404 command_runner.go:130] >   Normal  Starting                 20m                kube-proxy       
	I0318 13:11:05.030765    2404 command_runner.go:130] >   Normal  NodeHasSufficientMemory  20m (x5 over 20m)  kubelet          Node multinode-894400-m02 status is now: NodeHasSufficientMemory
	I0318 13:11:05.030765    2404 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    20m (x5 over 20m)  kubelet          Node multinode-894400-m02 status is now: NodeHasNoDiskPressure
	I0318 13:11:05.030765    2404 command_runner.go:130] >   Normal  NodeHasSufficientPID     20m (x5 over 20m)  kubelet          Node multinode-894400-m02 status is now: NodeHasSufficientPID
	I0318 13:11:05.030765    2404 command_runner.go:130] >   Normal  RegisteredNode           20m                node-controller  Node multinode-894400-m02 event: Registered Node multinode-894400-m02 in Controller
	I0318 13:11:05.030765    2404 command_runner.go:130] >   Normal  NodeReady                20m                kubelet          Node multinode-894400-m02 status is now: NodeReady
	I0318 13:11:05.030838    2404 command_runner.go:130] >   Normal  RegisteredNode           63s                node-controller  Node multinode-894400-m02 event: Registered Node multinode-894400-m02 in Controller
	I0318 13:11:05.030838    2404 command_runner.go:130] >   Normal  NodeNotReady             23s                node-controller  Node multinode-894400-m02 status is now: NodeNotReady
	I0318 13:11:05.030838    2404 command_runner.go:130] > Name:               multinode-894400-m03
	I0318 13:11:05.030838    2404 command_runner.go:130] > Roles:              <none>
	I0318 13:11:05.030838    2404 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0318 13:11:05.030838    2404 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0318 13:11:05.030838    2404 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0318 13:11:05.030838    2404 command_runner.go:130] >                     kubernetes.io/hostname=multinode-894400-m03
	I0318 13:11:05.030937    2404 command_runner.go:130] >                     kubernetes.io/os=linux
	I0318 13:11:05.030937    2404 command_runner.go:130] >                     minikube.k8s.io/commit=16bdcbec856cf730004e5bed78d1b7625f13388a
	I0318 13:11:05.030937    2404 command_runner.go:130] >                     minikube.k8s.io/name=multinode-894400
	I0318 13:11:05.030937    2404 command_runner.go:130] >                     minikube.k8s.io/primary=false
	I0318 13:11:05.030937    2404 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_03_18T13_05_26_0700
	I0318 13:11:05.030937    2404 command_runner.go:130] >                     minikube.k8s.io/version=v1.32.0
	I0318 13:11:05.031048    2404 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0318 13:11:05.031048    2404 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0318 13:11:05.031048    2404 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0318 13:11:05.031048    2404 command_runner.go:130] > CreationTimestamp:  Mon, 18 Mar 2024 13:05:25 +0000
	I0318 13:11:05.031048    2404 command_runner.go:130] > Taints:             node.kubernetes.io/unreachable:NoExecute
	I0318 13:11:05.031048    2404 command_runner.go:130] >                     node.kubernetes.io/unreachable:NoSchedule
	I0318 13:11:05.031048    2404 command_runner.go:130] > Unschedulable:      false
	I0318 13:11:05.031048    2404 command_runner.go:130] > Lease:
	I0318 13:11:05.031048    2404 command_runner.go:130] >   HolderIdentity:  multinode-894400-m03
	I0318 13:11:05.031048    2404 command_runner.go:130] >   AcquireTime:     <unset>
	I0318 13:11:05.031048    2404 command_runner.go:130] >   RenewTime:       Mon, 18 Mar 2024 13:06:27 +0000
	I0318 13:11:05.031048    2404 command_runner.go:130] > Conditions:
	I0318 13:11:05.031048    2404 command_runner.go:130] >   Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	I0318 13:11:05.031048    2404 command_runner.go:130] >   ----             ------    -----------------                 ------------------                ------              -------
	I0318 13:11:05.031048    2404 command_runner.go:130] >   MemoryPressure   Unknown   Mon, 18 Mar 2024 13:05:34 +0000   Mon, 18 Mar 2024 13:07:11 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0318 13:11:05.031048    2404 command_runner.go:130] >   DiskPressure     Unknown   Mon, 18 Mar 2024 13:05:34 +0000   Mon, 18 Mar 2024 13:07:11 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0318 13:11:05.031048    2404 command_runner.go:130] >   PIDPressure      Unknown   Mon, 18 Mar 2024 13:05:34 +0000   Mon, 18 Mar 2024 13:07:11 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0318 13:11:05.031048    2404 command_runner.go:130] >   Ready            Unknown   Mon, 18 Mar 2024 13:05:34 +0000   Mon, 18 Mar 2024 13:07:11 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0318 13:11:05.031048    2404 command_runner.go:130] > Addresses:
	I0318 13:11:05.031048    2404 command_runner.go:130] >   InternalIP:  172.30.137.140
	I0318 13:11:05.031299    2404 command_runner.go:130] >   Hostname:    multinode-894400-m03
	I0318 13:11:05.031299    2404 command_runner.go:130] > Capacity:
	I0318 13:11:05.031299    2404 command_runner.go:130] >   cpu:                2
	I0318 13:11:05.031299    2404 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0318 13:11:05.031299    2404 command_runner.go:130] >   hugepages-2Mi:      0
	I0318 13:11:05.031299    2404 command_runner.go:130] >   memory:             2164268Ki
	I0318 13:11:05.031299    2404 command_runner.go:130] >   pods:               110
	I0318 13:11:05.031299    2404 command_runner.go:130] > Allocatable:
	I0318 13:11:05.031299    2404 command_runner.go:130] >   cpu:                2
	I0318 13:11:05.031299    2404 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0318 13:11:05.031299    2404 command_runner.go:130] >   hugepages-2Mi:      0
	I0318 13:11:05.031299    2404 command_runner.go:130] >   memory:             2164268Ki
	I0318 13:11:05.031410    2404 command_runner.go:130] >   pods:               110
	I0318 13:11:05.031410    2404 command_runner.go:130] > System Info:
	I0318 13:11:05.031410    2404 command_runner.go:130] >   Machine ID:                 f96e7421441b46c0a5836e2d53b26708
	I0318 13:11:05.031410    2404 command_runner.go:130] >   System UUID:                7dae14c5-92ae-d842-8ce6-c446c0352eb2
	I0318 13:11:05.031410    2404 command_runner.go:130] >   Boot ID:                    7ef4b157-1893-48d2-9b87-d5f210c11477
	I0318 13:11:05.031410    2404 command_runner.go:130] >   Kernel Version:             5.10.207
	I0318 13:11:05.031410    2404 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0318 13:11:05.031495    2404 command_runner.go:130] >   Operating System:           linux
	I0318 13:11:05.031495    2404 command_runner.go:130] >   Architecture:               amd64
	I0318 13:11:05.031495    2404 command_runner.go:130] >   Container Runtime Version:  docker://25.0.4
	I0318 13:11:05.031495    2404 command_runner.go:130] >   Kubelet Version:            v1.28.4
	I0318 13:11:05.031495    2404 command_runner.go:130] >   Kube-Proxy Version:         v1.28.4
	I0318 13:11:05.031495    2404 command_runner.go:130] > PodCIDR:                      10.244.3.0/24
	I0318 13:11:05.031495    2404 command_runner.go:130] > PodCIDRs:                     10.244.3.0/24
	I0318 13:11:05.031495    2404 command_runner.go:130] > Non-terminated Pods:          (2 in total)
	I0318 13:11:05.031495    2404 command_runner.go:130] >   Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0318 13:11:05.031605    2404 command_runner.go:130] >   ---------                   ----                ------------  ----------  ---------------  -------------  ---
	I0318 13:11:05.031605    2404 command_runner.go:130] >   kube-system                 kindnet-zv9tv       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      16m
	I0318 13:11:05.031605    2404 command_runner.go:130] >   kube-system                 kube-proxy-745w9    0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	I0318 13:11:05.031605    2404 command_runner.go:130] > Allocated resources:
	I0318 13:11:05.031605    2404 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0318 13:11:05.031605    2404 command_runner.go:130] >   Resource           Requests   Limits
	I0318 13:11:05.031605    2404 command_runner.go:130] >   --------           --------   ------
	I0318 13:11:05.031605    2404 command_runner.go:130] >   cpu                100m (5%)  100m (5%)
	I0318 13:11:05.031605    2404 command_runner.go:130] >   memory             50Mi (2%)  50Mi (2%)
	I0318 13:11:05.031605    2404 command_runner.go:130] >   ephemeral-storage  0 (0%)     0 (0%)
	I0318 13:11:05.031739    2404 command_runner.go:130] >   hugepages-2Mi      0 (0%)     0 (0%)
	I0318 13:11:05.031739    2404 command_runner.go:130] > Events:
	I0318 13:11:05.031739    2404 command_runner.go:130] >   Type    Reason                   Age                    From             Message
	I0318 13:11:05.031739    2404 command_runner.go:130] >   ----    ------                   ----                   ----             -------
	I0318 13:11:05.031739    2404 command_runner.go:130] >   Normal  Starting                 15m                    kube-proxy       
	I0318 13:11:05.031739    2404 command_runner.go:130] >   Normal  Starting                 5m36s                  kube-proxy       
	I0318 13:11:05.031856    2404 command_runner.go:130] >   Normal  NodeHasSufficientMemory  16m (x5 over 16m)      kubelet          Node multinode-894400-m03 status is now: NodeHasSufficientMemory
	I0318 13:11:05.031856    2404 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    16m (x5 over 16m)      kubelet          Node multinode-894400-m03 status is now: NodeHasNoDiskPressure
	I0318 13:11:05.031856    2404 command_runner.go:130] >   Normal  NodeHasSufficientPID     16m (x5 over 16m)      kubelet          Node multinode-894400-m03 status is now: NodeHasSufficientPID
	I0318 13:11:05.031856    2404 command_runner.go:130] >   Normal  NodeReady                15m                    kubelet          Node multinode-894400-m03 status is now: NodeReady
	I0318 13:11:05.031856    2404 command_runner.go:130] >   Normal  Starting                 5m40s                  kubelet          Starting kubelet.
	I0318 13:11:05.031856    2404 command_runner.go:130] >   Normal  NodeHasSufficientMemory  5m40s (x2 over 5m40s)  kubelet          Node multinode-894400-m03 status is now: NodeHasSufficientMemory
	I0318 13:11:05.031856    2404 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    5m40s (x2 over 5m40s)  kubelet          Node multinode-894400-m03 status is now: NodeHasNoDiskPressure
	I0318 13:11:05.031958    2404 command_runner.go:130] >   Normal  NodeHasSufficientPID     5m40s (x2 over 5m40s)  kubelet          Node multinode-894400-m03 status is now: NodeHasSufficientPID
	I0318 13:11:05.031958    2404 command_runner.go:130] >   Normal  NodeAllocatableEnforced  5m40s                  kubelet          Updated Node Allocatable limit across pods
	I0318 13:11:05.031958    2404 command_runner.go:130] >   Normal  RegisteredNode           5m39s                  node-controller  Node multinode-894400-m03 event: Registered Node multinode-894400-m03 in Controller
	I0318 13:11:05.031958    2404 command_runner.go:130] >   Normal  NodeReady                5m31s                  kubelet          Node multinode-894400-m03 status is now: NodeReady
	I0318 13:11:05.031958    2404 command_runner.go:130] >   Normal  NodeNotReady             3m54s                  node-controller  Node multinode-894400-m03 status is now: NodeNotReady
	I0318 13:11:05.031958    2404 command_runner.go:130] >   Normal  RegisteredNode           64s                    node-controller  Node multinode-894400-m03 event: Registered Node multinode-894400-m03 in Controller
	I0318 13:11:05.041887    2404 logs.go:123] Gathering logs for etcd [5f0887d1e691] ...
	I0318 13:11:05.042817    2404 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f0887d1e691"
	I0318 13:11:05.069165    2404 command_runner.go:130] ! {"level":"warn","ts":"2024-03-18T13:09:44.778754Z","caller":"embed/config.go:673","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I0318 13:11:05.069606    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:44.779618Z","caller":"etcdmain/etcd.go:73","msg":"Running: ","args":["etcd","--advertise-client-urls=https://172.30.130.156:2379","--cert-file=/var/lib/minikube/certs/etcd/server.crt","--client-cert-auth=true","--data-dir=/var/lib/minikube/etcd","--experimental-initial-corrupt-check=true","--experimental-watch-progress-notify-interval=5s","--initial-advertise-peer-urls=https://172.30.130.156:2380","--initial-cluster=multinode-894400=https://172.30.130.156:2380","--key-file=/var/lib/minikube/certs/etcd/server.key","--listen-client-urls=https://127.0.0.1:2379,https://172.30.130.156:2379","--listen-metrics-urls=http://127.0.0.1:2381","--listen-peer-urls=https://172.30.130.156:2380","--name=multinode-894400","--peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt","--peer-client-cert-auth=true","--peer-key-file=/var/lib/minikube/certs/etcd/peer.key","--peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--proxy-refresh-interval=70000","--snapshot-count=10000","--trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt"]}
	I0318 13:11:05.074954    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:44.780287Z","caller":"etcdmain/etcd.go:116","msg":"server has been already initialized","data-dir":"/var/lib/minikube/etcd","dir-type":"member"}
	I0318 13:11:05.074988    2404 command_runner.go:130] ! {"level":"warn","ts":"2024-03-18T13:09:44.780316Z","caller":"embed/config.go:673","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I0318 13:11:05.074988    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:44.780326Z","caller":"embed/etcd.go:127","msg":"configuring peer listeners","listen-peer-urls":["https://172.30.130.156:2380"]}
	I0318 13:11:05.075133    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:44.780518Z","caller":"embed/etcd.go:495","msg":"starting with peer TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I0318 13:11:05.075133    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:44.782775Z","caller":"embed/etcd.go:135","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://172.30.130.156:2379"]}
	I0318 13:11:05.075133    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:44.785511Z","caller":"embed/etcd.go:309","msg":"starting an etcd server","etcd-version":"3.5.9","git-sha":"bdbbde998","go-version":"go1.19.9","go-os":"linux","go-arch":"amd64","max-cpu-set":2,"max-cpu-available":2,"member-initialized":true,"name":"multinode-894400","data-dir":"/var/lib/minikube/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/minikube/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://172.30.130.156:2380"],"listen-peer-urls":["https://172.30.130.156:2380"],"advertise-client-urls":["https://172.30.130.156:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.30.130.156:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"cors":["*"],"host-whitelist":["*"],"initial-cluster":"","initial-cluster-state":"new","initial-cluster-token":"","quota-backend-bytes":2147483648,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"initial-corrupt-check":true,"corrupt-check-time-interval":"0s","compact-check-time-enabled":false,"compact-check-time-interval":"1m0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","downgrade-check-interval":"5s"}
	I0318 13:11:05.075133    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:44.809621Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"22.951578ms"}
	I0318 13:11:05.075133    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:44.849189Z","caller":"etcdserver/server.go:530","msg":"No snapshot found. Recovering WAL from scratch!"}
	I0318 13:11:05.075133    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:44.872854Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"2db881e830cc2153","local-member-id":"c2557cd98fa8d31a","commit-index":1981}
	I0318 13:11:05.075133    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:44.87358Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c2557cd98fa8d31a switched to configuration voters=()"}
	I0318 13:11:05.075133    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:44.873736Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c2557cd98fa8d31a became follower at term 2"}
	I0318 13:11:05.075133    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:44.873929Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft c2557cd98fa8d31a [peers: [], term: 2, commit: 1981, applied: 0, lastindex: 1981, lastterm: 2]"}
	I0318 13:11:05.075133    2404 command_runner.go:130] ! {"level":"warn","ts":"2024-03-18T13:09:44.887865Z","caller":"auth/store.go:1238","msg":"simple token is not cryptographically signed"}
	I0318 13:11:05.075133    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:44.892732Z","caller":"mvcc/kvstore.go:323","msg":"restored last compact revision","meta-bucket-name":"meta","meta-bucket-name-key":"finishedCompactRev","restored-compact-revision":1376}
	I0318 13:11:05.075133    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:44.89955Z","caller":"mvcc/kvstore.go:393","msg":"kvstore restored","current-rev":1715}
	I0318 13:11:05.075133    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:44.914592Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	I0318 13:11:05.075133    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:44.926835Z","caller":"etcdserver/corrupt.go:95","msg":"starting initial corruption check","local-member-id":"c2557cd98fa8d31a","timeout":"7s"}
	I0318 13:11:05.075133    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:44.928545Z","caller":"etcdserver/corrupt.go:165","msg":"initial corruption checking passed; no corruption","local-member-id":"c2557cd98fa8d31a"}
	I0318 13:11:05.075133    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:44.930225Z","caller":"etcdserver/server.go:854","msg":"starting etcd server","local-member-id":"c2557cd98fa8d31a","local-server-version":"3.5.9","cluster-version":"to_be_decided"}
	I0318 13:11:05.075133    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:44.930859Z","caller":"etcdserver/server.go:754","msg":"starting initial election tick advance","election-ticks":10}
	I0318 13:11:05.075133    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:44.931762Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c2557cd98fa8d31a switched to configuration voters=(14003235890238378778)"}
	I0318 13:11:05.075133    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:44.932128Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"2db881e830cc2153","local-member-id":"c2557cd98fa8d31a","added-peer-id":"c2557cd98fa8d31a","added-peer-peer-urls":["https://172.30.129.141:2380"]}
	I0318 13:11:05.075728    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:44.933388Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"2db881e830cc2153","local-member-id":"c2557cd98fa8d31a","cluster-version":"3.5"}
	I0318 13:11:05.075728    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:44.933717Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	I0318 13:11:05.075804    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:44.946226Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	I0318 13:11:05.075804    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:44.947818Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	I0318 13:11:05.075907    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:44.948803Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	I0318 13:11:05.075947    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:44.954567Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I0318 13:11:05.076032    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:44.954988Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"c2557cd98fa8d31a","initial-advertise-peer-urls":["https://172.30.130.156:2380"],"listen-peer-urls":["https://172.30.130.156:2380"],"advertise-client-urls":["https://172.30.130.156:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.30.130.156:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	I0318 13:11:05.076032    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:44.955173Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	I0318 13:11:05.076032    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:44.954599Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"172.30.130.156:2380"}
	I0318 13:11:05.076032    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:44.956126Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"172.30.130.156:2380"}
	I0318 13:11:05.076032    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:46.775466Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c2557cd98fa8d31a is starting a new election at term 2"}
	I0318 13:11:05.076032    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:46.775581Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c2557cd98fa8d31a became pre-candidate at term 2"}
	I0318 13:11:05.076032    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:46.775704Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c2557cd98fa8d31a received MsgPreVoteResp from c2557cd98fa8d31a at term 2"}
	I0318 13:11:05.076032    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:46.775731Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c2557cd98fa8d31a became candidate at term 3"}
	I0318 13:11:05.076032    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:46.77574Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c2557cd98fa8d31a received MsgVoteResp from c2557cd98fa8d31a at term 3"}
	I0318 13:11:05.076032    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:46.775752Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c2557cd98fa8d31a became leader at term 3"}
	I0318 13:11:05.076032    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:46.775764Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: c2557cd98fa8d31a elected leader c2557cd98fa8d31a at term 3"}
	I0318 13:11:05.076032    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:46.782683Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"c2557cd98fa8d31a","local-member-attributes":"{Name:multinode-894400 ClientURLs:[https://172.30.130.156:2379]}","request-path":"/0/members/c2557cd98fa8d31a/attributes","cluster-id":"2db881e830cc2153","publish-timeout":"7s"}
	I0318 13:11:05.076032    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:46.78269Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	I0318 13:11:05.076032    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:46.782706Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	I0318 13:11:05.076032    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:46.783976Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.30.130.156:2379"}
	I0318 13:11:05.076032    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:46.783993Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	I0318 13:11:05.076032    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:46.788664Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	I0318 13:11:05.076032    2404 command_runner.go:130] ! {"level":"info","ts":"2024-03-18T13:09:46.788817Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	I0318 13:11:05.081940    2404 logs.go:123] Gathering logs for kube-scheduler [e4d42739ce0e] ...
	I0318 13:11:05.081940    2404 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4d42739ce0e"
	I0318 13:11:05.108082    2404 command_runner.go:130] ! I0318 12:47:23.427784       1 serving.go:348] Generated self-signed cert in-memory
	I0318 13:11:05.108386    2404 command_runner.go:130] ! W0318 12:47:24.381993       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	I0318 13:11:05.108386    2404 command_runner.go:130] ! W0318 12:47:24.382186       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	I0318 13:11:05.108699    2404 command_runner.go:130] ! W0318 12:47:24.382237       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	I0318 13:11:05.109472    2404 command_runner.go:130] ! W0318 12:47:24.382251       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0318 13:11:05.109653    2404 command_runner.go:130] ! I0318 12:47:24.461225       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.4"
	I0318 13:11:05.109721    2404 command_runner.go:130] ! I0318 12:47:24.461511       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 13:11:05.110472    2404 command_runner.go:130] ! I0318 12:47:24.465946       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0318 13:11:05.110621    2404 command_runner.go:130] ! I0318 12:47:24.466246       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0318 13:11:05.110621    2404 command_runner.go:130] ! I0318 12:47:24.466280       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0318 13:11:05.110655    2404 command_runner.go:130] ! I0318 12:47:24.473793       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0318 13:11:05.110655    2404 command_runner.go:130] ! W0318 12:47:24.487135       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0318 13:11:05.110741    2404 command_runner.go:130] ! E0318 12:47:24.487240       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0318 13:11:05.110741    2404 command_runner.go:130] ! W0318 12:47:24.519325       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0318 13:11:05.110799    2404 command_runner.go:130] ! E0318 12:47:24.519853       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0318 13:11:05.110799    2404 command_runner.go:130] ! W0318 12:47:24.520361       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0318 13:11:05.110859    2404 command_runner.go:130] ! E0318 12:47:24.520484       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0318 13:11:05.110859    2404 command_runner.go:130] ! W0318 12:47:24.520711       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0318 13:11:05.110924    2404 command_runner.go:130] ! E0318 12:47:24.522735       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0318 13:11:05.110988    2404 command_runner.go:130] ! W0318 12:47:24.523312       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0318 13:11:05.110988    2404 command_runner.go:130] ! E0318 12:47:24.523462       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0318 13:11:05.111044    2404 command_runner.go:130] ! W0318 12:47:24.523710       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0318 13:11:05.111044    2404 command_runner.go:130] ! E0318 12:47:24.523900       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0318 13:11:05.111044    2404 command_runner.go:130] ! W0318 12:47:24.524226       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0318 13:11:05.111117    2404 command_runner.go:130] ! E0318 12:47:24.524422       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0318 13:11:05.111117    2404 command_runner.go:130] ! W0318 12:47:24.524710       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0318 13:11:05.111172    2404 command_runner.go:130] ! E0318 12:47:24.525125       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0318 13:11:05.111235    2404 command_runner.go:130] ! W0318 12:47:24.525523       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0318 13:11:05.111235    2404 command_runner.go:130] ! E0318 12:47:24.525746       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0318 13:11:05.111289    2404 command_runner.go:130] ! W0318 12:47:24.526240       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0318 13:11:05.111289    2404 command_runner.go:130] ! E0318 12:47:24.526443       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0318 13:11:05.111377    2404 command_runner.go:130] ! W0318 12:47:24.526703       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0318 13:11:05.111377    2404 command_runner.go:130] ! E0318 12:47:24.526852       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0318 13:11:05.111432    2404 command_runner.go:130] ! W0318 12:47:24.527382       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0318 13:11:05.111535    2404 command_runner.go:130] ! E0318 12:47:24.527873       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0318 13:11:05.111535    2404 command_runner.go:130] ! W0318 12:47:24.528117       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0318 13:11:05.111606    2404 command_runner.go:130] ! E0318 12:47:24.528748       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0318 13:11:05.111606    2404 command_runner.go:130] ! W0318 12:47:24.529179       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0318 13:11:05.111668    2404 command_runner.go:130] ! E0318 12:47:24.529832       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0318 13:11:05.111668    2404 command_runner.go:130] ! W0318 12:47:24.530406       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0318 13:11:05.111739    2404 command_runner.go:130] ! E0318 12:47:24.532696       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0318 13:11:05.111739    2404 command_runner.go:130] ! W0318 12:47:25.371082       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0318 13:11:05.111739    2404 command_runner.go:130] ! E0318 12:47:25.371130       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0318 13:11:05.111801    2404 command_runner.go:130] ! W0318 12:47:25.374605       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0318 13:11:05.111855    2404 command_runner.go:130] ! E0318 12:47:25.374678       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0318 13:11:05.111855    2404 command_runner.go:130] ! W0318 12:47:25.400777       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0318 13:11:05.111916    2404 command_runner.go:130] ! E0318 12:47:25.400820       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0318 13:11:05.111916    2404 command_runner.go:130] ! W0318 12:47:25.434442       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0318 13:11:05.111977    2404 command_runner.go:130] ! E0318 12:47:25.434526       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0318 13:11:05.111977    2404 command_runner.go:130] ! W0318 12:47:25.456878       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0318 13:11:05.111977    2404 command_runner.go:130] ! E0318 12:47:25.457121       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0318 13:11:05.112053    2404 command_runner.go:130] ! W0318 12:47:25.744652       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0318 13:11:05.112107    2404 command_runner.go:130] ! E0318 12:47:25.744733       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0318 13:11:05.112107    2404 command_runner.go:130] ! W0318 12:47:25.777073       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0318 13:11:05.112185    2404 command_runner.go:130] ! E0318 12:47:25.777145       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0318 13:11:05.112185    2404 command_runner.go:130] ! W0318 12:47:25.850949       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0318 13:11:05.112241    2404 command_runner.go:130] ! E0318 12:47:25.850985       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0318 13:11:05.112241    2404 command_runner.go:130] ! W0318 12:47:25.876908       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0318 13:11:05.112300    2404 command_runner.go:130] ! E0318 12:47:25.877170       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0318 13:11:05.112300    2404 command_runner.go:130] ! W0318 12:47:25.892072       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0318 13:11:05.112372    2404 command_runner.go:130] ! E0318 12:47:25.892099       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0318 13:11:05.112432    2404 command_runner.go:130] ! W0318 12:47:25.988864       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0318 13:11:05.112503    2404 command_runner.go:130] ! E0318 12:47:25.988912       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0318 13:11:05.112503    2404 command_runner.go:130] ! W0318 12:47:26.044749       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0318 13:11:05.112563    2404 command_runner.go:130] ! E0318 12:47:26.044834       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0318 13:11:05.112563    2404 command_runner.go:130] ! W0318 12:47:26.067659       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0318 13:11:05.112625    2404 command_runner.go:130] ! E0318 12:47:26.068250       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0318 13:11:05.112625    2404 command_runner.go:130] ! I0318 12:47:28.178584       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0318 13:11:05.112625    2404 command_runner.go:130] ! I0318 13:07:24.107367       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0318 13:11:05.112686    2404 command_runner.go:130] ! I0318 13:07:24.107975       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0318 13:11:05.112686    2404 command_runner.go:130] ! E0318 13:07:24.108193       1 run.go:74] "command failed" err="finished without leader elect"
	I0318 13:11:05.123795    2404 logs.go:123] Gathering logs for kube-controller-manager [7aa5cf4ec378] ...
	I0318 13:11:05.123795    2404 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7aa5cf4ec378"
	I0318 13:11:05.151527    2404 command_runner.go:130] ! I0318 12:47:22.447675       1 serving.go:348] Generated self-signed cert in-memory
	I0318 13:11:05.152291    2404 command_runner.go:130] ! I0318 12:47:22.964394       1 controllermanager.go:189] "Starting" version="v1.28.4"
	I0318 13:11:05.152291    2404 command_runner.go:130] ! I0318 12:47:22.964509       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 13:11:05.152291    2404 command_runner.go:130] ! I0318 12:47:22.966671       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0318 13:11:05.152291    2404 command_runner.go:130] ! I0318 12:47:22.967091       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0318 13:11:05.152427    2404 command_runner.go:130] ! I0318 12:47:22.968348       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0318 13:11:05.152427    2404 command_runner.go:130] ! I0318 12:47:22.969286       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0318 13:11:05.152427    2404 command_runner.go:130] ! I0318 12:47:27.391471       1 shared_informer.go:311] Waiting for caches to sync for tokens
	I0318 13:11:05.152472    2404 command_runner.go:130] ! I0318 12:47:27.423488       1 controllermanager.go:642] "Started controller" controller="garbage-collector-controller"
	I0318 13:11:05.152472    2404 command_runner.go:130] ! I0318 12:47:27.424256       1 garbagecollector.go:155] "Starting controller" controller="garbagecollector"
	I0318 13:11:05.152502    2404 command_runner.go:130] ! I0318 12:47:27.424289       1 shared_informer.go:311] Waiting for caches to sync for garbage collector
	I0318 13:11:05.152502    2404 command_runner.go:130] ! I0318 12:47:27.424374       1 graph_builder.go:294] "Running" component="GraphBuilder"
	I0318 13:11:05.152502    2404 command_runner.go:130] ! I0318 12:47:27.451725       1 controllermanager.go:642] "Started controller" controller="ephemeral-volume-controller"
	I0318 13:11:05.152502    2404 command_runner.go:130] ! I0318 12:47:27.451967       1 controller.go:169] "Starting ephemeral volume controller"
	I0318 13:11:05.152502    2404 command_runner.go:130] ! I0318 12:47:27.452425       1 shared_informer.go:311] Waiting for caches to sync for ephemeral
	I0318 13:11:05.152502    2404 command_runner.go:130] ! I0318 12:47:27.464873       1 controllermanager.go:642] "Started controller" controller="serviceaccount-controller"
	I0318 13:11:05.152502    2404 command_runner.go:130] ! I0318 12:47:27.465150       1 serviceaccounts_controller.go:111] "Starting service account controller"
	I0318 13:11:05.152502    2404 command_runner.go:130] ! I0318 12:47:27.465172       1 shared_informer.go:311] Waiting for caches to sync for service account
	I0318 13:11:05.152502    2404 command_runner.go:130] ! I0318 12:47:27.491949       1 shared_informer.go:318] Caches are synced for tokens
	I0318 13:11:05.152502    2404 command_runner.go:130] ! I0318 12:47:37.491900       1 range_allocator.go:111] "No Secondary Service CIDR provided. Skipping filtering out secondary service addresses"
	I0318 13:11:05.152502    2404 command_runner.go:130] ! I0318 12:47:37.492009       1 controllermanager.go:642] "Started controller" controller="node-ipam-controller"
	I0318 13:11:05.152502    2404 command_runner.go:130] ! I0318 12:47:37.492602       1 node_ipam_controller.go:162] "Starting ipam controller"
	I0318 13:11:05.152502    2404 command_runner.go:130] ! I0318 12:47:37.492659       1 shared_informer.go:311] Waiting for caches to sync for node
	I0318 13:11:05.152502    2404 command_runner.go:130] ! E0318 12:47:37.494780       1 core.go:213] "Failed to start cloud node lifecycle controller" err="no cloud provider provided"
	I0318 13:11:05.152502    2404 command_runner.go:130] ! I0318 12:47:37.494859       1 controllermanager.go:620] "Warning: skipping controller" controller="cloud-node-lifecycle-controller"
	I0318 13:11:05.152502    2404 command_runner.go:130] ! I0318 12:47:37.511992       1 controllermanager.go:642] "Started controller" controller="persistentvolume-binder-controller"
	I0318 13:11:05.152502    2404 command_runner.go:130] ! I0318 12:47:37.512162       1 pv_controller_base.go:319] "Starting persistent volume controller"
	I0318 13:11:05.152502    2404 command_runner.go:130] ! I0318 12:47:37.512576       1 shared_informer.go:311] Waiting for caches to sync for persistent volume
	I0318 13:11:05.152502    2404 command_runner.go:130] ! I0318 12:47:37.525022       1 controllermanager.go:642] "Started controller" controller="persistentvolume-protection-controller"
	I0318 13:11:05.152502    2404 command_runner.go:130] ! I0318 12:47:37.525273       1 pv_protection_controller.go:78] "Starting PV protection controller"
	I0318 13:11:05.152502    2404 command_runner.go:130] ! I0318 12:47:37.525287       1 shared_informer.go:311] Waiting for caches to sync for PV protection
	I0318 13:11:05.152502    2404 command_runner.go:130] ! I0318 12:47:37.540701       1 controllermanager.go:642] "Started controller" controller="endpointslice-controller"
	I0318 13:11:05.152502    2404 command_runner.go:130] ! I0318 12:47:37.540905       1 endpointslice_controller.go:264] "Starting endpoint slice controller"
	I0318 13:11:05.152502    2404 command_runner.go:130] ! I0318 12:47:37.540914       1 shared_informer.go:311] Waiting for caches to sync for endpoint_slice
	I0318 13:11:05.153456    2404 command_runner.go:130] ! I0318 12:47:37.562000       1 controllermanager.go:642] "Started controller" controller="replicaset-controller"
	I0318 13:11:05.153747    2404 command_runner.go:130] ! I0318 12:47:37.562256       1 replica_set.go:214] "Starting controller" name="replicaset"
	I0318 13:11:05.153747    2404 command_runner.go:130] ! I0318 12:47:37.562286       1 shared_informer.go:311] Waiting for caches to sync for ReplicaSet
	I0318 13:11:05.153956    2404 command_runner.go:130] ! I0318 12:47:37.574397       1 controllermanager.go:642] "Started controller" controller="persistentvolume-expander-controller"
	I0318 13:11:05.154090    2404 command_runner.go:130] ! I0318 12:47:37.574869       1 expand_controller.go:328] "Starting expand controller"
	I0318 13:11:05.154090    2404 command_runner.go:130] ! I0318 12:47:37.574937       1 shared_informer.go:311] Waiting for caches to sync for expand
	I0318 13:11:05.154090    2404 command_runner.go:130] ! I0318 12:47:37.587914       1 controllermanager.go:642] "Started controller" controller="clusterrole-aggregation-controller"
	I0318 13:11:05.154090    2404 command_runner.go:130] ! I0318 12:47:37.588166       1 clusterroleaggregation_controller.go:189] "Starting ClusterRoleAggregator controller"
	I0318 13:11:05.154090    2404 command_runner.go:130] ! I0318 12:47:37.588199       1 shared_informer.go:311] Waiting for caches to sync for ClusterRoleAggregator
	I0318 13:11:05.154184    2404 command_runner.go:130] ! I0318 12:47:37.609721       1 controllermanager.go:642] "Started controller" controller="daemonset-controller"
	I0318 13:11:05.154184    2404 command_runner.go:130] ! I0318 12:47:37.615354       1 daemon_controller.go:291] "Starting daemon sets controller"
	I0318 13:11:05.154184    2404 command_runner.go:130] ! I0318 12:47:37.615371       1 shared_informer.go:311] Waiting for caches to sync for daemon sets
	I0318 13:11:05.154184    2404 command_runner.go:130] ! I0318 12:47:37.624660       1 controllermanager.go:642] "Started controller" controller="ttl-controller"
	I0318 13:11:05.154184    2404 command_runner.go:130] ! I0318 12:47:37.624898       1 ttl_controller.go:124] "Starting TTL controller"
	I0318 13:11:05.154184    2404 command_runner.go:130] ! I0318 12:47:37.625063       1 shared_informer.go:311] Waiting for caches to sync for TTL
	I0318 13:11:05.154184    2404 command_runner.go:130] ! I0318 12:47:37.637461       1 controllermanager.go:642] "Started controller" controller="ttl-after-finished-controller"
	I0318 13:11:05.154343    2404 command_runner.go:130] ! I0318 12:47:37.637588       1 ttlafterfinished_controller.go:109] "Starting TTL after finished controller"
	I0318 13:11:05.154423    2404 command_runner.go:130] ! I0318 12:47:37.637699       1 shared_informer.go:311] Waiting for caches to sync for TTL after finished
	I0318 13:11:05.154523    2404 command_runner.go:130] ! I0318 12:47:37.649314       1 controllermanager.go:642] "Started controller" controller="deployment-controller"
	I0318 13:11:05.154523    2404 command_runner.go:130] ! I0318 12:47:37.650380       1 deployment_controller.go:168] "Starting controller" controller="deployment"
	I0318 13:11:05.154523    2404 command_runner.go:130] ! I0318 12:47:37.650462       1 shared_informer.go:311] Waiting for caches to sync for deployment
	I0318 13:11:05.154523    2404 command_runner.go:130] ! I0318 12:47:37.830447       1 controllermanager.go:642] "Started controller" controller="disruption-controller"
	I0318 13:11:05.154523    2404 command_runner.go:130] ! I0318 12:47:37.830565       1 disruption.go:433] "Sending events to api server."
	I0318 13:11:05.154636    2404 command_runner.go:130] ! I0318 12:47:37.830686       1 disruption.go:444] "Starting disruption controller"
	I0318 13:11:05.154636    2404 command_runner.go:130] ! I0318 12:47:37.830725       1 shared_informer.go:311] Waiting for caches to sync for disruption
	I0318 13:11:05.154636    2404 command_runner.go:130] ! I0318 12:47:37.985254       1 controllermanager.go:642] "Started controller" controller="endpoints-controller"
	I0318 13:11:05.154636    2404 command_runner.go:130] ! I0318 12:47:37.985453       1 endpoints_controller.go:174] "Starting endpoint controller"
	I0318 13:11:05.154636    2404 command_runner.go:130] ! I0318 12:47:37.985784       1 shared_informer.go:311] Waiting for caches to sync for endpoint
	I0318 13:11:05.154636    2404 command_runner.go:130] ! I0318 12:47:38.288543       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="endpoints"
	I0318 13:11:05.154636    2404 command_runner.go:130] ! I0318 12:47:38.289132       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="poddisruptionbudgets.policy"
	I0318 13:11:05.154722    2404 command_runner.go:130] ! I0318 12:47:38.289248       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="roles.rbac.authorization.k8s.io"
	I0318 13:11:05.154747    2404 command_runner.go:130] ! I0318 12:47:38.289520       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="statefulsets.apps"
	I0318 13:11:05.154747    2404 command_runner.go:130] ! I0318 12:47:38.289722       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="leases.coordination.k8s.io"
	I0318 13:11:05.154747    2404 command_runner.go:130] ! I0318 12:47:38.289927       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="limitranges"
	I0318 13:11:05.154857    2404 command_runner.go:130] ! I0318 12:47:38.290240       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="serviceaccounts"
	I0318 13:11:05.154882    2404 command_runner.go:130] ! I0318 12:47:38.290340       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="daemonsets.apps"
	I0318 13:11:05.154934    2404 command_runner.go:130] ! I0318 12:47:38.290418       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="jobs.batch"
	I0318 13:11:05.154934    2404 command_runner.go:130] ! I0318 12:47:38.290502       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="ingresses.networking.k8s.io"
	I0318 13:11:05.154967    2404 command_runner.go:130] ! I0318 12:47:38.290550       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="rolebindings.rbac.authorization.k8s.io"
	I0318 13:11:05.154967    2404 command_runner.go:130] ! I0318 12:47:38.290591       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="endpointslices.discovery.k8s.io"
	I0318 13:11:05.155005    2404 command_runner.go:130] ! I0318 12:47:38.290851       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="podtemplates"
	I0318 13:11:05.155045    2404 command_runner.go:130] ! I0318 12:47:38.291026       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="controllerrevisions.apps"
	I0318 13:11:05.155045    2404 command_runner.go:130] ! I0318 12:47:38.291117       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="replicasets.apps"
	I0318 13:11:05.155073    2404 command_runner.go:130] ! I0318 12:47:38.291149       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="cronjobs.batch"
	I0318 13:11:05.155073    2404 command_runner.go:130] ! I0318 12:47:38.291277       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="horizontalpodautoscalers.autoscaling"
	I0318 13:11:05.155073    2404 command_runner.go:130] ! I0318 12:47:38.291315       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="deployments.apps"
	I0318 13:11:05.155073    2404 command_runner.go:130] ! I0318 12:47:38.291392       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="csistoragecapacities.storage.k8s.io"
	I0318 13:11:05.155073    2404 command_runner.go:130] ! I0318 12:47:38.291423       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="networkpolicies.networking.k8s.io"
	I0318 13:11:05.155073    2404 command_runner.go:130] ! I0318 12:47:38.291465       1 controllermanager.go:642] "Started controller" controller="resourcequota-controller"
	I0318 13:11:05.155073    2404 command_runner.go:130] ! I0318 12:47:38.291591       1 resource_quota_controller.go:294] "Starting resource quota controller"
	I0318 13:11:05.155073    2404 command_runner.go:130] ! I0318 12:47:38.291607       1 shared_informer.go:311] Waiting for caches to sync for resource quota
	I0318 13:11:05.155073    2404 command_runner.go:130] ! I0318 12:47:38.291720       1 resource_quota_monitor.go:305] "QuotaMonitor running"
	I0318 13:11:05.155073    2404 command_runner.go:130] ! I0318 12:47:38.436018       1 controllermanager.go:642] "Started controller" controller="job-controller"
	I0318 13:11:05.155073    2404 command_runner.go:130] ! I0318 12:47:38.436093       1 job_controller.go:226] "Starting job controller"
	I0318 13:11:05.155073    2404 command_runner.go:130] ! I0318 12:47:38.436112       1 shared_informer.go:311] Waiting for caches to sync for job
	I0318 13:11:05.155073    2404 command_runner.go:130] ! I0318 12:47:38.731490       1 controllermanager.go:642] "Started controller" controller="horizontal-pod-autoscaler-controller"
	I0318 13:11:05.155073    2404 command_runner.go:130] ! I0318 12:47:38.731606       1 horizontal.go:200] "Starting HPA controller"
	I0318 13:11:05.155073    2404 command_runner.go:130] ! I0318 12:47:38.731671       1 shared_informer.go:311] Waiting for caches to sync for HPA
	I0318 13:11:05.155073    2404 command_runner.go:130] ! I0318 12:47:38.886224       1 controllermanager.go:642] "Started controller" controller="certificatesigningrequest-approving-controller"
	I0318 13:11:05.155073    2404 command_runner.go:130] ! I0318 12:47:38.886401       1 certificate_controller.go:115] "Starting certificate controller" name="csrapproving"
	I0318 13:11:05.155073    2404 command_runner.go:130] ! I0318 12:47:38.886705       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrapproving
	I0318 13:11:05.155073    2404 command_runner.go:130] ! I0318 12:47:38.930325       1 controllermanager.go:642] "Started controller" controller="certificatesigningrequest-cleaner-controller"
	I0318 13:11:05.155073    2404 command_runner.go:130] ! I0318 12:47:38.930354       1 core.go:228] "Warning: configure-cloud-routes is set, but no cloud provider specified. Will not configure cloud provider routes."
	I0318 13:11:05.155073    2404 command_runner.go:130] ! I0318 12:47:38.930362       1 controllermanager.go:620] "Warning: skipping controller" controller="node-route-controller"
	I0318 13:11:05.155073    2404 command_runner.go:130] ! I0318 12:47:38.930398       1 cleaner.go:83] "Starting CSR cleaner controller"
	I0318 13:11:05.155073    2404 command_runner.go:130] ! I0318 12:47:39.085782       1 controllermanager.go:642] "Started controller" controller="persistentvolume-attach-detach-controller"
	I0318 13:11:05.155073    2404 command_runner.go:130] ! I0318 12:47:39.085905       1 attach_detach_controller.go:337] "Starting attach detach controller"
	I0318 13:11:05.155073    2404 command_runner.go:130] ! I0318 12:47:39.085920       1 shared_informer.go:311] Waiting for caches to sync for attach detach
	I0318 13:11:05.155616    2404 command_runner.go:130] ! I0318 12:47:39.236755       1 controllermanager.go:642] "Started controller" controller="endpointslice-mirroring-controller"
	I0318 13:11:05.155616    2404 command_runner.go:130] ! I0318 12:47:39.237434       1 endpointslicemirroring_controller.go:223] "Starting EndpointSliceMirroring controller"
	I0318 13:11:05.155616    2404 command_runner.go:130] ! I0318 12:47:39.237522       1 shared_informer.go:311] Waiting for caches to sync for endpoint_slice_mirroring
	I0318 13:11:05.155616    2404 command_runner.go:130] ! I0318 12:47:39.390953       1 controllermanager.go:642] "Started controller" controller="replicationcontroller-controller"
	I0318 13:11:05.155837    2404 command_runner.go:130] ! I0318 12:47:39.391480       1 replica_set.go:214] "Starting controller" name="replicationcontroller"
	I0318 13:11:05.155837    2404 command_runner.go:130] ! I0318 12:47:39.391646       1 shared_informer.go:311] Waiting for caches to sync for ReplicationController
	I0318 13:11:05.155889    2404 command_runner.go:130] ! I0318 12:47:39.535570       1 controllermanager.go:642] "Started controller" controller="cronjob-controller"
	I0318 13:11:05.155889    2404 command_runner.go:130] ! I0318 12:47:39.536071       1 cronjob_controllerv2.go:139] "Starting cronjob controller v2"
	I0318 13:11:05.155889    2404 command_runner.go:130] ! I0318 12:47:39.536172       1 shared_informer.go:311] Waiting for caches to sync for cronjob
	I0318 13:11:05.155889    2404 command_runner.go:130] ! I0318 12:47:39.582776       1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-kubelet-serving"
	I0318 13:11:05.155889    2404 command_runner.go:130] ! I0318 12:47:39.582876       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I0318 13:11:05.155969    2404 command_runner.go:130] ! I0318 12:47:39.582912       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0318 13:11:05.155969    2404 command_runner.go:130] ! I0318 12:47:39.584602       1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-kubelet-client"
	I0318 13:11:05.156009    2404 command_runner.go:130] ! I0318 12:47:39.584677       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I0318 13:11:05.156009    2404 command_runner.go:130] ! I0318 12:47:39.584724       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0318 13:11:05.156009    2404 command_runner.go:130] ! I0318 12:47:39.585974       1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-kube-apiserver-client"
	I0318 13:11:05.156050    2404 command_runner.go:130] ! I0318 12:47:39.585990       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I0318 13:11:05.156080    2404 command_runner.go:130] ! I0318 12:47:39.586012       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0318 13:11:05.156080    2404 command_runner.go:130] ! I0318 12:47:39.586910       1 controllermanager.go:642] "Started controller" controller="certificatesigningrequest-signing-controller"
	I0318 13:11:05.156080    2404 command_runner.go:130] ! I0318 12:47:39.586968       1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-legacy-unknown"
	I0318 13:11:05.156120    2404 command_runner.go:130] ! I0318 12:47:39.586975       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0318 13:11:05.156120    2404 command_runner.go:130] ! I0318 12:47:39.587044       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0318 13:11:05.156154    2404 command_runner.go:130] ! I0318 12:47:39.735265       1 controllermanager.go:642] "Started controller" controller="token-cleaner-controller"
	I0318 13:11:05.156186    2404 command_runner.go:130] ! I0318 12:47:39.735467       1 tokencleaner.go:112] "Starting token cleaner controller"
	I0318 13:11:05.156186    2404 command_runner.go:130] ! I0318 12:47:39.735494       1 shared_informer.go:311] Waiting for caches to sync for token_cleaner
	I0318 13:11:05.156186    2404 command_runner.go:130] ! I0318 12:47:39.735502       1 shared_informer.go:318] Caches are synced for token_cleaner
	I0318 13:11:05.156186    2404 command_runner.go:130] ! I0318 12:47:39.783594       1 node_lifecycle_controller.go:431] "Controller will reconcile labels"
	I0318 13:11:05.156186    2404 command_runner.go:130] ! I0318 12:47:39.783722       1 controllermanager.go:642] "Started controller" controller="node-lifecycle-controller"
	I0318 13:11:05.156186    2404 command_runner.go:130] ! I0318 12:47:39.783841       1 node_lifecycle_controller.go:465] "Sending events to api server"
	I0318 13:11:05.156186    2404 command_runner.go:130] ! I0318 12:47:39.783860       1 node_lifecycle_controller.go:476] "Starting node controller"
	I0318 13:11:05.156186    2404 command_runner.go:130] ! I0318 12:47:39.784031       1 shared_informer.go:311] Waiting for caches to sync for taint
	I0318 13:11:05.156186    2404 command_runner.go:130] ! E0318 12:47:39.937206       1 core.go:92] "Failed to start service controller" err="WARNING: no cloud provider provided, services of type LoadBalancer will fail"
	I0318 13:11:05.156186    2404 command_runner.go:130] ! I0318 12:47:39.937229       1 controllermanager.go:620] "Warning: skipping controller" controller="service-lb-controller"
	I0318 13:11:05.156186    2404 command_runner.go:130] ! I0318 12:47:40.089508       1 controllermanager.go:642] "Started controller" controller="persistentvolumeclaim-protection-controller"
	I0318 13:11:05.156186    2404 command_runner.go:130] ! I0318 12:47:40.089701       1 pvc_protection_controller.go:102] "Starting PVC protection controller"
	I0318 13:11:05.156186    2404 command_runner.go:130] ! I0318 12:47:40.089793       1 shared_informer.go:311] Waiting for caches to sync for PVC protection
	I0318 13:11:05.156186    2404 command_runner.go:130] ! I0318 12:47:40.235860       1 controllermanager.go:642] "Started controller" controller="root-ca-certificate-publisher-controller"
	I0318 13:11:05.156186    2404 command_runner.go:130] ! I0318 12:47:40.235977       1 publisher.go:102] "Starting root CA cert publisher controller"
	I0318 13:11:05.156186    2404 command_runner.go:130] ! I0318 12:47:40.236063       1 shared_informer.go:311] Waiting for caches to sync for crt configmap
	I0318 13:11:05.156186    2404 command_runner.go:130] ! I0318 12:47:40.386545       1 controllermanager.go:642] "Started controller" controller="pod-garbage-collector-controller"
	I0318 13:11:05.156186    2404 command_runner.go:130] ! I0318 12:47:40.386692       1 gc_controller.go:101] "Starting GC controller"
	I0318 13:11:05.156186    2404 command_runner.go:130] ! I0318 12:47:40.386704       1 shared_informer.go:311] Waiting for caches to sync for GC
	I0318 13:11:05.156186    2404 command_runner.go:130] ! I0318 12:47:40.644175       1 controllermanager.go:642] "Started controller" controller="namespace-controller"
	I0318 13:11:05.156186    2404 command_runner.go:130] ! I0318 12:47:40.644284       1 namespace_controller.go:197] "Starting namespace controller"
	I0318 13:11:05.156186    2404 command_runner.go:130] ! I0318 12:47:40.644293       1 shared_informer.go:311] Waiting for caches to sync for namespace
	I0318 13:11:05.156186    2404 command_runner.go:130] ! I0318 12:47:40.784991       1 controllermanager.go:642] "Started controller" controller="statefulset-controller"
	I0318 13:11:05.156853    2404 command_runner.go:130] ! I0318 12:47:40.785464       1 stateful_set.go:161] "Starting stateful set controller"
	I0318 13:11:05.156853    2404 command_runner.go:130] ! I0318 12:47:40.785492       1 shared_informer.go:311] Waiting for caches to sync for stateful set
	I0318 13:11:05.156853    2404 command_runner.go:130] ! I0318 12:47:40.936785       1 controllermanager.go:642] "Started controller" controller="bootstrap-signer-controller"
	I0318 13:11:05.156853    2404 command_runner.go:130] ! I0318 12:47:40.939800       1 shared_informer.go:311] Waiting for caches to sync for bootstrap_signer
	I0318 13:11:05.156853    2404 command_runner.go:130] ! I0318 12:47:40.947184       1 shared_informer.go:311] Waiting for caches to sync for resource quota
	I0318 13:11:05.156853    2404 command_runner.go:130] ! I0318 12:47:40.968017       1 shared_informer.go:318] Caches are synced for service account
	I0318 13:11:05.156853    2404 command_runner.go:130] ! I0318 12:47:40.971773       1 shared_informer.go:311] Waiting for caches to sync for garbage collector
	I0318 13:11:05.156853    2404 command_runner.go:130] ! I0318 12:47:40.976691       1 shared_informer.go:318] Caches are synced for expand
	I0318 13:11:05.156853    2404 command_runner.go:130] ! I0318 12:47:40.986014       1 shared_informer.go:318] Caches are synced for endpoint
	I0318 13:11:05.156853    2404 command_runner.go:130] ! I0318 12:47:40.995675       1 shared_informer.go:318] Caches are synced for PVC protection
	I0318 13:11:05.156853    2404 command_runner.go:130] ! I0318 12:47:41.009015       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-894400\" does not exist"
	I0318 13:11:05.156853    2404 command_runner.go:130] ! I0318 12:47:41.012612       1 shared_informer.go:318] Caches are synced for persistent volume
	I0318 13:11:05.156853    2404 command_runner.go:130] ! I0318 12:47:41.016383       1 shared_informer.go:318] Caches are synced for daemon sets
	I0318 13:11:05.156853    2404 command_runner.go:130] ! I0318 12:47:41.025198       1 shared_informer.go:318] Caches are synced for TTL
	I0318 13:11:05.156853    2404 command_runner.go:130] ! I0318 12:47:41.025462       1 shared_informer.go:318] Caches are synced for PV protection
	I0318 13:11:05.156853    2404 command_runner.go:130] ! I0318 12:47:41.032086       1 shared_informer.go:318] Caches are synced for HPA
	I0318 13:11:05.156853    2404 command_runner.go:130] ! I0318 12:47:41.036463       1 shared_informer.go:318] Caches are synced for job
	I0318 13:11:05.157395    2404 command_runner.go:130] ! I0318 12:47:41.036622       1 shared_informer.go:318] Caches are synced for cronjob
	I0318 13:11:05.157451    2404 command_runner.go:130] ! I0318 12:47:41.036726       1 shared_informer.go:318] Caches are synced for crt configmap
	I0318 13:11:05.157451    2404 command_runner.go:130] ! I0318 12:47:41.037735       1 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring
	I0318 13:11:05.157489    2404 command_runner.go:130] ! I0318 12:47:41.037818       1 shared_informer.go:318] Caches are synced for TTL after finished
	I0318 13:11:05.157489    2404 command_runner.go:130] ! I0318 12:47:41.040360       1 shared_informer.go:318] Caches are synced for bootstrap_signer
	I0318 13:11:05.157489    2404 command_runner.go:130] ! I0318 12:47:41.041850       1 shared_informer.go:318] Caches are synced for endpoint_slice
	I0318 13:11:05.157489    2404 command_runner.go:130] ! I0318 12:47:41.045379       1 shared_informer.go:318] Caches are synced for namespace
	I0318 13:11:05.157489    2404 command_runner.go:130] ! I0318 12:47:41.051530       1 shared_informer.go:318] Caches are synced for deployment
	I0318 13:11:05.157489    2404 command_runner.go:130] ! I0318 12:47:41.053151       1 shared_informer.go:318] Caches are synced for ephemeral
	I0318 13:11:05.157489    2404 command_runner.go:130] ! I0318 12:47:41.063027       1 shared_informer.go:318] Caches are synced for ReplicaSet
	I0318 13:11:05.157489    2404 command_runner.go:130] ! I0318 12:47:41.084212       1 shared_informer.go:318] Caches are synced for taint
	I0318 13:11:05.157489    2404 command_runner.go:130] ! I0318 12:47:41.084612       1 node_lifecycle_controller.go:1225] "Initializing eviction metric for zone" zone=""
	I0318 13:11:05.157489    2404 command_runner.go:130] ! I0318 12:47:41.087983       1 taint_manager.go:205] "Starting NoExecuteTaintManager"
	I0318 13:11:05.157489    2404 command_runner.go:130] ! I0318 12:47:41.088464       1 taint_manager.go:210] "Sending events to api server"
	I0318 13:11:05.157489    2404 command_runner.go:130] ! I0318 12:47:41.089485       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-894400"
	I0318 13:11:05.157489    2404 command_runner.go:130] ! I0318 12:47:41.089526       1 node_lifecycle_controller.go:1029] "Controller detected that all Nodes are not-Ready. Entering master disruption mode"
	I0318 13:11:05.157489    2404 command_runner.go:130] ! I0318 12:47:41.089552       1 shared_informer.go:318] Caches are synced for attach detach
	I0318 13:11:05.157489    2404 command_runner.go:130] ! I0318 12:47:41.089942       1 shared_informer.go:318] Caches are synced for GC
	I0318 13:11:05.157489    2404 command_runner.go:130] ! I0318 12:47:41.090031       1 event.go:307] "Event occurred" object="multinode-894400" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-894400 event: Registered Node multinode-894400 in Controller"
	I0318 13:11:05.157489    2404 command_runner.go:130] ! I0318 12:47:41.090167       1 shared_informer.go:318] Caches are synced for ClusterRoleAggregator
	I0318 13:11:05.157489    2404 command_runner.go:130] ! I0318 12:47:41.090848       1 shared_informer.go:318] Caches are synced for stateful set
	I0318 13:11:05.157489    2404 command_runner.go:130] ! I0318 12:47:41.092093       1 shared_informer.go:318] Caches are synced for ReplicationController
	I0318 13:11:05.157489    2404 command_runner.go:130] ! I0318 12:47:41.092684       1 shared_informer.go:318] Caches are synced for node
	I0318 13:11:05.157489    2404 command_runner.go:130] ! I0318 12:47:41.093255       1 range_allocator.go:174] "Sending events to api server"
	I0318 13:11:05.157489    2404 command_runner.go:130] ! I0318 12:47:41.093537       1 range_allocator.go:178] "Starting range CIDR allocator"
	I0318 13:11:05.157489    2404 command_runner.go:130] ! I0318 12:47:41.093851       1 shared_informer.go:311] Waiting for caches to sync for cidrallocator
	I0318 13:11:05.157489    2404 command_runner.go:130] ! I0318 12:47:41.093958       1 shared_informer.go:318] Caches are synced for cidrallocator
	I0318 13:11:05.157489    2404 command_runner.go:130] ! I0318 12:47:41.119414       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-894400" podCIDRs=["10.244.0.0/24"]
	I0318 13:11:05.157489    2404 command_runner.go:130] ! I0318 12:47:41.148134       1 shared_informer.go:318] Caches are synced for resource quota
	I0318 13:11:05.157489    2404 command_runner.go:130] ! I0318 12:47:41.183853       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-serving
	I0318 13:11:05.157489    2404 command_runner.go:130] ! I0318 12:47:41.184949       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-client
	I0318 13:11:05.157489    2404 command_runner.go:130] ! I0318 12:47:41.186043       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0318 13:11:05.157489    2404 command_runner.go:130] ! I0318 12:47:41.187192       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-legacy-unknown
	I0318 13:11:05.157489    2404 command_runner.go:130] ! I0318 12:47:41.187229       1 shared_informer.go:318] Caches are synced for certificate-csrapproving
	I0318 13:11:05.157489    2404 command_runner.go:130] ! I0318 12:47:41.192066       1 shared_informer.go:318] Caches are synced for resource quota
	I0318 13:11:05.157489    2404 command_runner.go:130] ! I0318 12:47:41.233781       1 shared_informer.go:318] Caches are synced for disruption
	I0318 13:11:05.158014    2404 command_runner.go:130] ! I0318 12:47:41.572914       1 shared_informer.go:318] Caches are synced for garbage collector
	I0318 13:11:05.158014    2404 command_runner.go:130] ! I0318 12:47:41.612936       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-mc5tv"
	I0318 13:11:05.158084    2404 command_runner.go:130] ! I0318 12:47:41.615780       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-hhsxh"
	I0318 13:11:05.158084    2404 command_runner.go:130] ! I0318 12:47:41.625871       1 shared_informer.go:318] Caches are synced for garbage collector
	I0318 13:11:05.158130    2404 command_runner.go:130] ! I0318 12:47:41.626335       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0318 13:11:05.158130    2404 command_runner.go:130] ! I0318 12:47:41.893141       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5dd5756b68 to 2"
	I0318 13:11:05.158130    2404 command_runner.go:130] ! I0318 12:47:42.112244       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-vl6jr"
	I0318 13:11:05.158130    2404 command_runner.go:130] ! I0318 12:47:42.148022       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-456tm"
	I0318 13:11:05.158130    2404 command_runner.go:130] ! I0318 12:47:42.181940       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="289.6659ms"
	I0318 13:11:05.158130    2404 command_runner.go:130] ! I0318 12:47:42.245823       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="63.840303ms"
	I0318 13:11:05.158130    2404 command_runner.go:130] ! I0318 12:47:42.246151       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="107.996µs"
	I0318 13:11:05.158130    2404 command_runner.go:130] ! I0318 12:47:42.470958       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I0318 13:11:05.158130    2404 command_runner.go:130] ! I0318 12:47:42.530265       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-vl6jr"
	I0318 13:11:05.158130    2404 command_runner.go:130] ! I0318 12:47:42.551794       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="82.491503ms"
	I0318 13:11:05.158130    2404 command_runner.go:130] ! I0318 12:47:42.587026       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="35.184179ms"
	I0318 13:11:05.158130    2404 command_runner.go:130] ! I0318 12:47:42.587126       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="64.497µs"
	I0318 13:11:05.158130    2404 command_runner.go:130] ! I0318 12:47:52.958102       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="163.297µs"
	I0318 13:11:05.158130    2404 command_runner.go:130] ! I0318 12:47:52.991751       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="32.399µs"
	I0318 13:11:05.158130    2404 command_runner.go:130] ! I0318 12:47:54.194916       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="59.289µs"
	I0318 13:11:05.158130    2404 command_runner.go:130] ! I0318 12:47:55.238088       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="27.595936ms"
	I0318 13:11:05.158130    2404 command_runner.go:130] ! I0318 12:47:55.238222       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="45.592µs"
	I0318 13:11:05.158130    2404 command_runner.go:130] ! I0318 12:47:56.090728       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I0318 13:11:05.158130    2404 command_runner.go:130] ! I0318 12:50:34.419485       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-894400-m02\" does not exist"
	I0318 13:11:05.158130    2404 command_runner.go:130] ! I0318 12:50:34.437576       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-894400-m02" podCIDRs=["10.244.1.0/24"]
	I0318 13:11:05.158130    2404 command_runner.go:130] ! I0318 12:50:34.454919       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-8bdmn"
	I0318 13:11:05.158130    2404 command_runner.go:130] ! I0318 12:50:34.479103       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-k5lpg"
	I0318 13:11:05.158130    2404 command_runner.go:130] ! I0318 12:50:36.121925       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-894400-m02"
	I0318 13:11:05.158650    2404 command_runner.go:130] ! I0318 12:50:36.122368       1 event.go:307] "Event occurred" object="multinode-894400-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-894400-m02 event: Registered Node multinode-894400-m02 in Controller"
	I0318 13:11:05.158650    2404 command_runner.go:130] ! I0318 12:50:52.539955       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-894400-m02"
	I0318 13:11:05.158725    2404 command_runner.go:130] ! I0318 12:51:17.964827       1 event.go:307] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-5b5d89c9d6 to 2"
	I0318 13:11:05.158725    2404 command_runner.go:130] ! I0318 12:51:17.986964       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5b5d89c9d6-8btgf"
	I0318 13:11:05.158797    2404 command_runner.go:130] ! I0318 12:51:18.004592       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5b5d89c9d6-c2997"
	I0318 13:11:05.158797    2404 command_runner.go:130] ! I0318 12:51:18.026894       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="62.79508ms"
	I0318 13:11:05.158857    2404 command_runner.go:130] ! I0318 12:51:18.045074       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="17.513513ms"
	I0318 13:11:05.158857    2404 command_runner.go:130] ! I0318 12:51:18.046404       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="36.101µs"
	I0318 13:11:05.158884    2404 command_runner.go:130] ! I0318 12:51:18.054157       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="337.914µs"
	I0318 13:11:05.158884    2404 command_runner.go:130] ! I0318 12:51:18.060516       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="26.701µs"
	I0318 13:11:05.158884    2404 command_runner.go:130] ! I0318 12:51:20.804047       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="10.125602ms"
	I0318 13:11:05.158884    2404 command_runner.go:130] ! I0318 12:51:20.804333       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="159.502µs"
	I0318 13:11:05.158884    2404 command_runner.go:130] ! I0318 12:51:21.064706       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="11.788417ms"
	I0318 13:11:05.158884    2404 command_runner.go:130] ! I0318 12:51:21.065229       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="82.401µs"
	I0318 13:11:05.158884    2404 command_runner.go:130] ! I0318 12:55:05.793350       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-894400-m03\" does not exist"
	I0318 13:11:05.158884    2404 command_runner.go:130] ! I0318 12:55:05.797095       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-894400-m02"
	I0318 13:11:05.158884    2404 command_runner.go:130] ! I0318 12:55:05.823205       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-zv9tv"
	I0318 13:11:05.158884    2404 command_runner.go:130] ! I0318 12:55:05.835101       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-894400-m03" podCIDRs=["10.244.2.0/24"]
	I0318 13:11:05.158884    2404 command_runner.go:130] ! I0318 12:55:05.835149       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-745w9"
	I0318 13:11:05.158884    2404 command_runner.go:130] ! I0318 12:55:06.188986       1 event.go:307] "Event occurred" object="multinode-894400-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-894400-m03 event: Registered Node multinode-894400-m03 in Controller"
	I0318 13:11:05.158884    2404 command_runner.go:130] ! I0318 12:55:06.188988       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-894400-m03"
	I0318 13:11:05.158884    2404 command_runner.go:130] ! I0318 12:55:23.671742       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-894400-m02"
	I0318 13:11:05.158884    2404 command_runner.go:130] ! I0318 13:02:46.325539       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-894400-m02"
	I0318 13:11:05.158884    2404 command_runner.go:130] ! I0318 13:02:46.325935       1 event.go:307] "Event occurred" object="multinode-894400-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-894400-m03 status is now: NodeNotReady"
	I0318 13:11:05.158884    2404 command_runner.go:130] ! I0318 13:02:46.344510       1 event.go:307] "Event occurred" object="kube-system/kindnet-zv9tv" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0318 13:11:05.158884    2404 command_runner.go:130] ! I0318 13:02:46.368811       1 event.go:307] "Event occurred" object="kube-system/kube-proxy-745w9" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0318 13:11:05.158884    2404 command_runner.go:130] ! I0318 13:05:19.649225       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-894400-m02"
	I0318 13:11:05.158884    2404 command_runner.go:130] ! I0318 13:05:21.403124       1 event.go:307] "Event occurred" object="multinode-894400-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RemovingNode" message="Node multinode-894400-m03 event: Removing Node multinode-894400-m03 from Controller"
	I0318 13:11:05.158884    2404 command_runner.go:130] ! I0318 13:05:25.832056       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-894400-m03\" does not exist"
	I0318 13:11:05.158884    2404 command_runner.go:130] ! I0318 13:05:25.832348       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-894400-m02"
	I0318 13:11:05.158884    2404 command_runner.go:130] ! I0318 13:05:25.841443       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-894400-m03" podCIDRs=["10.244.3.0/24"]
	I0318 13:11:05.158884    2404 command_runner.go:130] ! I0318 13:05:26.404299       1 event.go:307] "Event occurred" object="multinode-894400-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-894400-m03 event: Registered Node multinode-894400-m03 in Controller"
	I0318 13:11:05.159614    2404 command_runner.go:130] ! I0318 13:05:34.080951       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-894400-m02"
	I0318 13:11:05.159743    2404 command_runner.go:130] ! I0318 13:07:11.961036       1 event.go:307] "Event occurred" object="multinode-894400-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-894400-m03 status is now: NodeNotReady"
	I0318 13:11:05.159743    2404 command_runner.go:130] ! I0318 13:07:11.961077       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-894400-m02"
	I0318 13:11:05.159807    2404 command_runner.go:130] ! I0318 13:07:12.051526       1 event.go:307] "Event occurred" object="kube-system/kindnet-zv9tv" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0318 13:11:05.159807    2404 command_runner.go:130] ! I0318 13:07:12.098168       1 event.go:307] "Event occurred" object="kube-system/kube-proxy-745w9" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0318 13:11:05.175954    2404 logs.go:123] Gathering logs for kindnet [c4d7018ad23a] ...
	I0318 13:11:05.175954    2404 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4d7018ad23a"
	I0318 13:11:05.208668    2404 command_runner.go:130] ! I0318 12:56:20.031595       1 main.go:227] handling current node
	I0318 13:11:05.208668    2404 command_runner.go:130] ! I0318 12:56:20.031610       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:05.208668    2404 command_runner.go:130] ! I0318 12:56:20.031618       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:05.208668    2404 command_runner.go:130] ! I0318 12:56:20.031800       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:11:05.208668    2404 command_runner.go:130] ! I0318 12:56:20.031837       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:11:05.208668    2404 command_runner.go:130] ! I0318 12:56:30.038705       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:05.209637    2404 command_runner.go:130] ! I0318 12:56:30.038812       1 main.go:227] handling current node
	I0318 13:11:05.209637    2404 command_runner.go:130] ! I0318 12:56:30.038826       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:05.209637    2404 command_runner.go:130] ! I0318 12:56:30.038833       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:05.209637    2404 command_runner.go:130] ! I0318 12:56:30.039027       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:11:05.209637    2404 command_runner.go:130] ! I0318 12:56:30.039347       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:11:05.209637    2404 command_runner.go:130] ! I0318 12:56:40.051950       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:05.209637    2404 command_runner.go:130] ! I0318 12:56:40.052053       1 main.go:227] handling current node
	I0318 13:11:05.209637    2404 command_runner.go:130] ! I0318 12:56:40.052086       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:05.209637    2404 command_runner.go:130] ! I0318 12:56:40.052204       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:05.209637    2404 command_runner.go:130] ! I0318 12:56:40.052568       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:11:05.210751    2404 command_runner.go:130] ! I0318 12:56:40.052681       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:11:05.210831    2404 command_runner.go:130] ! I0318 12:56:50.074059       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:05.210831    2404 command_runner.go:130] ! I0318 12:56:50.074164       1 main.go:227] handling current node
	I0318 13:11:05.210859    2404 command_runner.go:130] ! I0318 12:56:50.074183       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:05.210885    2404 command_runner.go:130] ! I0318 12:56:50.074192       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:05.210885    2404 command_runner.go:130] ! I0318 12:56:50.075009       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:11:05.210923    2404 command_runner.go:130] ! I0318 12:56:50.075306       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:11:05.210923    2404 command_runner.go:130] ! I0318 12:57:00.089286       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:05.210973    2404 command_runner.go:130] ! I0318 12:57:00.089382       1 main.go:227] handling current node
	I0318 13:11:05.210973    2404 command_runner.go:130] ! I0318 12:57:00.089397       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:05.211011    2404 command_runner.go:130] ! I0318 12:57:00.089405       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:05.211011    2404 command_runner.go:130] ! I0318 12:57:00.089918       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:11:05.211011    2404 command_runner.go:130] ! I0318 12:57:00.089934       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:11:05.211055    2404 command_runner.go:130] ! I0318 12:57:10.103457       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:05.211055    2404 command_runner.go:130] ! I0318 12:57:10.103575       1 main.go:227] handling current node
	I0318 13:11:05.211055    2404 command_runner.go:130] ! I0318 12:57:10.103607       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:05.211108    2404 command_runner.go:130] ! I0318 12:57:10.103704       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:05.211140    2404 command_runner.go:130] ! I0318 12:57:10.104106       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:11:05.211140    2404 command_runner.go:130] ! I0318 12:57:10.104144       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:11:05.211140    2404 command_runner.go:130] ! I0318 12:57:20.111225       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:05.211176    2404 command_runner.go:130] ! I0318 12:57:20.111346       1 main.go:227] handling current node
	I0318 13:11:05.211176    2404 command_runner.go:130] ! I0318 12:57:20.111360       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:05.212568    2404 command_runner.go:130] ! I0318 12:57:20.111367       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:05.212783    2404 command_runner.go:130] ! I0318 12:57:20.111695       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:11:05.213072    2404 command_runner.go:130] ! I0318 12:57:20.111775       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:11:05.213072    2404 command_runner.go:130] ! I0318 12:57:30.124283       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:05.213072    2404 command_runner.go:130] ! I0318 12:57:30.124477       1 main.go:227] handling current node
	I0318 13:11:05.213072    2404 command_runner.go:130] ! I0318 12:57:30.124495       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:05.213072    2404 command_runner.go:130] ! I0318 12:57:30.124505       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:05.213072    2404 command_runner.go:130] ! I0318 12:57:30.125279       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:11:05.213072    2404 command_runner.go:130] ! I0318 12:57:30.125393       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:11:05.213072    2404 command_runner.go:130] ! I0318 12:57:40.137523       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:05.213072    2404 command_runner.go:130] ! I0318 12:57:40.137766       1 main.go:227] handling current node
	I0318 13:11:05.213072    2404 command_runner.go:130] ! I0318 12:57:40.137807       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:05.213666    2404 command_runner.go:130] ! I0318 12:57:40.137833       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:05.213666    2404 command_runner.go:130] ! I0318 12:57:40.137998       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:11:05.213666    2404 command_runner.go:130] ! I0318 12:57:40.138087       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:11:05.213666    2404 command_runner.go:130] ! I0318 12:57:50.149548       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:05.213666    2404 command_runner.go:130] ! I0318 12:57:50.149697       1 main.go:227] handling current node
	I0318 13:11:05.213666    2404 command_runner.go:130] ! I0318 12:57:50.149712       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:05.213666    2404 command_runner.go:130] ! I0318 12:57:50.149720       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:05.213666    2404 command_runner.go:130] ! I0318 12:57:50.150251       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:11:05.213666    2404 command_runner.go:130] ! I0318 12:57:50.150344       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:11:05.213784    2404 command_runner.go:130] ! I0318 12:58:00.159094       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:05.213784    2404 command_runner.go:130] ! I0318 12:58:00.159284       1 main.go:227] handling current node
	I0318 13:11:05.213784    2404 command_runner.go:130] ! I0318 12:58:00.159340       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:05.213784    2404 command_runner.go:130] ! I0318 12:58:00.159700       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:05.213784    2404 command_runner.go:130] ! I0318 12:58:00.160303       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:11:05.213784    2404 command_runner.go:130] ! I0318 12:58:00.160346       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:11:05.213784    2404 command_runner.go:130] ! I0318 12:58:10.177603       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:05.213784    2404 command_runner.go:130] ! I0318 12:58:10.177780       1 main.go:227] handling current node
	I0318 13:11:05.213873    2404 command_runner.go:130] ! I0318 12:58:10.178122       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:05.213873    2404 command_runner.go:130] ! I0318 12:58:10.178166       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:05.213873    2404 command_runner.go:130] ! I0318 12:58:10.178455       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:11:05.213873    2404 command_runner.go:130] ! I0318 12:58:10.178497       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:11:05.213873    2404 command_runner.go:130] ! I0318 12:58:20.196110       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:05.213873    2404 command_runner.go:130] ! I0318 12:58:20.196144       1 main.go:227] handling current node
	I0318 13:11:05.213957    2404 command_runner.go:130] ! I0318 12:58:20.196236       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:05.213957    2404 command_runner.go:130] ! I0318 12:58:20.196542       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:05.213957    2404 command_runner.go:130] ! I0318 12:58:20.196774       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:11:05.213957    2404 command_runner.go:130] ! I0318 12:58:20.196867       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:11:05.213957    2404 command_runner.go:130] ! I0318 12:58:30.204485       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:05.213957    2404 command_runner.go:130] ! I0318 12:58:30.204515       1 main.go:227] handling current node
	I0318 13:11:05.213957    2404 command_runner.go:130] ! I0318 12:58:30.204528       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:05.213957    2404 command_runner.go:130] ! I0318 12:58:30.204556       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:05.213957    2404 command_runner.go:130] ! I0318 12:58:30.204856       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:11:05.213957    2404 command_runner.go:130] ! I0318 12:58:30.205022       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:11:05.214066    2404 command_runner.go:130] ! I0318 12:58:40.221076       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:05.214066    2404 command_runner.go:130] ! I0318 12:58:40.221184       1 main.go:227] handling current node
	I0318 13:11:05.214066    2404 command_runner.go:130] ! I0318 12:58:40.221201       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:05.214066    2404 command_runner.go:130] ! I0318 12:58:40.221210       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:05.214066    2404 command_runner.go:130] ! I0318 12:58:40.221741       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:11:05.214136    2404 command_runner.go:130] ! I0318 12:58:40.221769       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:11:05.214136    2404 command_runner.go:130] ! I0318 12:58:50.229210       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:05.214136    2404 command_runner.go:130] ! I0318 12:58:50.229302       1 main.go:227] handling current node
	I0318 13:11:05.214173    2404 command_runner.go:130] ! I0318 12:58:50.229317       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:05.214198    2404 command_runner.go:130] ! I0318 12:58:50.229324       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:05.214198    2404 command_runner.go:130] ! I0318 12:58:50.229703       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:11:05.214198    2404 command_runner.go:130] ! I0318 12:58:50.229807       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:11:05.214198    2404 command_runner.go:130] ! I0318 12:59:00.244905       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:05.214198    2404 command_runner.go:130] ! I0318 12:59:00.244992       1 main.go:227] handling current node
	I0318 13:11:05.214287    2404 command_runner.go:130] ! I0318 12:59:00.245007       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:05.214287    2404 command_runner.go:130] ! I0318 12:59:00.245033       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:05.214287    2404 command_runner.go:130] ! I0318 12:59:00.245480       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:11:05.214287    2404 command_runner.go:130] ! I0318 12:59:00.245600       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:11:05.214287    2404 command_runner.go:130] ! I0318 12:59:10.253460       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:05.214287    2404 command_runner.go:130] ! I0318 12:59:10.253563       1 main.go:227] handling current node
	I0318 13:11:05.214287    2404 command_runner.go:130] ! I0318 12:59:10.253579       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:05.214363    2404 command_runner.go:130] ! I0318 12:59:10.253605       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:05.214363    2404 command_runner.go:130] ! I0318 12:59:10.254199       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:11:05.214363    2404 command_runner.go:130] ! I0318 12:59:10.254310       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:11:05.214363    2404 command_runner.go:130] ! I0318 12:59:20.270774       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:05.214415    2404 command_runner.go:130] ! I0318 12:59:20.270870       1 main.go:227] handling current node
	I0318 13:11:05.214415    2404 command_runner.go:130] ! I0318 12:59:20.270886       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:05.214415    2404 command_runner.go:130] ! I0318 12:59:20.270894       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:05.214415    2404 command_runner.go:130] ! I0318 12:59:20.271275       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:11:05.214415    2404 command_runner.go:130] ! I0318 12:59:20.271367       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:11:05.214516    2404 command_runner.go:130] ! I0318 12:59:30.281784       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:05.214516    2404 command_runner.go:130] ! I0318 12:59:30.281809       1 main.go:227] handling current node
	I0318 13:11:05.214516    2404 command_runner.go:130] ! I0318 12:59:30.281819       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:05.214516    2404 command_runner.go:130] ! I0318 12:59:30.281824       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:05.214611    2404 command_runner.go:130] ! I0318 12:59:30.282361       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:11:05.214611    2404 command_runner.go:130] ! I0318 12:59:30.282392       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:11:05.214611    2404 command_runner.go:130] ! I0318 12:59:40.291176       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:05.214611    2404 command_runner.go:130] ! I0318 12:59:40.291304       1 main.go:227] handling current node
	I0318 13:11:05.214611    2404 command_runner.go:130] ! I0318 12:59:40.291321       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:05.214611    2404 command_runner.go:130] ! I0318 12:59:40.291328       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:05.214611    2404 command_runner.go:130] ! I0318 12:59:40.291827       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:11:05.214611    2404 command_runner.go:130] ! I0318 12:59:40.291857       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:11:05.214713    2404 command_runner.go:130] ! I0318 12:59:50.303374       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:05.214713    2404 command_runner.go:130] ! I0318 12:59:50.303454       1 main.go:227] handling current node
	I0318 13:11:05.214713    2404 command_runner.go:130] ! I0318 12:59:50.303468       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:05.214713    2404 command_runner.go:130] ! I0318 12:59:50.303476       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:05.214713    2404 command_runner.go:130] ! I0318 12:59:50.303974       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:11:05.214713    2404 command_runner.go:130] ! I0318 12:59:50.304002       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:11:05.214713    2404 command_runner.go:130] ! I0318 13:00:00.311317       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:05.214713    2404 command_runner.go:130] ! I0318 13:00:00.311423       1 main.go:227] handling current node
	I0318 13:11:05.214713    2404 command_runner.go:130] ! I0318 13:00:00.311441       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:05.214897    2404 command_runner.go:130] ! I0318 13:00:00.311449       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:05.214897    2404 command_runner.go:130] ! I0318 13:00:00.312039       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:11:05.214897    2404 command_runner.go:130] ! I0318 13:00:00.312135       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:11:05.214897    2404 command_runner.go:130] ! I0318 13:00:10.324823       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:05.214897    2404 command_runner.go:130] ! I0318 13:00:10.324902       1 main.go:227] handling current node
	I0318 13:11:05.214897    2404 command_runner.go:130] ! I0318 13:00:10.324915       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:05.214897    2404 command_runner.go:130] ! I0318 13:00:10.324926       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:05.214897    2404 command_runner.go:130] ! I0318 13:00:10.325084       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:11:05.214897    2404 command_runner.go:130] ! I0318 13:00:10.325108       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:11:05.214897    2404 command_runner.go:130] ! I0318 13:00:20.338195       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:05.214897    2404 command_runner.go:130] ! I0318 13:00:20.338297       1 main.go:227] handling current node
	I0318 13:11:05.214897    2404 command_runner.go:130] ! I0318 13:00:20.338312       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:05.214897    2404 command_runner.go:130] ! I0318 13:00:20.338320       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:05.214897    2404 command_runner.go:130] ! I0318 13:00:20.338525       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:11:05.215128    2404 command_runner.go:130] ! I0318 13:00:20.338601       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:11:05.215128    2404 command_runner.go:130] ! I0318 13:00:30.345095       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:05.215229    2404 command_runner.go:130] ! I0318 13:00:30.345184       1 main.go:227] handling current node
	I0318 13:11:05.215349    2404 command_runner.go:130] ! I0318 13:00:30.345198       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:05.215349    2404 command_runner.go:130] ! I0318 13:00:30.345205       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:05.215349    2404 command_runner.go:130] ! I0318 13:00:30.346074       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:11:05.215349    2404 command_runner.go:130] ! I0318 13:00:30.346194       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:11:05.215349    2404 command_runner.go:130] ! I0318 13:00:40.357007       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:05.215349    2404 command_runner.go:130] ! I0318 13:00:40.357386       1 main.go:227] handling current node
	I0318 13:11:05.215349    2404 command_runner.go:130] ! I0318 13:00:40.357485       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:05.215349    2404 command_runner.go:130] ! I0318 13:00:40.357513       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:05.215349    2404 command_runner.go:130] ! I0318 13:00:40.357737       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:11:05.215349    2404 command_runner.go:130] ! I0318 13:00:40.357766       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:11:05.215349    2404 command_runner.go:130] ! I0318 13:00:50.372182       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:05.215349    2404 command_runner.go:130] ! I0318 13:00:50.372221       1 main.go:227] handling current node
	I0318 13:11:05.215349    2404 command_runner.go:130] ! I0318 13:00:50.372235       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:05.215349    2404 command_runner.go:130] ! I0318 13:00:50.372242       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:05.215349    2404 command_runner.go:130] ! I0318 13:00:50.372608       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:11:05.215349    2404 command_runner.go:130] ! I0318 13:00:50.372772       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:11:05.215349    2404 command_runner.go:130] ! I0318 13:01:00.386990       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:05.215602    2404 command_runner.go:130] ! I0318 13:01:00.387036       1 main.go:227] handling current node
	I0318 13:11:05.215602    2404 command_runner.go:130] ! I0318 13:01:00.387050       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:05.215602    2404 command_runner.go:130] ! I0318 13:01:00.387058       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:05.215602    2404 command_runner.go:130] ! I0318 13:01:00.387182       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:11:05.215716    2404 command_runner.go:130] ! I0318 13:01:00.387191       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:11:05.215716    2404 command_runner.go:130] ! I0318 13:01:10.396889       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:05.215716    2404 command_runner.go:130] ! I0318 13:01:10.396930       1 main.go:227] handling current node
	I0318 13:11:05.215749    2404 command_runner.go:130] ! I0318 13:01:10.396942       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:05.215781    2404 command_runner.go:130] ! I0318 13:01:10.396948       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:05.215781    2404 command_runner.go:130] ! I0318 13:01:10.397250       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:11:05.215781    2404 command_runner.go:130] ! I0318 13:01:10.397343       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:11:05.215781    2404 command_runner.go:130] ! I0318 13:01:20.413272       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:05.215781    2404 command_runner.go:130] ! I0318 13:01:20.413371       1 main.go:227] handling current node
	I0318 13:11:05.215781    2404 command_runner.go:130] ! I0318 13:01:20.413386       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:05.215848    2404 command_runner.go:130] ! I0318 13:01:20.413395       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:05.215848    2404 command_runner.go:130] ! I0318 13:01:20.413968       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:11:05.215848    2404 command_runner.go:130] ! I0318 13:01:20.413999       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:11:05.215848    2404 command_runner.go:130] ! I0318 13:01:30.429160       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:05.215848    2404 command_runner.go:130] ! I0318 13:01:30.429478       1 main.go:227] handling current node
	I0318 13:11:05.215922    2404 command_runner.go:130] ! I0318 13:01:30.429549       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:05.215922    2404 command_runner.go:130] ! I0318 13:01:30.429678       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:05.215999    2404 command_runner.go:130] ! I0318 13:01:30.429960       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:11:05.215999    2404 command_runner.go:130] ! I0318 13:01:30.430034       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:11:05.215999    2404 command_runner.go:130] ! I0318 13:01:40.436733       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:05.215999    2404 command_runner.go:130] ! I0318 13:01:40.436839       1 main.go:227] handling current node
	I0318 13:11:05.215999    2404 command_runner.go:130] ! I0318 13:01:40.436901       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:05.215999    2404 command_runner.go:130] ! I0318 13:01:40.436930       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:05.215999    2404 command_runner.go:130] ! I0318 13:01:40.437399       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:11:05.216090    2404 command_runner.go:130] ! I0318 13:01:40.437431       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:11:05.216090    2404 command_runner.go:130] ! I0318 13:01:50.451622       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:05.216121    2404 command_runner.go:130] ! I0318 13:01:50.451802       1 main.go:227] handling current node
	I0318 13:11:05.216121    2404 command_runner.go:130] ! I0318 13:01:50.451849       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:05.216148    2404 command_runner.go:130] ! I0318 13:01:50.451860       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:05.216148    2404 command_runner.go:130] ! I0318 13:01:50.452021       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:11:05.216148    2404 command_runner.go:130] ! I0318 13:01:50.452171       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:11:05.216148    2404 command_runner.go:130] ! I0318 13:02:00.460452       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:05.216148    2404 command_runner.go:130] ! I0318 13:02:00.460548       1 main.go:227] handling current node
	I0318 13:11:05.216148    2404 command_runner.go:130] ! I0318 13:02:00.460563       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:05.216148    2404 command_runner.go:130] ! I0318 13:02:00.460571       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:05.216148    2404 command_runner.go:130] ! I0318 13:02:00.461181       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:11:05.216148    2404 command_runner.go:130] ! I0318 13:02:00.461333       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:11:05.216148    2404 command_runner.go:130] ! I0318 13:02:10.474274       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:05.216148    2404 command_runner.go:130] ! I0318 13:02:10.474396       1 main.go:227] handling current node
	I0318 13:11:05.216148    2404 command_runner.go:130] ! I0318 13:02:10.474427       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:05.216148    2404 command_runner.go:130] ! I0318 13:02:10.474436       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:05.216148    2404 command_runner.go:130] ! I0318 13:02:10.475019       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:11:05.216148    2404 command_runner.go:130] ! I0318 13:02:10.475159       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:11:05.216148    2404 command_runner.go:130] ! I0318 13:02:20.489442       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:05.216148    2404 command_runner.go:130] ! I0318 13:02:20.489616       1 main.go:227] handling current node
	I0318 13:11:05.216148    2404 command_runner.go:130] ! I0318 13:02:20.489699       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:05.216148    2404 command_runner.go:130] ! I0318 13:02:20.489752       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:05.216148    2404 command_runner.go:130] ! I0318 13:02:20.490046       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:11:05.216148    2404 command_runner.go:130] ! I0318 13:02:20.490082       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:11:05.216148    2404 command_runner.go:130] ! I0318 13:02:30.497474       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:05.216148    2404 command_runner.go:130] ! I0318 13:02:30.497574       1 main.go:227] handling current node
	I0318 13:11:05.216148    2404 command_runner.go:130] ! I0318 13:02:30.497589       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:05.216148    2404 command_runner.go:130] ! I0318 13:02:30.497597       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:05.216148    2404 command_runner.go:130] ! I0318 13:02:30.498279       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:11:05.216148    2404 command_runner.go:130] ! I0318 13:02:30.498361       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:11:05.216148    2404 command_runner.go:130] ! I0318 13:02:40.512026       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:05.216148    2404 command_runner.go:130] ! I0318 13:02:40.512345       1 main.go:227] handling current node
	I0318 13:11:05.216148    2404 command_runner.go:130] ! I0318 13:02:40.512385       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:05.216148    2404 command_runner.go:130] ! I0318 13:02:40.512477       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:05.216148    2404 command_runner.go:130] ! I0318 13:02:40.512786       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:11:05.216148    2404 command_runner.go:130] ! I0318 13:02:40.512873       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:11:05.216148    2404 command_runner.go:130] ! I0318 13:02:50.520110       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:05.216148    2404 command_runner.go:130] ! I0318 13:02:50.520239       1 main.go:227] handling current node
	I0318 13:11:05.216148    2404 command_runner.go:130] ! I0318 13:02:50.520254       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:05.216148    2404 command_runner.go:130] ! I0318 13:02:50.520263       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:05.216148    2404 command_runner.go:130] ! I0318 13:02:50.520784       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:11:05.216148    2404 command_runner.go:130] ! I0318 13:02:50.520861       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:11:05.216148    2404 command_runner.go:130] ! I0318 13:03:00.531866       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:05.216148    2404 command_runner.go:130] ! I0318 13:03:00.531958       1 main.go:227] handling current node
	I0318 13:11:05.216731    2404 command_runner.go:130] ! I0318 13:03:00.531972       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:05.216731    2404 command_runner.go:130] ! I0318 13:03:00.531979       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:05.216731    2404 command_runner.go:130] ! I0318 13:03:00.532211       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:11:05.216731    2404 command_runner.go:130] ! I0318 13:03:00.532293       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:11:05.216731    2404 command_runner.go:130] ! I0318 13:03:10.543869       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:05.216731    2404 command_runner.go:130] ! I0318 13:03:10.543913       1 main.go:227] handling current node
	I0318 13:11:05.216731    2404 command_runner.go:130] ! I0318 13:03:10.543926       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:05.216731    2404 command_runner.go:130] ! I0318 13:03:10.543933       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:05.216823    2404 command_runner.go:130] ! I0318 13:03:10.544294       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:11:05.216823    2404 command_runner.go:130] ! I0318 13:03:10.544430       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:11:05.216823    2404 command_runner.go:130] ! I0318 13:03:20.558742       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:05.216823    2404 command_runner.go:130] ! I0318 13:03:20.558782       1 main.go:227] handling current node
	I0318 13:11:05.216823    2404 command_runner.go:130] ! I0318 13:03:20.558795       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:05.216878    2404 command_runner.go:130] ! I0318 13:03:20.558802       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:05.216878    2404 command_runner.go:130] ! I0318 13:03:20.558992       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:11:05.216878    2404 command_runner.go:130] ! I0318 13:03:20.559009       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:11:05.216878    2404 command_runner.go:130] ! I0318 13:03:30.568771       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:05.216878    2404 command_runner.go:130] ! I0318 13:03:30.568872       1 main.go:227] handling current node
	I0318 13:11:05.216878    2404 command_runner.go:130] ! I0318 13:03:30.568905       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:05.216878    2404 command_runner.go:130] ! I0318 13:03:30.568996       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:05.216878    2404 command_runner.go:130] ! I0318 13:03:30.569367       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:11:05.216878    2404 command_runner.go:130] ! I0318 13:03:30.569450       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:11:05.216878    2404 command_runner.go:130] ! I0318 13:03:40.587554       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:05.216878    2404 command_runner.go:130] ! I0318 13:03:40.587674       1 main.go:227] handling current node
	I0318 13:11:05.216878    2404 command_runner.go:130] ! I0318 13:03:40.588337       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:05.216878    2404 command_runner.go:130] ! I0318 13:03:40.588356       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:05.216878    2404 command_runner.go:130] ! I0318 13:03:40.588758       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:11:05.216878    2404 command_runner.go:130] ! I0318 13:03:40.588836       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:11:05.216878    2404 command_runner.go:130] ! I0318 13:03:50.596331       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:05.216878    2404 command_runner.go:130] ! I0318 13:03:50.596438       1 main.go:227] handling current node
	I0318 13:11:05.216878    2404 command_runner.go:130] ! I0318 13:03:50.596453       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:05.216878    2404 command_runner.go:130] ! I0318 13:03:50.596462       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:05.216878    2404 command_runner.go:130] ! I0318 13:03:50.596942       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:11:05.216878    2404 command_runner.go:130] ! I0318 13:03:50.597079       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:11:05.216878    2404 command_runner.go:130] ! I0318 13:04:00.611242       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:05.216878    2404 command_runner.go:130] ! I0318 13:04:00.611383       1 main.go:227] handling current node
	I0318 13:11:05.216878    2404 command_runner.go:130] ! I0318 13:04:00.611397       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:05.216878    2404 command_runner.go:130] ! I0318 13:04:00.611405       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:05.216878    2404 command_runner.go:130] ! I0318 13:04:00.611541       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:11:05.216878    2404 command_runner.go:130] ! I0318 13:04:00.611572       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:11:05.216878    2404 command_runner.go:130] ! I0318 13:04:10.624814       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:05.216878    2404 command_runner.go:130] ! I0318 13:04:10.624904       1 main.go:227] handling current node
	I0318 13:11:05.216878    2404 command_runner.go:130] ! I0318 13:04:10.624920       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:05.216878    2404 command_runner.go:130] ! I0318 13:04:10.624927       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:05.216878    2404 command_runner.go:130] ! I0318 13:04:10.625504       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:11:05.216878    2404 command_runner.go:130] ! I0318 13:04:10.625547       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:11:05.216878    2404 command_runner.go:130] ! I0318 13:04:20.640319       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:05.216878    2404 command_runner.go:130] ! I0318 13:04:20.640364       1 main.go:227] handling current node
	I0318 13:11:05.216878    2404 command_runner.go:130] ! I0318 13:04:20.640379       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:05.216878    2404 command_runner.go:130] ! I0318 13:04:20.640386       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:05.216878    2404 command_runner.go:130] ! I0318 13:04:20.640865       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:11:05.216878    2404 command_runner.go:130] ! I0318 13:04:20.640901       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:11:05.216878    2404 command_runner.go:130] ! I0318 13:04:30.648021       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:05.216878    2404 command_runner.go:130] ! I0318 13:04:30.648134       1 main.go:227] handling current node
	I0318 13:11:05.216878    2404 command_runner.go:130] ! I0318 13:04:30.648148       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:05.216878    2404 command_runner.go:130] ! I0318 13:04:30.648156       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:05.216878    2404 command_runner.go:130] ! I0318 13:04:30.648313       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:11:05.216878    2404 command_runner.go:130] ! I0318 13:04:30.648344       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:11:05.216878    2404 command_runner.go:130] ! I0318 13:04:40.663577       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:05.216878    2404 command_runner.go:130] ! I0318 13:04:40.663749       1 main.go:227] handling current node
	I0318 13:11:05.216878    2404 command_runner.go:130] ! I0318 13:04:40.663765       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:05.216878    2404 command_runner.go:130] ! I0318 13:04:40.663774       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:05.216878    2404 command_runner.go:130] ! I0318 13:04:40.663896       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:11:05.216878    2404 command_runner.go:130] ! I0318 13:04:40.663929       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:11:05.216878    2404 command_runner.go:130] ! I0318 13:04:50.669717       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:05.217467    2404 command_runner.go:130] ! I0318 13:04:50.669791       1 main.go:227] handling current node
	I0318 13:11:05.217467    2404 command_runner.go:130] ! I0318 13:04:50.669805       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:05.217467    2404 command_runner.go:130] ! I0318 13:04:50.669812       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:05.217467    2404 command_runner.go:130] ! I0318 13:04:50.670128       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:11:05.217467    2404 command_runner.go:130] ! I0318 13:04:50.670230       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:11:05.217467    2404 command_runner.go:130] ! I0318 13:05:00.686596       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:05.217467    2404 command_runner.go:130] ! I0318 13:05:00.686809       1 main.go:227] handling current node
	I0318 13:11:05.217467    2404 command_runner.go:130] ! I0318 13:05:00.686942       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:05.217580    2404 command_runner.go:130] ! I0318 13:05:00.687116       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:05.217614    2404 command_runner.go:130] ! I0318 13:05:00.687370       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:11:05.217614    2404 command_runner.go:130] ! I0318 13:05:00.687441       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:11:05.217614    2404 command_runner.go:130] ! I0318 13:05:10.704297       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:05.217614    2404 command_runner.go:130] ! I0318 13:05:10.704404       1 main.go:227] handling current node
	I0318 13:11:05.217614    2404 command_runner.go:130] ! I0318 13:05:10.704426       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:05.217614    2404 command_runner.go:130] ! I0318 13:05:10.704555       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:05.217614    2404 command_runner.go:130] ! I0318 13:05:10.704810       1 main.go:223] Handling node with IPs: map[172.30.136.109:{}]
	I0318 13:11:05.217614    2404 command_runner.go:130] ! I0318 13:05:10.704878       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.2.0/24] 
	I0318 13:11:05.217614    2404 command_runner.go:130] ! I0318 13:05:20.722958       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:05.217614    2404 command_runner.go:130] ! I0318 13:05:20.723127       1 main.go:227] handling current node
	I0318 13:11:05.217614    2404 command_runner.go:130] ! I0318 13:05:20.723145       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:05.217614    2404 command_runner.go:130] ! I0318 13:05:20.723159       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:05.217614    2404 command_runner.go:130] ! I0318 13:05:30.731764       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:05.217614    2404 command_runner.go:130] ! I0318 13:05:30.731841       1 main.go:227] handling current node
	I0318 13:11:05.217614    2404 command_runner.go:130] ! I0318 13:05:30.731854       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:05.217614    2404 command_runner.go:130] ! I0318 13:05:30.731861       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:05.217614    2404 command_runner.go:130] ! I0318 13:05:30.732029       1 main.go:223] Handling node with IPs: map[172.30.137.140:{}]
	I0318 13:11:05.217614    2404 command_runner.go:130] ! I0318 13:05:30.732163       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.3.0/24] 
	I0318 13:11:05.217614    2404 command_runner.go:130] ! I0318 13:05:30.732544       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 172.30.137.140 Flags: [] Table: 0} 
	I0318 13:11:05.217614    2404 command_runner.go:130] ! I0318 13:05:40.739849       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:05.217614    2404 command_runner.go:130] ! I0318 13:05:40.739939       1 main.go:227] handling current node
	I0318 13:11:05.217614    2404 command_runner.go:130] ! I0318 13:05:40.739953       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:05.217614    2404 command_runner.go:130] ! I0318 13:05:40.739960       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:05.217614    2404 command_runner.go:130] ! I0318 13:05:40.740081       1 main.go:223] Handling node with IPs: map[172.30.137.140:{}]
	I0318 13:11:05.217614    2404 command_runner.go:130] ! I0318 13:05:40.740151       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.3.0/24] 
	I0318 13:11:05.217614    2404 command_runner.go:130] ! I0318 13:05:50.748036       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:05.217614    2404 command_runner.go:130] ! I0318 13:05:50.748465       1 main.go:227] handling current node
	I0318 13:11:05.217614    2404 command_runner.go:130] ! I0318 13:05:50.748942       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:05.217614    2404 command_runner.go:130] ! I0318 13:05:50.749055       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:05.217614    2404 command_runner.go:130] ! I0318 13:05:50.749287       1 main.go:223] Handling node with IPs: map[172.30.137.140:{}]
	I0318 13:11:05.217614    2404 command_runner.go:130] ! I0318 13:05:50.749413       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.3.0/24] 
	I0318 13:11:05.217614    2404 command_runner.go:130] ! I0318 13:06:00.757350       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:05.217614    2404 command_runner.go:130] ! I0318 13:06:00.757434       1 main.go:227] handling current node
	I0318 13:11:05.217614    2404 command_runner.go:130] ! I0318 13:06:00.757452       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:05.217614    2404 command_runner.go:130] ! I0318 13:06:00.757460       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:05.217614    2404 command_runner.go:130] ! I0318 13:06:00.757853       1 main.go:223] Handling node with IPs: map[172.30.137.140:{}]
	I0318 13:11:05.217614    2404 command_runner.go:130] ! I0318 13:06:00.758194       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.3.0/24] 
	I0318 13:11:05.217614    2404 command_runner.go:130] ! I0318 13:06:10.766768       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:05.217614    2404 command_runner.go:130] ! I0318 13:06:10.766886       1 main.go:227] handling current node
	I0318 13:11:05.217614    2404 command_runner.go:130] ! I0318 13:06:10.766901       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:05.217614    2404 command_runner.go:130] ! I0318 13:06:10.766910       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:05.217614    2404 command_runner.go:130] ! I0318 13:06:10.767143       1 main.go:223] Handling node with IPs: map[172.30.137.140:{}]
	I0318 13:11:05.217614    2404 command_runner.go:130] ! I0318 13:06:10.767175       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.3.0/24] 
	I0318 13:11:05.217614    2404 command_runner.go:130] ! I0318 13:06:20.773530       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:05.217614    2404 command_runner.go:130] ! I0318 13:06:20.773656       1 main.go:227] handling current node
	I0318 13:11:05.217614    2404 command_runner.go:130] ! I0318 13:06:20.773729       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:05.217614    2404 command_runner.go:130] ! I0318 13:06:20.773741       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:05.218212    2404 command_runner.go:130] ! I0318 13:06:20.774155       1 main.go:223] Handling node with IPs: map[172.30.137.140:{}]
	I0318 13:11:05.218212    2404 command_runner.go:130] ! I0318 13:06:20.774478       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.3.0/24] 
	I0318 13:11:05.218212    2404 command_runner.go:130] ! I0318 13:06:30.792219       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:05.218212    2404 command_runner.go:130] ! I0318 13:06:30.792349       1 main.go:227] handling current node
	I0318 13:11:05.218274    2404 command_runner.go:130] ! I0318 13:06:30.792364       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:05.218274    2404 command_runner.go:130] ! I0318 13:06:30.792373       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:05.218274    2404 command_runner.go:130] ! I0318 13:06:30.792864       1 main.go:223] Handling node with IPs: map[172.30.137.140:{}]
	I0318 13:11:05.218274    2404 command_runner.go:130] ! I0318 13:06:30.792901       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.3.0/24] 
	I0318 13:11:05.218274    2404 command_runner.go:130] ! I0318 13:06:40.809219       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:05.218274    2404 command_runner.go:130] ! I0318 13:06:40.809451       1 main.go:227] handling current node
	I0318 13:11:05.218374    2404 command_runner.go:130] ! I0318 13:06:40.809484       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:05.218374    2404 command_runner.go:130] ! I0318 13:06:40.809508       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:05.218450    2404 command_runner.go:130] ! I0318 13:06:40.809841       1 main.go:223] Handling node with IPs: map[172.30.137.140:{}]
	I0318 13:11:05.218478    2404 command_runner.go:130] ! I0318 13:06:40.810075       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.3.0/24] 
	I0318 13:11:05.218478    2404 command_runner.go:130] ! I0318 13:06:50.822556       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:05.218478    2404 command_runner.go:130] ! I0318 13:06:50.822612       1 main.go:227] handling current node
	I0318 13:11:05.218478    2404 command_runner.go:130] ! I0318 13:06:50.822667       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:05.218478    2404 command_runner.go:130] ! I0318 13:06:50.822680       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:05.218478    2404 command_runner.go:130] ! I0318 13:06:50.822925       1 main.go:223] Handling node with IPs: map[172.30.137.140:{}]
	I0318 13:11:05.218478    2404 command_runner.go:130] ! I0318 13:06:50.823171       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.3.0/24] 
	I0318 13:11:05.218478    2404 command_runner.go:130] ! I0318 13:07:00.837923       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:05.218478    2404 command_runner.go:130] ! I0318 13:07:00.838008       1 main.go:227] handling current node
	I0318 13:11:05.218478    2404 command_runner.go:130] ! I0318 13:07:00.838022       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:05.218478    2404 command_runner.go:130] ! I0318 13:07:00.838030       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:05.218478    2404 command_runner.go:130] ! I0318 13:07:00.838429       1 main.go:223] Handling node with IPs: map[172.30.137.140:{}]
	I0318 13:11:05.218478    2404 command_runner.go:130] ! I0318 13:07:00.838666       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.3.0/24] 
	I0318 13:11:05.218478    2404 command_runner.go:130] ! I0318 13:07:10.854207       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:05.218478    2404 command_runner.go:130] ! I0318 13:07:10.854411       1 main.go:227] handling current node
	I0318 13:11:05.218478    2404 command_runner.go:130] ! I0318 13:07:10.854444       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:05.218478    2404 command_runner.go:130] ! I0318 13:07:10.854469       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:05.218478    2404 command_runner.go:130] ! I0318 13:07:10.854879       1 main.go:223] Handling node with IPs: map[172.30.137.140:{}]
	I0318 13:11:05.218478    2404 command_runner.go:130] ! I0318 13:07:10.855094       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.3.0/24] 
	I0318 13:11:05.218478    2404 command_runner.go:130] ! I0318 13:07:20.861534       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:11:05.218478    2404 command_runner.go:130] ! I0318 13:07:20.861671       1 main.go:227] handling current node
	I0318 13:11:05.218478    2404 command_runner.go:130] ! I0318 13:07:20.861685       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:11:05.218478    2404 command_runner.go:130] ! I0318 13:07:20.861692       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:11:05.218478    2404 command_runner.go:130] ! I0318 13:07:20.861818       1 main.go:223] Handling node with IPs: map[172.30.137.140:{}]
	I0318 13:11:05.218478    2404 command_runner.go:130] ! I0318 13:07:20.861845       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.3.0/24] 
	I0318 13:11:05.234629    2404 logs.go:123] Gathering logs for container status ...
	I0318 13:11:05.234629    2404 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:11:05.334578    2404 command_runner.go:130] > CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	I0318 13:11:05.334636    2404 command_runner.go:130] > c5d2074be239f       8c811b4aec35f                                                                                         12 seconds ago       Running             busybox                   1                   e20878b8092c2       busybox-5b5d89c9d6-c2997
	I0318 13:11:05.334636    2404 command_runner.go:130] > 3c3bc988c74cd       ead0a4a53df89                                                                                         12 seconds ago       Running             coredns                   1                   97583cc14f115       coredns-5dd5756b68-456tm
	I0318 13:11:05.334731    2404 command_runner.go:130] > eadcf41dad509       6e38f40d628db                                                                                         30 seconds ago       Running             storage-provisioner       2                   41035eff3b7db       storage-provisioner
	I0318 13:11:05.334765    2404 command_runner.go:130] > c8e5ec25e910e       4950bb10b3f87                                                                                         About a minute ago   Running             kindnet-cni               1                   86d74dec812cf       kindnet-hhsxh
	I0318 13:11:05.334765    2404 command_runner.go:130] > 46c0cf90d385f       6e38f40d628db                                                                                         About a minute ago   Exited              storage-provisioner       1                   41035eff3b7db       storage-provisioner
	I0318 13:11:05.334765    2404 command_runner.go:130] > 163ccabc3882a       83f6cc407eed8                                                                                         About a minute ago   Running             kube-proxy                1                   a9f21749669fe       kube-proxy-mc5tv
	I0318 13:11:05.334821    2404 command_runner.go:130] > 5f0887d1e6913       73deb9a3f7025                                                                                         About a minute ago   Running             etcd                      0                   354f3c44a34fc       etcd-multinode-894400
	I0318 13:11:05.334821    2404 command_runner.go:130] > 66ee8be9fada7       e3db313c6dbc0                                                                                         About a minute ago   Running             kube-scheduler            1                   6fb3325d3c100       kube-scheduler-multinode-894400
	I0318 13:11:05.334873    2404 command_runner.go:130] > fc4430c7fa204       7fe0e6f37db33                                                                                         About a minute ago   Running             kube-apiserver            0                   bc7236a19957e       kube-apiserver-multinode-894400
	I0318 13:11:05.334873    2404 command_runner.go:130] > 4ad6784a187d6       d058aa5ab969c                                                                                         About a minute ago   Running             kube-controller-manager   1                   066206d4c52cb       kube-controller-manager-multinode-894400
	I0318 13:11:05.334873    2404 command_runner.go:130] > dd031b5cb1e85       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   19 minutes ago       Exited              busybox                   0                   a23c1189be7c3       busybox-5b5d89c9d6-c2997
	I0318 13:11:05.334873    2404 command_runner.go:130] > 693a64f7472fd       ead0a4a53df89                                                                                         23 minutes ago       Exited              coredns                   0                   d001e299e996b       coredns-5dd5756b68-456tm
	I0318 13:11:05.334873    2404 command_runner.go:130] > c4d7018ad23a7       kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988              23 minutes ago       Exited              kindnet-cni               0                   a47b1fb60692c       kindnet-hhsxh
	I0318 13:11:05.334979    2404 command_runner.go:130] > 9335855aab63d       83f6cc407eed8                                                                                         23 minutes ago       Exited              kube-proxy                0                   60e9cd749c8f6       kube-proxy-mc5tv
	I0318 13:11:05.334979    2404 command_runner.go:130] > e4d42739ce0e9       e3db313c6dbc0                                                                                         23 minutes ago       Exited              kube-scheduler            0                   82710777e700c       kube-scheduler-multinode-894400
	I0318 13:11:05.335046    2404 command_runner.go:130] > 7aa5cf4ec378e       d058aa5ab969c                                                                                         23 minutes ago       Exited              kube-controller-manager   0                   5485f509825d9       kube-controller-manager-multinode-894400
	I0318 13:11:07.839606    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/namespaces/kube-system/pods
	I0318 13:11:07.839606    2404 round_trippers.go:469] Request Headers:
	I0318 13:11:07.839606    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:11:07.839606    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:11:07.845393    2404 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 13:11:07.845510    2404 round_trippers.go:577] Response Headers:
	I0318 13:11:07.845510    2404 round_trippers.go:580]     Audit-Id: d0e9f04a-9114-4987-87c5-0da78416c885
	I0318 13:11:07.845510    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:11:07.845510    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:11:07.845510    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:11:07.845510    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:11:07.845510    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:11:07 GMT
	I0318 13:11:07.847055    2404 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1931"},"items":[{"metadata":{"name":"coredns-5dd5756b68-456tm","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"1a018c55-846b-4dc2-992c-dc8fd82a6c67","resourceVersion":"1918","creationTimestamp":"2024-03-18T12:47:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"24e9a67e-3fda-47e7-8dcb-794b1920d0f1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:47:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"24e9a67e-3fda-47e7-8dcb-794b1920d0f1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 83055 chars]
	I0318 13:11:07.850800    2404 system_pods.go:59] 12 kube-system pods found
	I0318 13:11:07.850800    2404 system_pods.go:61] "coredns-5dd5756b68-456tm" [1a018c55-846b-4dc2-992c-dc8fd82a6c67] Running
	I0318 13:11:07.850800    2404 system_pods.go:61] "etcd-multinode-894400" [d4c040b9-a604-4a0d-80ee-7436541af60c] Running
	I0318 13:11:07.850800    2404 system_pods.go:61] "kindnet-hhsxh" [0161d239-2d85-4246-b2fa-6c7374f2ecd6] Running
	I0318 13:11:07.850800    2404 system_pods.go:61] "kindnet-k5lpg" [c5e4099b-0611-4ebd-a7a5-ecdbeb168c5b] Running
	I0318 13:11:07.850800    2404 system_pods.go:61] "kindnet-zv9tv" [c4d70517-d7fb-4344-b2a4-20e40c13ab53] Running
	I0318 13:11:07.850800    2404 system_pods.go:61] "kube-apiserver-multinode-894400" [46152b8e-0bda-427e-a1ad-c79506b56763] Running
	I0318 13:11:07.850800    2404 system_pods.go:61] "kube-controller-manager-multinode-894400" [4ad5fc15-53ba-4ebb-9a63-b8572cd9c834] Running
	I0318 13:11:07.850800    2404 system_pods.go:61] "kube-proxy-745w9" [d385fe06-f516-440d-b9ed-37c2d4a81050] Running
	I0318 13:11:07.850800    2404 system_pods.go:61] "kube-proxy-8bdmn" [5c266b8a-9665-4365-93c6-2b5f1699d3ef] Running
	I0318 13:11:07.850800    2404 system_pods.go:61] "kube-proxy-mc5tv" [0afe25f8-cbd6-412b-8698-7b547d1d49ca] Running
	I0318 13:11:07.850800    2404 system_pods.go:61] "kube-scheduler-multinode-894400" [f47703ce-5a82-466e-ac8e-ef6b8cc07e6c] Running
	I0318 13:11:07.850800    2404 system_pods.go:61] "storage-provisioner" [219bafbc-d807-44cf-9927-e4957f36ad70] Running
	I0318 13:11:07.850800    2404 system_pods.go:74] duration metric: took 3.7252527s to wait for pod list to return data ...
	I0318 13:11:07.850800    2404 default_sa.go:34] waiting for default service account to be created ...
	I0318 13:11:07.850991    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/namespaces/default/serviceaccounts
	I0318 13:11:07.851068    2404 round_trippers.go:469] Request Headers:
	I0318 13:11:07.851068    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:11:07.851068    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:11:07.857488    2404 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0318 13:11:07.857488    2404 round_trippers.go:577] Response Headers:
	I0318 13:11:07.857488    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:11:07.857488    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:11:07.857488    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:11:07.857488    2404 round_trippers.go:580]     Content-Length: 262
	I0318 13:11:07.857488    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:11:07 GMT
	I0318 13:11:07.857488    2404 round_trippers.go:580]     Audit-Id: b40916a2-07b4-4244-b553-de14255fe242
	I0318 13:11:07.857488    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:11:07.857488    2404 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"1931"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"17315183-b28f-4dc0-9fbf-c6e55ed5b7f0","resourceVersion":"330","creationTimestamp":"2024-03-18T12:47:41Z"}}]}
	I0318 13:11:07.857488    2404 default_sa.go:45] found service account: "default"
	I0318 13:11:07.857488    2404 default_sa.go:55] duration metric: took 6.6882ms for default service account to be created ...
	I0318 13:11:07.857488    2404 system_pods.go:116] waiting for k8s-apps to be running ...
	I0318 13:11:07.858086    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/namespaces/kube-system/pods
	I0318 13:11:07.858086    2404 round_trippers.go:469] Request Headers:
	I0318 13:11:07.858086    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:11:07.858086    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:11:07.863358    2404 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 13:11:07.863825    2404 round_trippers.go:577] Response Headers:
	I0318 13:11:07.863825    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:11:07 GMT
	I0318 13:11:07.863825    2404 round_trippers.go:580]     Audit-Id: c094dc2d-1736-42a9-99fa-0e52616eb725
	I0318 13:11:07.863825    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:11:07.863825    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:11:07.863825    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:11:07.863825    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:11:07.865577    2404 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1931"},"items":[{"metadata":{"name":"coredns-5dd5756b68-456tm","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"1a018c55-846b-4dc2-992c-dc8fd82a6c67","resourceVersion":"1918","creationTimestamp":"2024-03-18T12:47:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"24e9a67e-3fda-47e7-8dcb-794b1920d0f1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:47:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"24e9a67e-3fda-47e7-8dcb-794b1920d0f1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 83055 chars]
	I0318 13:11:07.868850    2404 system_pods.go:86] 12 kube-system pods found
	I0318 13:11:07.868872    2404 system_pods.go:89] "coredns-5dd5756b68-456tm" [1a018c55-846b-4dc2-992c-dc8fd82a6c67] Running
	I0318 13:11:07.868872    2404 system_pods.go:89] "etcd-multinode-894400" [d4c040b9-a604-4a0d-80ee-7436541af60c] Running
	I0318 13:11:07.868872    2404 system_pods.go:89] "kindnet-hhsxh" [0161d239-2d85-4246-b2fa-6c7374f2ecd6] Running
	I0318 13:11:07.868872    2404 system_pods.go:89] "kindnet-k5lpg" [c5e4099b-0611-4ebd-a7a5-ecdbeb168c5b] Running
	I0318 13:11:07.868872    2404 system_pods.go:89] "kindnet-zv9tv" [c4d70517-d7fb-4344-b2a4-20e40c13ab53] Running
	I0318 13:11:07.868872    2404 system_pods.go:89] "kube-apiserver-multinode-894400" [46152b8e-0bda-427e-a1ad-c79506b56763] Running
	I0318 13:11:07.868918    2404 system_pods.go:89] "kube-controller-manager-multinode-894400" [4ad5fc15-53ba-4ebb-9a63-b8572cd9c834] Running
	I0318 13:11:07.868918    2404 system_pods.go:89] "kube-proxy-745w9" [d385fe06-f516-440d-b9ed-37c2d4a81050] Running
	I0318 13:11:07.868918    2404 system_pods.go:89] "kube-proxy-8bdmn" [5c266b8a-9665-4365-93c6-2b5f1699d3ef] Running
	I0318 13:11:07.868918    2404 system_pods.go:89] "kube-proxy-mc5tv" [0afe25f8-cbd6-412b-8698-7b547d1d49ca] Running
	I0318 13:11:07.868918    2404 system_pods.go:89] "kube-scheduler-multinode-894400" [f47703ce-5a82-466e-ac8e-ef6b8cc07e6c] Running
	I0318 13:11:07.868918    2404 system_pods.go:89] "storage-provisioner" [219bafbc-d807-44cf-9927-e4957f36ad70] Running
	I0318 13:11:07.869050    2404 system_pods.go:126] duration metric: took 11.4975ms to wait for k8s-apps to be running ...
	I0318 13:11:07.869050    2404 system_svc.go:44] waiting for kubelet service to be running ....
	I0318 13:11:07.881488    2404 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 13:11:07.904911    2404 system_svc.go:56] duration metric: took 35.587ms WaitForService to wait for kubelet
	I0318 13:11:07.904911    2404 kubeadm.go:576] duration metric: took 1m14.2461794s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 13:11:07.904975    2404 node_conditions.go:102] verifying NodePressure condition ...
	I0318 13:11:07.905063    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes
	I0318 13:11:07.905063    2404 round_trippers.go:469] Request Headers:
	I0318 13:11:07.905120    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:11:07.905120    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:11:07.908249    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:11:07.908249    2404 round_trippers.go:577] Response Headers:
	I0318 13:11:07.908249    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:11:07.908249    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:11:07.908249    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:11:07.908249    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:11:07 GMT
	I0318 13:11:07.908249    2404 round_trippers.go:580]     Audit-Id: 5d546d88-7103-4952-a0ed-39d5975946b7
	I0318 13:11:07.908249    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:11:07.909398    2404 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1931"},"items":[{"metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1877","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFi
elds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 16258 chars]
	I0318 13:11:07.910360    2404 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0318 13:11:07.910360    2404 node_conditions.go:123] node cpu capacity is 2
	I0318 13:11:07.910430    2404 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0318 13:11:07.910430    2404 node_conditions.go:123] node cpu capacity is 2
	I0318 13:11:07.910430    2404 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0318 13:11:07.910430    2404 node_conditions.go:123] node cpu capacity is 2
	I0318 13:11:07.910430    2404 node_conditions.go:105] duration metric: took 5.455ms to run NodePressure ...
	I0318 13:11:07.910430    2404 start.go:240] waiting for startup goroutines ...
	I0318 13:11:07.910430    2404 start.go:245] waiting for cluster config update ...
	I0318 13:11:07.910491    2404 start.go:254] writing updated cluster config ...
	I0318 13:11:07.915142    2404 out.go:177] 
	I0318 13:11:07.918608    2404 config.go:182] Loaded profile config "ha-747000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 13:11:07.925512    2404 config.go:182] Loaded profile config "multinode-894400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 13:11:07.925512    2404 profile.go:142] Saving config to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-894400\config.json ...
	I0318 13:11:07.931516    2404 out.go:177] * Starting "multinode-894400-m02" worker node in "multinode-894400" cluster
	I0318 13:11:07.934610    2404 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0318 13:11:07.934610    2404 cache.go:56] Caching tarball of preloaded images
	I0318 13:11:07.934610    2404 preload.go:173] Found C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0318 13:11:07.934610    2404 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0318 13:11:07.934610    2404 profile.go:142] Saving config to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-894400\config.json ...
	I0318 13:11:07.937263    2404 start.go:360] acquireMachinesLock for multinode-894400-m02: {Name:mk88ace50ad3bf72786f3a589a5328076247f3a1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 13:11:07.937840    2404 start.go:364] duration metric: took 577.7µs to acquireMachinesLock for "multinode-894400-m02"
	I0318 13:11:07.937999    2404 start.go:96] Skipping create...Using existing machine configuration
	I0318 13:11:07.938044    2404 fix.go:54] fixHost starting: m02
	I0318 13:11:07.938202    2404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-894400-m02 ).state
	I0318 13:11:09.994288    2404 main.go:141] libmachine: [stdout =====>] : Off
	
	I0318 13:11:09.994384    2404 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:11:09.994384    2404 fix.go:112] recreateIfNeeded on multinode-894400-m02: state=Stopped err=<nil>
	W0318 13:11:09.994384    2404 fix.go:138] unexpected machine state, will restart: <nil>
	I0318 13:11:09.997399    2404 out.go:177] * Restarting existing hyperv VM for "multinode-894400-m02" ...
	I0318 13:11:10.002328    2404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-894400-m02
	I0318 13:11:12.983495    2404 main.go:141] libmachine: [stdout =====>] : 
	I0318 13:11:12.983495    2404 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:11:12.983723    2404 main.go:141] libmachine: Waiting for host to start...
	I0318 13:11:12.983723    2404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-894400-m02 ).state
	I0318 13:11:15.174567    2404 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 13:11:15.174567    2404 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:11:15.174647    2404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-894400-m02 ).networkadapters[0]).ipaddresses[0]
	I0318 13:11:17.575407    2404 main.go:141] libmachine: [stdout =====>] : 
	I0318 13:11:17.576434    2404 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:11:18.590965    2404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-894400-m02 ).state
	I0318 13:11:20.787759    2404 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 13:11:20.787759    2404 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:11:20.787759    2404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-894400-m02 ).networkadapters[0]).ipaddresses[0]
	I0318 13:11:23.262491    2404 main.go:141] libmachine: [stdout =====>] : 
	I0318 13:11:23.262491    2404 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:11:24.268327    2404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-894400-m02 ).state
	I0318 13:11:26.368298    2404 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 13:11:26.369394    2404 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:11:26.369394    2404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-894400-m02 ).networkadapters[0]).ipaddresses[0]
	I0318 13:11:28.866699    2404 main.go:141] libmachine: [stdout =====>] : 
	I0318 13:11:28.866699    2404 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:11:29.869672    2404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-894400-m02 ).state
	I0318 13:11:31.986818    2404 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 13:11:31.987538    2404 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:11:31.987538    2404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-894400-m02 ).networkadapters[0]).ipaddresses[0]
	I0318 13:11:34.398016    2404 main.go:141] libmachine: [stdout =====>] : 
	I0318 13:11:34.398977    2404 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:11:35.407415    2404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-894400-m02 ).state
	I0318 13:11:37.472632    2404 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 13:11:37.472632    2404 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:11:37.472736    2404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-894400-m02 ).networkadapters[0]).ipaddresses[0]
	I0318 13:11:39.878741    2404 main.go:141] libmachine: [stdout =====>] : 172.30.130.185
	
	I0318 13:11:39.879447    2404 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:11:39.883040    2404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-894400-m02 ).state
	I0318 13:11:41.925033    2404 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 13:11:41.925033    2404 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:11:41.925756    2404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-894400-m02 ).networkadapters[0]).ipaddresses[0]
	I0318 13:11:44.357885    2404 main.go:141] libmachine: [stdout =====>] : 172.30.130.185
	
	I0318 13:11:44.357932    2404 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:11:44.358091    2404 profile.go:142] Saving config to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-894400\config.json ...
	I0318 13:11:44.360661    2404 machine.go:94] provisionDockerMachine start ...
	I0318 13:11:44.360661    2404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-894400-m02 ).state
	I0318 13:11:46.391305    2404 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 13:11:46.391305    2404 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:11:46.392143    2404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-894400-m02 ).networkadapters[0]).ipaddresses[0]
	I0318 13:11:48.854189    2404 main.go:141] libmachine: [stdout =====>] : 172.30.130.185
	
	I0318 13:11:48.855236    2404 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:11:48.861171    2404 main.go:141] libmachine: Using SSH client type: native
	I0318 13:11:48.861812    2404 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa89f80] 0xa8cb60 <nil>  [] 0s} 172.30.130.185 22 <nil> <nil>}
	I0318 13:11:48.861812    2404 main.go:141] libmachine: About to run SSH command:
	hostname
	I0318 13:11:48.982669    2404 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0318 13:11:48.982748    2404 buildroot.go:166] provisioning hostname "multinode-894400-m02"
	I0318 13:11:48.982748    2404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-894400-m02 ).state
	I0318 13:11:50.983773    2404 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 13:11:50.984659    2404 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:11:50.984659    2404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-894400-m02 ).networkadapters[0]).ipaddresses[0]
	I0318 13:11:53.385199    2404 main.go:141] libmachine: [stdout =====>] : 172.30.130.185
	
	I0318 13:11:53.385199    2404 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:11:53.391565    2404 main.go:141] libmachine: Using SSH client type: native
	I0318 13:11:53.391702    2404 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa89f80] 0xa8cb60 <nil>  [] 0s} 172.30.130.185 22 <nil> <nil>}
	I0318 13:11:53.391702    2404 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-894400-m02 && echo "multinode-894400-m02" | sudo tee /etc/hostname
	I0318 13:11:53.537642    2404 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-894400-m02
	
	I0318 13:11:53.537642    2404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-894400-m02 ).state
	I0318 13:11:55.549653    2404 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 13:11:55.550118    2404 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:11:55.550196    2404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-894400-m02 ).networkadapters[0]).ipaddresses[0]
	I0318 13:11:57.999231    2404 main.go:141] libmachine: [stdout =====>] : 172.30.130.185
	
	I0318 13:11:58.000224    2404 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:11:58.007457    2404 main.go:141] libmachine: Using SSH client type: native
	I0318 13:11:58.009541    2404 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa89f80] 0xa8cb60 <nil>  [] 0s} 172.30.130.185 22 <nil> <nil>}
	I0318 13:11:58.009541    2404 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-894400-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-894400-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-894400-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0318 13:11:58.159423    2404 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0318 13:11:58.159423    2404 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube3\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube3\minikube-integration\.minikube}
	I0318 13:11:58.159423    2404 buildroot.go:174] setting up certificates
	I0318 13:11:58.159423    2404 provision.go:84] configureAuth start
	I0318 13:11:58.159423    2404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-894400-m02 ).state
	I0318 13:12:00.199540    2404 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 13:12:00.199540    2404 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:12:00.199540    2404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-894400-m02 ).networkadapters[0]).ipaddresses[0]
	I0318 13:12:02.652920    2404 main.go:141] libmachine: [stdout =====>] : 172.30.130.185
	
	I0318 13:12:02.652920    2404 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:12:02.653115    2404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-894400-m02 ).state
	I0318 13:12:04.734456    2404 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 13:12:04.734456    2404 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:12:04.734456    2404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-894400-m02 ).networkadapters[0]).ipaddresses[0]
	I0318 13:12:07.180833    2404 main.go:141] libmachine: [stdout =====>] : 172.30.130.185
	
	I0318 13:12:07.180833    2404 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:12:07.181808    2404 provision.go:143] copyHostCerts
	I0318 13:12:07.181961    2404 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube3\minikube-integration\.minikube/ca.pem
	I0318 13:12:07.182303    2404 exec_runner.go:144] found C:\Users\jenkins.minikube3\minikube-integration\.minikube/ca.pem, removing ...
	I0318 13:12:07.182303    2404 exec_runner.go:203] rm: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.pem
	I0318 13:12:07.182726    2404 exec_runner.go:151] cp: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube3\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0318 13:12:07.183786    2404 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube3\minikube-integration\.minikube/cert.pem
	I0318 13:12:07.183852    2404 exec_runner.go:144] found C:\Users\jenkins.minikube3\minikube-integration\.minikube/cert.pem, removing ...
	I0318 13:12:07.183852    2404 exec_runner.go:203] rm: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cert.pem
	I0318 13:12:07.183852    2404 exec_runner.go:151] cp: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube3\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0318 13:12:07.185196    2404 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube3\minikube-integration\.minikube/key.pem
	I0318 13:12:07.185534    2404 exec_runner.go:144] found C:\Users\jenkins.minikube3\minikube-integration\.minikube/key.pem, removing ...
	I0318 13:12:07.185615    2404 exec_runner.go:203] rm: C:\Users\jenkins.minikube3\minikube-integration\.minikube\key.pem
	I0318 13:12:07.185841    2404 exec_runner.go:151] cp: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube3\minikube-integration\.minikube/key.pem (1679 bytes)
	I0318 13:12:07.186897    2404 provision.go:117] generating server cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-894400-m02 san=[127.0.0.1 172.30.130.185 localhost minikube multinode-894400-m02]
	I0318 13:12:07.408633    2404 provision.go:177] copyRemoteCerts
	I0318 13:12:07.422089    2404 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0318 13:12:07.422166    2404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-894400-m02 ).state
	I0318 13:12:09.529122    2404 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 13:12:09.529158    2404 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:12:09.529227    2404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-894400-m02 ).networkadapters[0]).ipaddresses[0]
	I0318 13:12:12.079818    2404 main.go:141] libmachine: [stdout =====>] : 172.30.130.185
	
	I0318 13:12:12.079818    2404 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:12:12.080951    2404 sshutil.go:53] new ssh client: &{IP:172.30.130.185 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\multinode-894400-m02\id_rsa Username:docker}
	I0318 13:12:12.189055    2404 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.7668847s)
	I0318 13:12:12.189055    2404 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0318 13:12:12.189055    2404 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0318 13:12:12.229946    2404 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0318 13:12:12.230004    2404 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1229 bytes)
	I0318 13:12:12.272060    2404 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0318 13:12:12.272060    2404 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0318 13:12:12.313905    2404 provision.go:87] duration metric: took 14.1543784s to configureAuth
	I0318 13:12:12.313905    2404 buildroot.go:189] setting minikube options for container-runtime
	I0318 13:12:12.314732    2404 config.go:182] Loaded profile config "multinode-894400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 13:12:12.314928    2404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-894400-m02 ).state
	I0318 13:12:14.336233    2404 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 13:12:14.336813    2404 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:12:14.336813    2404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-894400-m02 ).networkadapters[0]).ipaddresses[0]
	I0318 13:12:16.732010    2404 main.go:141] libmachine: [stdout =====>] : 172.30.130.185
	
	I0318 13:12:16.732010    2404 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:12:16.738261    2404 main.go:141] libmachine: Using SSH client type: native
	I0318 13:12:16.738870    2404 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa89f80] 0xa8cb60 <nil>  [] 0s} 172.30.130.185 22 <nil> <nil>}
	I0318 13:12:16.738870    2404 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0318 13:12:16.868485    2404 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0318 13:12:16.868544    2404 buildroot.go:70] root file system type: tmpfs
	I0318 13:12:16.868784    2404 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0318 13:12:16.868784    2404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-894400-m02 ).state
	I0318 13:12:18.909963    2404 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 13:12:18.909963    2404 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:12:18.910779    2404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-894400-m02 ).networkadapters[0]).ipaddresses[0]
	I0318 13:12:21.361089    2404 main.go:141] libmachine: [stdout =====>] : 172.30.130.185
	
	I0318 13:12:21.361594    2404 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:12:21.367439    2404 main.go:141] libmachine: Using SSH client type: native
	I0318 13:12:21.367817    2404 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa89f80] 0xa8cb60 <nil>  [] 0s} 172.30.130.185 22 <nil> <nil>}
	I0318 13:12:21.368023    2404 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.30.130.156"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0318 13:12:21.525431    2404 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.30.130.156
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0318 13:12:21.525577    2404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-894400-m02 ).state
	I0318 13:12:23.541908    2404 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 13:12:23.542838    2404 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:12:23.542838    2404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-894400-m02 ).networkadapters[0]).ipaddresses[0]
	I0318 13:12:25.954793    2404 main.go:141] libmachine: [stdout =====>] : 172.30.130.185
	
	I0318 13:12:25.955133    2404 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:12:25.960269    2404 main.go:141] libmachine: Using SSH client type: native
	I0318 13:12:25.961002    2404 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa89f80] 0xa8cb60 <nil>  [] 0s} 172.30.130.185 22 <nil> <nil>}
	I0318 13:12:25.961002    2404 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0318 13:12:28.191547    2404 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0318 13:12:28.191547    2404 machine.go:97] duration metric: took 43.8305662s to provisionDockerMachine
	I0318 13:12:28.191547    2404 start.go:293] postStartSetup for "multinode-894400-m02" (driver="hyperv")
	I0318 13:12:28.191547    2404 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0318 13:12:28.206808    2404 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0318 13:12:28.206808    2404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-894400-m02 ).state
	I0318 13:12:30.253637    2404 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 13:12:30.253773    2404 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:12:30.253911    2404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-894400-m02 ).networkadapters[0]).ipaddresses[0]
	I0318 13:12:32.698451    2404 main.go:141] libmachine: [stdout =====>] : 172.30.130.185
	
	I0318 13:12:32.699512    2404 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:12:32.699680    2404 sshutil.go:53] new ssh client: &{IP:172.30.130.185 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\multinode-894400-m02\id_rsa Username:docker}
	I0318 13:12:32.797860    2404 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.5909651s)
	I0318 13:12:32.808275    2404 ssh_runner.go:195] Run: cat /etc/os-release
	I0318 13:12:32.815517    2404 command_runner.go:130] > NAME=Buildroot
	I0318 13:12:32.815517    2404 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0318 13:12:32.815517    2404 command_runner.go:130] > ID=buildroot
	I0318 13:12:32.815517    2404 command_runner.go:130] > VERSION_ID=2023.02.9
	I0318 13:12:32.815517    2404 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0318 13:12:32.815517    2404 info.go:137] Remote host: Buildroot 2023.02.9
	I0318 13:12:32.815517    2404 filesync.go:126] Scanning C:\Users\jenkins.minikube3\minikube-integration\.minikube\addons for local assets ...
	I0318 13:12:32.815517    2404 filesync.go:126] Scanning C:\Users\jenkins.minikube3\minikube-integration\.minikube\files for local assets ...
	I0318 13:12:32.817088    2404 filesync.go:149] local asset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\134242.pem -> 134242.pem in /etc/ssl/certs
	I0318 13:12:32.817161    2404 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\134242.pem -> /etc/ssl/certs/134242.pem
	I0318 13:12:32.829332    2404 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0318 13:12:32.846367    2404 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\134242.pem --> /etc/ssl/certs/134242.pem (1708 bytes)
	I0318 13:12:32.888554    2404 start.go:296] duration metric: took 4.6969725s for postStartSetup
	I0318 13:12:32.888554    2404 fix.go:56] duration metric: took 1m24.9498895s for fixHost
	I0318 13:12:32.888554    2404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-894400-m02 ).state
	I0318 13:12:34.957144    2404 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 13:12:34.957338    2404 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:12:34.957338    2404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-894400-m02 ).networkadapters[0]).ipaddresses[0]
	I0318 13:12:37.357640    2404 main.go:141] libmachine: [stdout =====>] : 172.30.130.185
	
	I0318 13:12:37.357640    2404 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:12:37.362966    2404 main.go:141] libmachine: Using SSH client type: native
	I0318 13:12:37.363611    2404 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa89f80] 0xa8cb60 <nil>  [] 0s} 172.30.130.185 22 <nil> <nil>}
	I0318 13:12:37.363611    2404 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0318 13:12:37.491789    2404 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710767557.485432172
	
	I0318 13:12:37.491863    2404 fix.go:216] guest clock: 1710767557.485432172
	I0318 13:12:37.491863    2404 fix.go:229] Guest: 2024-03-18 13:12:37.485432172 +0000 UTC Remote: 2024-03-18 13:12:32.8885546 +0000 UTC m=+287.993265201 (delta=4.596877572s)
	I0318 13:12:37.491954    2404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-894400-m02 ).state
	I0318 13:12:39.563095    2404 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 13:12:39.563908    2404 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:12:39.564112    2404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-894400-m02 ).networkadapters[0]).ipaddresses[0]
	I0318 13:12:41.983974    2404 main.go:141] libmachine: [stdout =====>] : 172.30.130.185
	
	I0318 13:12:41.983974    2404 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:12:41.990077    2404 main.go:141] libmachine: Using SSH client type: native
	I0318 13:12:41.990077    2404 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa89f80] 0xa8cb60 <nil>  [] 0s} 172.30.130.185 22 <nil> <nil>}
	I0318 13:12:41.990664    2404 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1710767557
	I0318 13:12:42.127988    2404 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Mar 18 13:12:37 UTC 2024
	
	I0318 13:12:42.128107    2404 fix.go:236] clock set: Mon Mar 18 13:12:37 UTC 2024
	 (err=<nil>)
	I0318 13:12:42.128107    2404 start.go:83] releasing machines lock for "multinode-894400-m02", held for 1m34.1895247s
	I0318 13:12:42.128359    2404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-894400-m02 ).state
	I0318 13:12:44.139865    2404 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 13:12:44.139865    2404 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:12:44.140535    2404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-894400-m02 ).networkadapters[0]).ipaddresses[0]
	I0318 13:12:46.529473    2404 main.go:141] libmachine: [stdout =====>] : 172.30.130.185
	
	I0318 13:12:46.529721    2404 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:12:46.532836    2404 out.go:177] * Found network options:
	I0318 13:12:46.535816    2404 out.go:177]   - NO_PROXY=172.30.130.156
	W0318 13:12:46.538163    2404 proxy.go:119] fail to check proxy env: Error ip not in block
	I0318 13:12:46.540959    2404 out.go:177]   - NO_PROXY=172.30.130.156
	W0318 13:12:46.543625    2404 proxy.go:119] fail to check proxy env: Error ip not in block
	W0318 13:12:46.544879    2404 proxy.go:119] fail to check proxy env: Error ip not in block
	I0318 13:12:46.547270    2404 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0318 13:12:46.547270    2404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-894400-m02 ).state
	I0318 13:12:46.557261    2404 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0318 13:12:46.557261    2404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-894400-m02 ).state
	I0318 13:12:48.658111    2404 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 13:12:48.658111    2404 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:12:48.658111    2404 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 13:12:48.658111    2404 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:12:48.658111    2404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-894400-m02 ).networkadapters[0]).ipaddresses[0]
	I0318 13:12:48.658111    2404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-894400-m02 ).networkadapters[0]).ipaddresses[0]
	I0318 13:12:51.199167    2404 main.go:141] libmachine: [stdout =====>] : 172.30.130.185
	
	I0318 13:12:51.199928    2404 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:12:51.200019    2404 sshutil.go:53] new ssh client: &{IP:172.30.130.185 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\multinode-894400-m02\id_rsa Username:docker}
	I0318 13:12:51.218781    2404 main.go:141] libmachine: [stdout =====>] : 172.30.130.185
	
	I0318 13:12:51.218819    2404 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:12:51.218819    2404 sshutil.go:53] new ssh client: &{IP:172.30.130.185 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\multinode-894400-m02\id_rsa Username:docker}
	I0318 13:12:51.362037    2404 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0318 13:12:51.362037    2404 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.8147309s)
	I0318 13:12:51.362037    2404 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	I0318 13:12:51.362037    2404 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (4.8047402s)
	W0318 13:12:51.362037    2404 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0318 13:12:51.374074    2404 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0318 13:12:51.400152    2404 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0318 13:12:51.400548    2404 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0318 13:12:51.400548    2404 start.go:494] detecting cgroup driver to use...
	I0318 13:12:51.400802    2404 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0318 13:12:51.433233    2404 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0318 13:12:51.444222    2404 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0318 13:12:51.474714    2404 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0318 13:12:51.493971    2404 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0318 13:12:51.505462    2404 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0318 13:12:51.535156    2404 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0318 13:12:51.564076    2404 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0318 13:12:51.593370    2404 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0318 13:12:51.625124    2404 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0318 13:12:51.656333    2404 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0318 13:12:51.686821    2404 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0318 13:12:51.703903    2404 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0318 13:12:51.715363    2404 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0318 13:12:51.744976    2404 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 13:12:51.925119    2404 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0318 13:12:51.954154    2404 start.go:494] detecting cgroup driver to use...
	I0318 13:12:51.965104    2404 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0318 13:12:51.988950    2404 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0318 13:12:51.988950    2404 command_runner.go:130] > [Unit]
	I0318 13:12:51.988950    2404 command_runner.go:130] > Description=Docker Application Container Engine
	I0318 13:12:51.988950    2404 command_runner.go:130] > Documentation=https://docs.docker.com
	I0318 13:12:51.988950    2404 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0318 13:12:51.988950    2404 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0318 13:12:51.988950    2404 command_runner.go:130] > StartLimitBurst=3
	I0318 13:12:51.988950    2404 command_runner.go:130] > StartLimitIntervalSec=60
	I0318 13:12:51.988950    2404 command_runner.go:130] > [Service]
	I0318 13:12:51.988950    2404 command_runner.go:130] > Type=notify
	I0318 13:12:51.988950    2404 command_runner.go:130] > Restart=on-failure
	I0318 13:12:51.988950    2404 command_runner.go:130] > Environment=NO_PROXY=172.30.130.156
	I0318 13:12:51.988950    2404 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0318 13:12:51.988950    2404 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0318 13:12:51.988950    2404 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0318 13:12:51.988950    2404 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0318 13:12:51.988950    2404 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0318 13:12:51.988950    2404 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0318 13:12:51.988950    2404 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0318 13:12:51.988950    2404 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0318 13:12:51.988950    2404 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0318 13:12:51.988950    2404 command_runner.go:130] > ExecStart=
	I0318 13:12:51.988950    2404 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0318 13:12:51.988950    2404 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0318 13:12:51.988950    2404 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0318 13:12:51.988950    2404 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0318 13:12:51.988950    2404 command_runner.go:130] > LimitNOFILE=infinity
	I0318 13:12:51.988950    2404 command_runner.go:130] > LimitNPROC=infinity
	I0318 13:12:51.989810    2404 command_runner.go:130] > LimitCORE=infinity
	I0318 13:12:51.989810    2404 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0318 13:12:51.989810    2404 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0318 13:12:51.989810    2404 command_runner.go:130] > TasksMax=infinity
	I0318 13:12:51.989810    2404 command_runner.go:130] > TimeoutStartSec=0
	I0318 13:12:51.989810    2404 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0318 13:12:51.989810    2404 command_runner.go:130] > Delegate=yes
	I0318 13:12:51.989810    2404 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0318 13:12:51.989810    2404 command_runner.go:130] > KillMode=process
	I0318 13:12:51.989810    2404 command_runner.go:130] > [Install]
	I0318 13:12:51.989810    2404 command_runner.go:130] > WantedBy=multi-user.target
	I0318 13:12:52.004160    2404 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0318 13:12:52.038451    2404 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0318 13:12:52.078410    2404 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0318 13:12:52.117147    2404 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0318 13:12:52.155960    2404 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0318 13:12:52.224473    2404 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0318 13:12:52.246925    2404 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0318 13:12:52.280923    2404 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0318 13:12:52.296152    2404 ssh_runner.go:195] Run: which cri-dockerd
	I0318 13:12:52.301950    2404 command_runner.go:130] > /usr/bin/cri-dockerd
	I0318 13:12:52.316847    2404 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0318 13:12:52.336256    2404 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0318 13:12:52.378754    2404 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0318 13:12:52.577690    2404 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0318 13:12:52.751724    2404 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0318 13:12:52.751883    2404 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0318 13:12:52.796897    2404 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 13:12:52.993289    2404 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0318 13:12:55.546613    2404 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5527863s)
	I0318 13:12:55.557642    2404 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0318 13:12:55.596056    2404 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0318 13:12:55.627123    2404 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0318 13:12:55.814803    2404 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0318 13:12:56.000980    2404 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 13:12:56.175890    2404 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0318 13:12:56.214338    2404 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0318 13:12:56.251700    2404 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 13:12:56.427833    2404 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0318 13:12:56.523350    2404 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0318 13:12:56.534195    2404 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0318 13:12:56.542185    2404 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0318 13:12:56.542185    2404 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0318 13:12:56.542185    2404 command_runner.go:130] > Device: 0,22	Inode: 846         Links: 1
	I0318 13:12:56.542185    2404 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0318 13:12:56.542185    2404 command_runner.go:130] > Access: 2024-03-18 13:12:56.431716269 +0000
	I0318 13:12:56.542185    2404 command_runner.go:130] > Modify: 2024-03-18 13:12:56.431716269 +0000
	I0318 13:12:56.542185    2404 command_runner.go:130] > Change: 2024-03-18 13:12:56.435716249 +0000
	I0318 13:12:56.542185    2404 command_runner.go:130] >  Birth: -
	I0318 13:12:56.542185    2404 start.go:562] Will wait 60s for crictl version
	I0318 13:12:56.552172    2404 ssh_runner.go:195] Run: which crictl
	I0318 13:12:56.557810    2404 command_runner.go:130] > /usr/bin/crictl
	I0318 13:12:56.569418    2404 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0318 13:12:56.636628    2404 command_runner.go:130] > Version:  0.1.0
	I0318 13:12:56.636628    2404 command_runner.go:130] > RuntimeName:  docker
	I0318 13:12:56.636628    2404 command_runner.go:130] > RuntimeVersion:  25.0.4
	I0318 13:12:56.636628    2404 command_runner.go:130] > RuntimeApiVersion:  v1
	I0318 13:12:56.636628    2404 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  25.0.4
	RuntimeApiVersion:  v1
	I0318 13:12:56.646742    2404 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0318 13:12:56.677837    2404 command_runner.go:130] > 25.0.4
	I0318 13:12:56.685878    2404 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0318 13:12:56.714384    2404 command_runner.go:130] > 25.0.4
	I0318 13:12:56.719399    2404 out.go:204] * Preparing Kubernetes v1.28.4 on Docker 25.0.4 ...
	I0318 13:12:56.721414    2404 out.go:177]   - env NO_PROXY=172.30.130.156
	I0318 13:12:56.724427    2404 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0318 13:12:56.728385    2404 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0318 13:12:56.728385    2404 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0318 13:12:56.728385    2404 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0318 13:12:56.728385    2404 ip.go:207] Found interface: {Index:9 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:93:df:b5 Flags:up|broadcast|multicast|running}
	I0318 13:12:56.731375    2404 ip.go:210] interface addr: fe80::572b:6b1d:9130:e88b/64
	I0318 13:12:56.731375    2404 ip.go:210] interface addr: 172.30.128.1/20
	I0318 13:12:56.741373    2404 ssh_runner.go:195] Run: grep 172.30.128.1	host.minikube.internal$ /etc/hosts
	I0318 13:12:56.747824    2404 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.30.128.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 13:12:56.767982    2404 mustload.go:65] Loading cluster: multinode-894400
	I0318 13:12:56.768521    2404 config.go:182] Loaded profile config "multinode-894400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 13:12:56.769261    2404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-894400 ).state
	I0318 13:12:58.828481    2404 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 13:12:58.829214    2404 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:12:58.829214    2404 host.go:66] Checking if "multinode-894400" exists ...
	I0318 13:12:58.829874    2404 certs.go:68] Setting up C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-894400 for IP: 172.30.130.185
	I0318 13:12:58.829874    2404 certs.go:194] generating shared ca certs ...
	I0318 13:12:58.829874    2404 certs.go:226] acquiring lock for ca certs: {Name:mk09ff4ada22228900e1815c250154c7d8d76854 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 13:12:58.830607    2404 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.key
	I0318 13:12:58.830607    2404 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.key
	I0318 13:12:58.831177    2404 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0318 13:12:58.831484    2404 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0318 13:12:58.831484    2404 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0318 13:12:58.831484    2404 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0318 13:12:58.832235    2404 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\13424.pem (1338 bytes)
	W0318 13:12:58.832235    2404 certs.go:480] ignoring C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\13424_empty.pem, impossibly tiny 0 bytes
	I0318 13:12:58.832235    2404 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0318 13:12:58.832966    2404 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0318 13:12:58.833273    2404 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0318 13:12:58.833573    2404 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0318 13:12:58.834032    2404 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\134242.pem (1708 bytes)
	I0318 13:12:58.834310    2404 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0318 13:12:58.834474    2404 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\13424.pem -> /usr/share/ca-certificates/13424.pem
	I0318 13:12:58.834674    2404 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\134242.pem -> /usr/share/ca-certificates/134242.pem
	I0318 13:12:58.834929    2404 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0318 13:12:58.880571    2404 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0318 13:12:58.923341    2404 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0318 13:12:58.964738    2404 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0318 13:12:59.007898    2404 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0318 13:12:59.049852    2404 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\13424.pem --> /usr/share/ca-certificates/13424.pem (1338 bytes)
	I0318 13:12:59.094314    2404 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\134242.pem --> /usr/share/ca-certificates/134242.pem (1708 bytes)
	I0318 13:12:59.150670    2404 ssh_runner.go:195] Run: openssl version
	I0318 13:12:59.159202    2404 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0318 13:12:59.170544    2404 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0318 13:12:59.206253    2404 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0318 13:12:59.213194    2404 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Mar 18 11:07 /usr/share/ca-certificates/minikubeCA.pem
	I0318 13:12:59.213194    2404 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 18 11:07 /usr/share/ca-certificates/minikubeCA.pem
	I0318 13:12:59.224821    2404 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0318 13:12:59.232869    2404 command_runner.go:130] > b5213941
	I0318 13:12:59.245647    2404 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0318 13:12:59.274388    2404 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13424.pem && ln -fs /usr/share/ca-certificates/13424.pem /etc/ssl/certs/13424.pem"
	I0318 13:12:59.303443    2404 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13424.pem
	I0318 13:12:59.310133    2404 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Mar 18 11:25 /usr/share/ca-certificates/13424.pem
	I0318 13:12:59.310225    2404 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 18 11:25 /usr/share/ca-certificates/13424.pem
	I0318 13:12:59.320396    2404 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13424.pem
	I0318 13:12:59.328062    2404 command_runner.go:130] > 51391683
	I0318 13:12:59.340345    2404 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13424.pem /etc/ssl/certs/51391683.0"
	I0318 13:12:59.371208    2404 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/134242.pem && ln -fs /usr/share/ca-certificates/134242.pem /etc/ssl/certs/134242.pem"
	I0318 13:12:59.406284    2404 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/134242.pem
	I0318 13:12:59.411851    2404 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Mar 18 11:25 /usr/share/ca-certificates/134242.pem
	I0318 13:12:59.412491    2404 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 18 11:25 /usr/share/ca-certificates/134242.pem
	I0318 13:12:59.423279    2404 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/134242.pem
	I0318 13:12:59.431301    2404 command_runner.go:130] > 3ec20f2e
	I0318 13:12:59.442654    2404 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/134242.pem /etc/ssl/certs/3ec20f2e.0"
	I0318 13:12:59.473181    2404 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0318 13:12:59.478195    2404 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0318 13:12:59.479073    2404 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0318 13:12:59.479073    2404 kubeadm.go:928] updating node {m02 172.30.130.185 8443 v1.28.4 docker false true} ...
	I0318 13:12:59.479615    2404 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-894400-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.30.130.185
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-894400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0318 13:12:59.490457    2404 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0318 13:12:59.506642    2404 command_runner.go:130] > kubeadm
	I0318 13:12:59.506642    2404 command_runner.go:130] > kubectl
	I0318 13:12:59.506642    2404 command_runner.go:130] > kubelet
	I0318 13:12:59.506741    2404 binaries.go:44] Found k8s binaries, skipping transfer
	I0318 13:12:59.518054    2404 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0318 13:12:59.534962    2404 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I0318 13:12:59.565535    2404 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0318 13:12:59.604548    2404 ssh_runner.go:195] Run: grep 172.30.130.156	control-plane.minikube.internal$ /etc/hosts
	I0318 13:12:59.610622    2404 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.30.130.156	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 13:12:59.639384    2404 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 13:12:59.826223    2404 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 13:12:59.855586    2404 host.go:66] Checking if "multinode-894400" exists ...
	I0318 13:12:59.856348    2404 start.go:316] joinCluster: &{Name:multinode-894400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.
4 ClusterName:multinode-894400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.30.130.156 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.30.130.185 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.30.137.140 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-d
ns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror:
DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 13:12:59.856574    2404 start.go:329] removing existing worker node "m02" before attempting to rejoin cluster: &{Name:m02 IP:172.30.130.185 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0318 13:12:59.856574    2404 host.go:66] Checking if "multinode-894400-m02" exists ...
	I0318 13:12:59.857189    2404 mustload.go:65] Loading cluster: multinode-894400
	I0318 13:12:59.857744    2404 config.go:182] Loaded profile config "multinode-894400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 13:12:59.858090    2404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-894400 ).state
	I0318 13:13:01.946469    2404 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 13:13:01.946580    2404 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:13:01.946580    2404 host.go:66] Checking if "multinode-894400" exists ...
	I0318 13:13:01.947273    2404 api_server.go:166] Checking apiserver status ...
	I0318 13:13:01.958213    2404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:13:01.958213    2404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-894400 ).state
	I0318 13:13:04.029455    2404 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 13:13:04.029815    2404 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:13:04.029815    2404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-894400 ).networkadapters[0]).ipaddresses[0]
	I0318 13:13:06.500412    2404 main.go:141] libmachine: [stdout =====>] : 172.30.130.156
	
	I0318 13:13:06.500412    2404 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:13:06.500412    2404 sshutil.go:53] new ssh client: &{IP:172.30.130.156 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\multinode-894400\id_rsa Username:docker}
	I0318 13:13:06.612960    2404 command_runner.go:130] > 1904
	I0318 13:13:06.613322    2404 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (4.6550744s)
	I0318 13:13:06.624008    2404 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1904/cgroup
	W0318 13:13:06.640979    2404 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1904/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0318 13:13:06.652797    2404 ssh_runner.go:195] Run: ls
	I0318 13:13:06.659353    2404 api_server.go:253] Checking apiserver healthz at https://172.30.130.156:8443/healthz ...
	I0318 13:13:06.668845    2404 api_server.go:279] https://172.30.130.156:8443/healthz returned 200:
	ok
	I0318 13:13:06.679968    2404 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl drain multinode-894400-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data
	I0318 13:13:06.804006    2404 command_runner.go:130] ! Warning: ignoring DaemonSet-managed Pods: kube-system/kindnet-k5lpg, kube-system/kube-proxy-8bdmn
	I0318 13:13:09.844606    2404 command_runner.go:130] > node/multinode-894400-m02 cordoned
	I0318 13:13:09.844687    2404 command_runner.go:130] > pod "busybox-5b5d89c9d6-8btgf" has DeletionTimestamp older than 1 seconds, skipping
	I0318 13:13:09.844687    2404 command_runner.go:130] > node/multinode-894400-m02 drained
	I0318 13:13:09.844687    2404 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl drain multinode-894400-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data: (3.1646965s)
	I0318 13:13:09.844687    2404 node.go:128] successfully drained node "multinode-894400-m02"
	I0318 13:13:09.844687    2404 ssh_runner.go:195] Run: /bin/bash -c "KUBECONFIG=/var/lib/minikube/kubeconfig sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --force --ignore-preflight-errors=all --cri-socket=unix:///var/run/cri-dockerd.sock"
	I0318 13:13:09.844687    2404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-894400-m02 ).state
	I0318 13:13:11.932917    2404 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 13:13:11.933133    2404 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:13:11.933309    2404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-894400-m02 ).networkadapters[0]).ipaddresses[0]
	I0318 13:13:14.426217    2404 main.go:141] libmachine: [stdout =====>] : 172.30.130.185
	
	I0318 13:13:14.426931    2404 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:13:14.427222    2404 sshutil.go:53] new ssh client: &{IP:172.30.130.185 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\multinode-894400-m02\id_rsa Username:docker}
	I0318 13:13:14.828886    2404 command_runner.go:130] ! W0318 13:13:14.810383    1551 removeetcdmember.go:106] [reset] No kubeadm config, using etcd pod spec to get data directory
	I0318 13:13:15.423703    2404 command_runner.go:130] ! W0318 13:13:15.404141    1551 cleanupnode.go:99] [reset] Failed to remove containers: failed to stop running pod d31120bfd5cc1a38da24c03574ff5be355cc2afa037bec7fa98bc10c7a2fdb1f: output: E0318 13:13:15.079506    1622 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod \"busybox-5b5d89c9d6-8btgf_default\" network: cni config uninitialized" podSandboxID="d31120bfd5cc1a38da24c03574ff5be355cc2afa037bec7fa98bc10c7a2fdb1f"
	I0318 13:13:15.423703    2404 command_runner.go:130] ! time="2024-03-18T13:13:15Z" level=fatal msg="stopping the pod sandbox \"d31120bfd5cc1a38da24c03574ff5be355cc2afa037bec7fa98bc10c7a2fdb1f\": rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod \"busybox-5b5d89c9d6-8btgf_default\" network: cni config uninitialized"
	I0318 13:13:15.423703    2404 command_runner.go:130] ! : exit status 1
	I0318 13:13:15.446299    2404 command_runner.go:130] > [preflight] Running pre-flight checks
	I0318 13:13:15.446299    2404 command_runner.go:130] > [reset] Deleted contents of the etcd data directory: /var/lib/etcd
	I0318 13:13:15.446299    2404 command_runner.go:130] > [reset] Stopping the kubelet service
	I0318 13:13:15.446299    2404 command_runner.go:130] > [reset] Unmounting mounted directories in "/var/lib/kubelet"
	I0318 13:13:15.446299    2404 command_runner.go:130] > [reset] Deleting contents of directories: [/etc/kubernetes/manifests /var/lib/kubelet /etc/kubernetes/pki]
	I0318 13:13:15.446299    2404 command_runner.go:130] > [reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
	I0318 13:13:15.446299    2404 command_runner.go:130] > The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d
	I0318 13:13:15.446299    2404 command_runner.go:130] > The reset process does not reset or clean up iptables rules or IPVS tables.
	I0318 13:13:15.446299    2404 command_runner.go:130] > If you wish to reset iptables, you must do so manually by using the "iptables" command.
	I0318 13:13:15.446299    2404 command_runner.go:130] > If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
	I0318 13:13:15.446299    2404 command_runner.go:130] > to reset your system's IPVS tables.
	I0318 13:13:15.446299    2404 command_runner.go:130] > The reset process does not clean your kubeconfig files and you must remove them manually.
	I0318 13:13:15.446299    2404 command_runner.go:130] > Please, check the contents of the $HOME/.kube/config file.
	I0318 13:13:15.446299    2404 ssh_runner.go:235] Completed: /bin/bash -c "KUBECONFIG=/var/lib/minikube/kubeconfig sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --force --ignore-preflight-errors=all --cri-socket=unix:///var/run/cri-dockerd.sock": (5.6015704s)
	I0318 13:13:15.446299    2404 node.go:155] successfully reset node "multinode-894400-m02"
	I0318 13:13:15.447860    2404 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	I0318 13:13:15.448598    2404 kapi.go:59] client config for multinode-894400: &rest.Config{Host:"https://172.30.130.156:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\multinode-894400\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\multinode-894400\\client.key", CAFile:"C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1e8b2e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0318 13:13:15.449982    2404 cert_rotation.go:137] Starting client certificate rotation controller
	I0318 13:13:15.449982    2404 request.go:1212] Request Body: {"kind":"DeleteOptions","apiVersion":"v1"}
	I0318 13:13:15.449982    2404 round_trippers.go:463] DELETE https://172.30.130.156:8443/api/v1/nodes/multinode-894400-m02
	I0318 13:13:15.449982    2404 round_trippers.go:469] Request Headers:
	I0318 13:13:15.449982    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:15.449982    2404 round_trippers.go:473]     Content-Type: application/json
	I0318 13:13:15.449982    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:13:15.466653    2404 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I0318 13:13:15.466653    2404 round_trippers.go:577] Response Headers:
	I0318 13:13:15.466653    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:13:15.466653    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:13:15.466653    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:13:15.466653    2404 round_trippers.go:580]     Content-Length: 171
	I0318 13:13:15.466653    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:13:15 GMT
	I0318 13:13:15.466653    2404 round_trippers.go:580]     Audit-Id: 0dfa8ab5-c803-46df-986b-ecc0de7665e3
	I0318 13:13:15.466653    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:13:15.466653    2404 request.go:1212] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"multinode-894400-m02","kind":"nodes","uid":"714c6de4-85c9-4721-aea6-ad8f2be4e14c"}}
	I0318 13:13:15.467476    2404 node.go:180] successfully deleted node "multinode-894400-m02"
	I0318 13:13:15.467476    2404 start.go:333] successfully removed existing worker node "m02" from cluster: &{Name:m02 IP:172.30.130.185 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0318 13:13:15.467476    2404 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0318 13:13:15.467732    2404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-894400 ).state
	I0318 13:13:17.515460    2404 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 13:13:17.516450    2404 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:13:17.516564    2404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-894400 ).networkadapters[0]).ipaddresses[0]
	I0318 13:13:19.936774    2404 main.go:141] libmachine: [stdout =====>] : 172.30.130.156
	
	I0318 13:13:19.936774    2404 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:13:19.937861    2404 sshutil.go:53] new ssh client: &{IP:172.30.130.156 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\multinode-894400\id_rsa Username:docker}
	I0318 13:13:20.132348    2404 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token h3emo0.3od1rtlfoqng84m0 --discovery-token-ca-cert-hash sha256:0e85bffcabbe1ecc87b47edfa4897ded335ea6d7a4ea5ae94c3523fb207c8760 
	I0318 13:13:20.132392    2404 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0": (4.6647415s)
	I0318 13:13:20.132392    2404 start.go:342] trying to join worker node "m02" to cluster: &{Name:m02 IP:172.30.130.185 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0318 13:13:20.132392    2404 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token h3emo0.3od1rtlfoqng84m0 --discovery-token-ca-cert-hash sha256:0e85bffcabbe1ecc87b47edfa4897ded335ea6d7a4ea5ae94c3523fb207c8760 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=multinode-894400-m02"
	I0318 13:13:20.374892    2404 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0318 13:13:23.193670    2404 command_runner.go:130] > [preflight] Running pre-flight checks
	I0318 13:13:23.194487    2404 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0318 13:13:23.194537    2404 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0318 13:13:23.194537    2404 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0318 13:13:23.194537    2404 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0318 13:13:23.194537    2404 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0318 13:13:23.194613    2404 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I0318 13:13:23.194613    2404 command_runner.go:130] > This node has joined the cluster:
	I0318 13:13:23.194613    2404 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0318 13:13:23.194613    2404 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0318 13:13:23.194613    2404 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0318 13:13:23.194697    2404 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token h3emo0.3od1rtlfoqng84m0 --discovery-token-ca-cert-hash sha256:0e85bffcabbe1ecc87b47edfa4897ded335ea6d7a4ea5ae94c3523fb207c8760 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=multinode-894400-m02": (3.0622822s)
	I0318 13:13:23.194733    2404 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0318 13:13:23.415982    2404 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
	I0318 13:13:23.631094    2404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-894400-m02 minikube.k8s.io/updated_at=2024_03_18T13_13_23_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=16bdcbec856cf730004e5bed78d1b7625f13388a minikube.k8s.io/name=multinode-894400 minikube.k8s.io/primary=false
	I0318 13:13:23.787652    2404 command_runner.go:130] > node/multinode-894400-m02 labeled
	I0318 13:13:23.790754    2404 start.go:318] duration metric: took 23.9342309s to joinCluster
	I0318 13:13:23.790754    2404 start.go:234] Will wait 6m0s for node &{Name:m02 IP:172.30.130.185 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0318 13:13:23.796901    2404 out.go:177] * Verifying Kubernetes components...
	I0318 13:13:23.791729    2404 config.go:182] Loaded profile config "multinode-894400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 13:13:23.811059    2404 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 13:13:24.076966    2404 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 13:13:24.106074    2404 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	I0318 13:13:24.107006    2404 kapi.go:59] client config for multinode-894400: &rest.Config{Host:"https://172.30.130.156:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\multinode-894400\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\multinode-894400\\client.key", CAFile:"C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1e8b2e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0318 13:13:24.107807    2404 node_ready.go:35] waiting up to 6m0s for node "multinode-894400-m02" to be "Ready" ...
	I0318 13:13:24.107985    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400-m02
	I0318 13:13:24.107985    2404 round_trippers.go:469] Request Headers:
	I0318 13:13:24.107985    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:24.107985    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:13:24.111791    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:13:24.111917    2404 round_trippers.go:577] Response Headers:
	I0318 13:13:24.111917    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:13:24.111917    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:13:24.111917    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:13:24.111917    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:13:24 GMT
	I0318 13:13:24.111917    2404 round_trippers.go:580]     Audit-Id: bad236c7-32fe-4927-8933-37c6bd2bb3ea
	I0318 13:13:24.111917    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:13:24.112182    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"1487daaa-7c0a-4f25-84c8-3e409d8d04b7","resourceVersion":"2075","creationTimestamp":"2024-03-18T13:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T13_13_23_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T13:13:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 3689 chars]
	I0318 13:13:24.609602    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400-m02
	I0318 13:13:24.609602    2404 round_trippers.go:469] Request Headers:
	I0318 13:13:24.609774    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:24.609774    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:13:24.613323    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:13:24.614345    2404 round_trippers.go:577] Response Headers:
	I0318 13:13:24.614345    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:13:24.614406    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:13:24.614406    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:13:24.614455    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:13:24 GMT
	I0318 13:13:24.614455    2404 round_trippers.go:580]     Audit-Id: c2a8c207-329b-4fa3-b15c-c9cb136f3048
	I0318 13:13:24.614455    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:13:24.614839    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"1487daaa-7c0a-4f25-84c8-3e409d8d04b7","resourceVersion":"2075","creationTimestamp":"2024-03-18T13:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T13_13_23_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T13:13:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 3689 chars]
	I0318 13:13:25.122131    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400-m02
	I0318 13:13:25.122131    2404 round_trippers.go:469] Request Headers:
	I0318 13:13:25.122131    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:13:25.122131    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:25.125348    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:13:25.126523    2404 round_trippers.go:577] Response Headers:
	I0318 13:13:25.126523    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:13:25 GMT
	I0318 13:13:25.126523    2404 round_trippers.go:580]     Audit-Id: 3117168d-ce41-4647-8887-064e01f164f7
	I0318 13:13:25.126523    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:13:25.126523    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:13:25.126641    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:13:25.126641    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:13:25.126764    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"1487daaa-7c0a-4f25-84c8-3e409d8d04b7","resourceVersion":"2075","creationTimestamp":"2024-03-18T13:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T13_13_23_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T13:13:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 3689 chars]
	I0318 13:13:25.621917    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400-m02
	I0318 13:13:25.621917    2404 round_trippers.go:469] Request Headers:
	I0318 13:13:25.621917    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:25.621917    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:13:25.625501    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:13:25.625501    2404 round_trippers.go:577] Response Headers:
	I0318 13:13:25.625501    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:13:25.625501    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:13:25 GMT
	I0318 13:13:25.625501    2404 round_trippers.go:580]     Audit-Id: 0601b344-e90b-4873-b61f-8357cbe3824b
	I0318 13:13:25.625501    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:13:25.625501    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:13:25.625501    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:13:25.626456    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"1487daaa-7c0a-4f25-84c8-3e409d8d04b7","resourceVersion":"2075","creationTimestamp":"2024-03-18T13:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T13_13_23_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T13:13:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 3689 chars]
	I0318 13:13:26.122925    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400-m02
	I0318 13:13:26.122925    2404 round_trippers.go:469] Request Headers:
	I0318 13:13:26.122925    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:26.122925    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:13:26.127852    2404 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:13:26.127852    2404 round_trippers.go:577] Response Headers:
	I0318 13:13:26.127852    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:13:26.127923    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:13:26.127923    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:13:26 GMT
	I0318 13:13:26.127923    2404 round_trippers.go:580]     Audit-Id: 44074f95-811a-4e0c-8124-ef0c0f696af2
	I0318 13:13:26.127923    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:13:26.127923    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:13:26.127923    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"1487daaa-7c0a-4f25-84c8-3e409d8d04b7","resourceVersion":"2075","creationTimestamp":"2024-03-18T13:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T13_13_23_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T13:13:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 3689 chars]
	I0318 13:13:26.127923    2404 node_ready.go:53] node "multinode-894400-m02" has status "Ready":"False"
	I0318 13:13:26.610764    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400-m02
	I0318 13:13:26.610764    2404 round_trippers.go:469] Request Headers:
	I0318 13:13:26.610764    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:26.610764    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:13:26.617844    2404 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0318 13:13:26.617844    2404 round_trippers.go:577] Response Headers:
	I0318 13:13:26.617844    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:13:26.617844    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:13:26 GMT
	I0318 13:13:26.617844    2404 round_trippers.go:580]     Audit-Id: c2d68481-9ab5-4b4a-9c2a-11d1dff8736d
	I0318 13:13:26.617844    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:13:26.617844    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:13:26.617844    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:13:26.617844    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"1487daaa-7c0a-4f25-84c8-3e409d8d04b7","resourceVersion":"2075","creationTimestamp":"2024-03-18T13:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T13_13_23_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T13:13:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 3689 chars]
	I0318 13:13:27.109206    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400-m02
	I0318 13:13:27.109206    2404 round_trippers.go:469] Request Headers:
	I0318 13:13:27.109206    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:27.109206    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:13:27.112801    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:13:27.112801    2404 round_trippers.go:577] Response Headers:
	I0318 13:13:27.112801    2404 round_trippers.go:580]     Audit-Id: 5ae9fe3b-71ae-4a61-9303-eaa1d2a8a680
	I0318 13:13:27.113012    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:13:27.113012    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:13:27.113012    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:13:27.113012    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:13:27.113012    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:13:27 GMT
	I0318 13:13:27.113264    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"1487daaa-7c0a-4f25-84c8-3e409d8d04b7","resourceVersion":"2091","creationTimestamp":"2024-03-18T13:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T13_13_23_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T13:13:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 3798 chars]
	I0318 13:13:27.613475    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400-m02
	I0318 13:13:27.613785    2404 round_trippers.go:469] Request Headers:
	I0318 13:13:27.613785    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:27.613863    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:13:27.618291    2404 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:13:27.618291    2404 round_trippers.go:577] Response Headers:
	I0318 13:13:27.618291    2404 round_trippers.go:580]     Audit-Id: 70ba5047-e870-4a57-b71e-b01cb28d82bd
	I0318 13:13:27.618291    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:13:27.618291    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:13:27.618291    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:13:27.618291    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:13:27.618291    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:13:27 GMT
	I0318 13:13:27.618291    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"1487daaa-7c0a-4f25-84c8-3e409d8d04b7","resourceVersion":"2091","creationTimestamp":"2024-03-18T13:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T13_13_23_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T13:13:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 3798 chars]
	I0318 13:13:28.115677    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400-m02
	I0318 13:13:28.115811    2404 round_trippers.go:469] Request Headers:
	I0318 13:13:28.115811    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:28.115811    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:13:28.119547    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:13:28.120169    2404 round_trippers.go:577] Response Headers:
	I0318 13:13:28.120169    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:13:28.120169    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:13:28 GMT
	I0318 13:13:28.120169    2404 round_trippers.go:580]     Audit-Id: 2b446069-c50e-4406-95f0-3498f32767e0
	I0318 13:13:28.120169    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:13:28.120169    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:13:28.120169    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:13:28.120169    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"1487daaa-7c0a-4f25-84c8-3e409d8d04b7","resourceVersion":"2091","creationTimestamp":"2024-03-18T13:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T13_13_23_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T13:13:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 3798 chars]
	I0318 13:13:28.616315    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400-m02
	I0318 13:13:28.616315    2404 round_trippers.go:469] Request Headers:
	I0318 13:13:28.616315    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:28.616315    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:13:28.621060    2404 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:13:28.621060    2404 round_trippers.go:577] Response Headers:
	I0318 13:13:28.621060    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:13:28.621060    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:13:28 GMT
	I0318 13:13:28.621060    2404 round_trippers.go:580]     Audit-Id: fafb9b4a-fe09-4837-9a0f-d13954a01d59
	I0318 13:13:28.621060    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:13:28.621060    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:13:28.621060    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:13:28.621060    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"1487daaa-7c0a-4f25-84c8-3e409d8d04b7","resourceVersion":"2091","creationTimestamp":"2024-03-18T13:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T13_13_23_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T13:13:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 3798 chars]
	I0318 13:13:28.621835    2404 node_ready.go:53] node "multinode-894400-m02" has status "Ready":"False"
	I0318 13:13:29.113875    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400-m02
	I0318 13:13:29.113875    2404 round_trippers.go:469] Request Headers:
	I0318 13:13:29.113875    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:29.113875    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:13:29.117497    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:13:29.117497    2404 round_trippers.go:577] Response Headers:
	I0318 13:13:29.117497    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:13:29.117497    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:13:29.117497    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:13:29.117497    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:13:29 GMT
	I0318 13:13:29.117497    2404 round_trippers.go:580]     Audit-Id: f5de7786-47fe-4d19-b5dd-52c684437af5
	I0318 13:13:29.117969    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:13:29.118770    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"1487daaa-7c0a-4f25-84c8-3e409d8d04b7","resourceVersion":"2091","creationTimestamp":"2024-03-18T13:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T13_13_23_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T13:13:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 3798 chars]
	I0318 13:13:29.613909    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400-m02
	I0318 13:13:29.613909    2404 round_trippers.go:469] Request Headers:
	I0318 13:13:29.613909    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:29.613909    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:13:29.617534    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:13:29.617534    2404 round_trippers.go:577] Response Headers:
	I0318 13:13:29.617534    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:13:29.617534    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:13:29 GMT
	I0318 13:13:29.617534    2404 round_trippers.go:580]     Audit-Id: 9d413313-522f-47a1-b82f-34955fa29ee3
	I0318 13:13:29.617534    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:13:29.617534    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:13:29.617534    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:13:29.618868    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"1487daaa-7c0a-4f25-84c8-3e409d8d04b7","resourceVersion":"2091","creationTimestamp":"2024-03-18T13:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T13_13_23_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T13:13:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 3798 chars]
	I0318 13:13:30.114768    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400-m02
	I0318 13:13:30.114768    2404 round_trippers.go:469] Request Headers:
	I0318 13:13:30.114768    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:30.114768    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:13:30.118263    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:13:30.119024    2404 round_trippers.go:577] Response Headers:
	I0318 13:13:30.119024    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:13:30.119024    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:13:30 GMT
	I0318 13:13:30.119024    2404 round_trippers.go:580]     Audit-Id: 2482aebf-d42c-41fa-b462-8cc0ef08d122
	I0318 13:13:30.119024    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:13:30.119024    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:13:30.119024    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:13:30.119328    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"1487daaa-7c0a-4f25-84c8-3e409d8d04b7","resourceVersion":"2091","creationTimestamp":"2024-03-18T13:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T13_13_23_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T13:13:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 3798 chars]
	I0318 13:13:30.616342    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400-m02
	I0318 13:13:30.616569    2404 round_trippers.go:469] Request Headers:
	I0318 13:13:30.616569    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:30.616569    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:13:30.622663    2404 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0318 13:13:30.622663    2404 round_trippers.go:577] Response Headers:
	I0318 13:13:30.622663    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:13:30.622663    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:13:30.622663    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:13:30.622663    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:13:30 GMT
	I0318 13:13:30.622663    2404 round_trippers.go:580]     Audit-Id: 05d1fce0-5387-41d5-a701-89f5b95394f7
	I0318 13:13:30.622663    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:13:30.622663    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"1487daaa-7c0a-4f25-84c8-3e409d8d04b7","resourceVersion":"2091","creationTimestamp":"2024-03-18T13:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T13_13_23_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T13:13:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 3798 chars]
	I0318 13:13:30.623316    2404 node_ready.go:53] node "multinode-894400-m02" has status "Ready":"False"
	I0318 13:13:31.119648    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400-m02
	I0318 13:13:31.119710    2404 round_trippers.go:469] Request Headers:
	I0318 13:13:31.119710    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:31.119710    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:13:31.122529    2404 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 13:13:31.122529    2404 round_trippers.go:577] Response Headers:
	I0318 13:13:31.123469    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:13:31.123469    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:13:31.123509    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:13:31.123509    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:13:31.123509    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:13:31 GMT
	I0318 13:13:31.123509    2404 round_trippers.go:580]     Audit-Id: cb031b54-5c56-4d99-a5fd-4b6a1bc58270
	I0318 13:13:31.123649    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"1487daaa-7c0a-4f25-84c8-3e409d8d04b7","resourceVersion":"2091","creationTimestamp":"2024-03-18T13:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T13_13_23_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T13:13:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 3798 chars]
	I0318 13:13:31.619906    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400-m02
	I0318 13:13:31.619906    2404 round_trippers.go:469] Request Headers:
	I0318 13:13:31.619906    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:13:31.619906    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:31.623897    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:13:31.624267    2404 round_trippers.go:577] Response Headers:
	I0318 13:13:31.624331    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:13:31.624355    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:13:31.624355    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:13:31 GMT
	I0318 13:13:31.624355    2404 round_trippers.go:580]     Audit-Id: 67715e89-b45f-486a-9d1b-138cdef26ef1
	I0318 13:13:31.624355    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:13:31.624355    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:13:31.624511    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"1487daaa-7c0a-4f25-84c8-3e409d8d04b7","resourceVersion":"2091","creationTimestamp":"2024-03-18T13:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T13_13_23_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T13:13:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 3798 chars]
	I0318 13:13:32.121948    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400-m02
	I0318 13:13:32.121948    2404 round_trippers.go:469] Request Headers:
	I0318 13:13:32.121948    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:32.121948    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:13:32.127199    2404 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 13:13:32.127199    2404 round_trippers.go:577] Response Headers:
	I0318 13:13:32.127199    2404 round_trippers.go:580]     Audit-Id: 44222899-2f66-4a76-b163-47b47757dd66
	I0318 13:13:32.127199    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:13:32.127199    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:13:32.127199    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:13:32.127199    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:13:32.127199    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:13:32 GMT
	I0318 13:13:32.127743    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"1487daaa-7c0a-4f25-84c8-3e409d8d04b7","resourceVersion":"2091","creationTimestamp":"2024-03-18T13:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T13_13_23_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T13:13:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 3798 chars]
	I0318 13:13:32.622765    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400-m02
	I0318 13:13:32.622956    2404 round_trippers.go:469] Request Headers:
	I0318 13:13:32.622956    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:32.622956    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:13:32.627711    2404 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:13:32.627711    2404 round_trippers.go:577] Response Headers:
	I0318 13:13:32.627711    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:13:32.627711    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:13:32.627711    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:13:32.627711    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:13:32 GMT
	I0318 13:13:32.627711    2404 round_trippers.go:580]     Audit-Id: 514238c1-0e4a-4324-b80b-60e8e1760dfe
	I0318 13:13:32.627711    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:13:32.628400    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"1487daaa-7c0a-4f25-84c8-3e409d8d04b7","resourceVersion":"2091","creationTimestamp":"2024-03-18T13:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T13_13_23_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T13:13:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 3798 chars]
	I0318 13:13:32.628564    2404 node_ready.go:53] node "multinode-894400-m02" has status "Ready":"False"
	I0318 13:13:33.121422    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400-m02
	I0318 13:13:33.121479    2404 round_trippers.go:469] Request Headers:
	I0318 13:13:33.121479    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:13:33.121479    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:33.124951    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:13:33.125216    2404 round_trippers.go:577] Response Headers:
	I0318 13:13:33.125216    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:13:33.125216    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:13:33 GMT
	I0318 13:13:33.125216    2404 round_trippers.go:580]     Audit-Id: 718cd416-cf7d-4086-bf57-da12aaa07725
	I0318 13:13:33.125216    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:13:33.125216    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:13:33.125216    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:13:33.125216    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"1487daaa-7c0a-4f25-84c8-3e409d8d04b7","resourceVersion":"2091","creationTimestamp":"2024-03-18T13:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T13_13_23_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T13:13:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 3798 chars]
	I0318 13:13:33.621412    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400-m02
	I0318 13:13:33.621412    2404 round_trippers.go:469] Request Headers:
	I0318 13:13:33.621412    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:13:33.621412    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:33.625536    2404 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:13:33.625871    2404 round_trippers.go:577] Response Headers:
	I0318 13:13:33.625871    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:13:33.625871    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:13:33.625871    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:13:33.625871    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:13:33 GMT
	I0318 13:13:33.625871    2404 round_trippers.go:580]     Audit-Id: 616d2d35-a081-41af-8e98-11cf91e587d9
	I0318 13:13:33.625871    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:13:33.626140    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"1487daaa-7c0a-4f25-84c8-3e409d8d04b7","resourceVersion":"2100","creationTimestamp":"2024-03-18T13:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T13_13_23_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T13:13:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4067 chars]
	I0318 13:13:34.122880    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400-m02
	I0318 13:13:34.122880    2404 round_trippers.go:469] Request Headers:
	I0318 13:13:34.123229    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:34.123229    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:13:34.127097    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:13:34.128177    2404 round_trippers.go:577] Response Headers:
	I0318 13:13:34.128177    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:13:34.128271    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:13:34.128271    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:13:34 GMT
	I0318 13:13:34.128271    2404 round_trippers.go:580]     Audit-Id: f5dac830-65f6-4cbc-8b5d-d4e4678d11f4
	I0318 13:13:34.128271    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:13:34.128271    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:13:34.128373    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"1487daaa-7c0a-4f25-84c8-3e409d8d04b7","resourceVersion":"2100","creationTimestamp":"2024-03-18T13:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T13_13_23_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T13:13:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4067 chars]
	I0318 13:13:34.609503    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400-m02
	I0318 13:13:34.609609    2404 round_trippers.go:469] Request Headers:
	I0318 13:13:34.609609    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:34.609609    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:13:34.612540    2404 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 13:13:34.613314    2404 round_trippers.go:577] Response Headers:
	I0318 13:13:34.613314    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:13:34.613314    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:13:34.613314    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:13:34.613314    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:13:34 GMT
	I0318 13:13:34.613314    2404 round_trippers.go:580]     Audit-Id: 41cfc544-6c0b-4a52-9224-408ac4574eef
	I0318 13:13:34.613314    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:13:34.613551    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"1487daaa-7c0a-4f25-84c8-3e409d8d04b7","resourceVersion":"2100","creationTimestamp":"2024-03-18T13:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T13_13_23_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T13:13:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4067 chars]
	I0318 13:13:35.110550    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400-m02
	I0318 13:13:35.110550    2404 round_trippers.go:469] Request Headers:
	I0318 13:13:35.110550    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:13:35.110550    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:35.115218    2404 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:13:35.115218    2404 round_trippers.go:577] Response Headers:
	I0318 13:13:35.115218    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:13:35.115990    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:13:35 GMT
	I0318 13:13:35.115990    2404 round_trippers.go:580]     Audit-Id: 19515f85-26c7-45f1-8ec1-21ad333498e2
	I0318 13:13:35.115990    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:13:35.115990    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:13:35.115990    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:13:35.116236    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"1487daaa-7c0a-4f25-84c8-3e409d8d04b7","resourceVersion":"2100","creationTimestamp":"2024-03-18T13:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T13_13_23_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T13:13:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4067 chars]
	I0318 13:13:35.116298    2404 node_ready.go:53] node "multinode-894400-m02" has status "Ready":"False"
	I0318 13:13:35.616647    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400-m02
	I0318 13:13:35.616647    2404 round_trippers.go:469] Request Headers:
	I0318 13:13:35.616727    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:35.616727    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:13:35.621075    2404 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:13:35.621075    2404 round_trippers.go:577] Response Headers:
	I0318 13:13:35.621075    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:13:35.621075    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:13:35.621461    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:13:35.621461    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:13:35.621461    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:13:35 GMT
	I0318 13:13:35.621461    2404 round_trippers.go:580]     Audit-Id: 3eee8fd8-677f-4eaa-9b8a-3fefcc79376e
	I0318 13:13:35.621638    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"1487daaa-7c0a-4f25-84c8-3e409d8d04b7","resourceVersion":"2100","creationTimestamp":"2024-03-18T13:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T13_13_23_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T13:13:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4067 chars]
	I0318 13:13:36.115751    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400-m02
	I0318 13:13:36.115751    2404 round_trippers.go:469] Request Headers:
	I0318 13:13:36.115751    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:36.115751    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:13:36.119366    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:13:36.119366    2404 round_trippers.go:577] Response Headers:
	I0318 13:13:36.119366    2404 round_trippers.go:580]     Audit-Id: 6b7d0bc0-35a8-45ea-b614-03006af9b462
	I0318 13:13:36.120267    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:13:36.120267    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:13:36.120267    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:13:36.120267    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:13:36.120267    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:13:36 GMT
	I0318 13:13:36.120469    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"1487daaa-7c0a-4f25-84c8-3e409d8d04b7","resourceVersion":"2100","creationTimestamp":"2024-03-18T13:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T13_13_23_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T13:13:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4067 chars]
	I0318 13:13:36.614750    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400-m02
	I0318 13:13:36.615098    2404 round_trippers.go:469] Request Headers:
	I0318 13:13:36.615098    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:36.615098    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:13:36.618473    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:13:36.619301    2404 round_trippers.go:577] Response Headers:
	I0318 13:13:36.619301    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:13:36.619301    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:13:36.619301    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:13:36.619301    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:13:36 GMT
	I0318 13:13:36.619301    2404 round_trippers.go:580]     Audit-Id: 862c2b71-f953-4c63-8ab2-9992de76b53c
	I0318 13:13:36.619301    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:13:36.619420    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"1487daaa-7c0a-4f25-84c8-3e409d8d04b7","resourceVersion":"2100","creationTimestamp":"2024-03-18T13:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T13_13_23_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T13:13:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4067 chars]
	I0318 13:13:37.117864    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400-m02
	I0318 13:13:37.118162    2404 round_trippers.go:469] Request Headers:
	I0318 13:13:37.118233    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:37.118233    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:13:37.122044    2404 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 13:13:37.122044    2404 round_trippers.go:577] Response Headers:
	I0318 13:13:37.122044    2404 round_trippers.go:580]     Audit-Id: 83ac0fb8-2102-4e7a-becf-ea9389ef8992
	I0318 13:13:37.122044    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:13:37.122044    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:13:37.122044    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:13:37.122044    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:13:37.122044    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:13:37 GMT
	I0318 13:13:37.122378    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"1487daaa-7c0a-4f25-84c8-3e409d8d04b7","resourceVersion":"2100","creationTimestamp":"2024-03-18T13:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T13_13_23_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T13:13:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4067 chars]
	I0318 13:13:37.122378    2404 node_ready.go:53] node "multinode-894400-m02" has status "Ready":"False"
	I0318 13:13:37.617772    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400-m02
	I0318 13:13:37.617772    2404 round_trippers.go:469] Request Headers:
	I0318 13:13:37.617772    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:37.617772    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:13:37.622386    2404 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:13:37.622443    2404 round_trippers.go:577] Response Headers:
	I0318 13:13:37.622443    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:13:37.622443    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:13:37 GMT
	I0318 13:13:37.622443    2404 round_trippers.go:580]     Audit-Id: c0cf4bf2-6f45-46e7-80d5-3d2028ec8c65
	I0318 13:13:37.622443    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:13:37.622443    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:13:37.622443    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:13:37.622443    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"1487daaa-7c0a-4f25-84c8-3e409d8d04b7","resourceVersion":"2100","creationTimestamp":"2024-03-18T13:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T13_13_23_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T13:13:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4067 chars]
	I0318 13:13:38.119675    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400-m02
	I0318 13:13:38.119985    2404 round_trippers.go:469] Request Headers:
	I0318 13:13:38.119985    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:38.119985    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:13:38.123358    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:13:38.123923    2404 round_trippers.go:577] Response Headers:
	I0318 13:13:38.123923    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:13:38.123923    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:13:38 GMT
	I0318 13:13:38.123923    2404 round_trippers.go:580]     Audit-Id: e739a76b-c25c-464c-a6ca-c75352a3eb07
	I0318 13:13:38.123923    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:13:38.123923    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:13:38.123923    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:13:38.124133    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"1487daaa-7c0a-4f25-84c8-3e409d8d04b7","resourceVersion":"2100","creationTimestamp":"2024-03-18T13:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T13_13_23_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T13:13:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4067 chars]
	I0318 13:13:38.622815    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400-m02
	I0318 13:13:38.622815    2404 round_trippers.go:469] Request Headers:
	I0318 13:13:38.622815    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:38.622815    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:13:38.627163    2404 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:13:38.627163    2404 round_trippers.go:577] Response Headers:
	I0318 13:13:38.627163    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:13:38.627283    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:13:38.627283    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:13:38 GMT
	I0318 13:13:38.627283    2404 round_trippers.go:580]     Audit-Id: 190b9418-ab0d-4758-8239-2e91b0350583
	I0318 13:13:38.627283    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:13:38.627283    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:13:38.627512    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"1487daaa-7c0a-4f25-84c8-3e409d8d04b7","resourceVersion":"2100","creationTimestamp":"2024-03-18T13:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T13_13_23_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T13:13:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4067 chars]
	I0318 13:13:39.112977    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400-m02
	I0318 13:13:39.113027    2404 round_trippers.go:469] Request Headers:
	I0318 13:13:39.113027    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:39.113061    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:13:39.117941    2404 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:13:39.117996    2404 round_trippers.go:577] Response Headers:
	I0318 13:13:39.118075    2404 round_trippers.go:580]     Audit-Id: b903b81c-0ae3-425a-a3a8-f1d6925a4168
	I0318 13:13:39.118075    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:13:39.118075    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:13:39.118075    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:13:39.118075    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:13:39.118075    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:13:39 GMT
	I0318 13:13:39.118245    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"1487daaa-7c0a-4f25-84c8-3e409d8d04b7","resourceVersion":"2100","creationTimestamp":"2024-03-18T13:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T13_13_23_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T13:13:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4067 chars]
	I0318 13:13:39.621617    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400-m02
	I0318 13:13:39.621617    2404 round_trippers.go:469] Request Headers:
	I0318 13:13:39.621693    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:39.621693    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:13:39.625394    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:13:39.625394    2404 round_trippers.go:577] Response Headers:
	I0318 13:13:39.625517    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:13:39.625517    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:13:39.625517    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:13:39.625517    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:13:39.625517    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:13:39 GMT
	I0318 13:13:39.625517    2404 round_trippers.go:580]     Audit-Id: 3acd5a4c-9d63-450b-9518-436d8e7d1d56
	I0318 13:13:39.625631    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"1487daaa-7c0a-4f25-84c8-3e409d8d04b7","resourceVersion":"2100","creationTimestamp":"2024-03-18T13:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T13_13_23_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T13:13:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4067 chars]
	I0318 13:13:39.625631    2404 node_ready.go:53] node "multinode-894400-m02" has status "Ready":"False"
	I0318 13:13:40.123388    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400-m02
	I0318 13:13:40.123570    2404 round_trippers.go:469] Request Headers:
	I0318 13:13:40.123570    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:40.123570    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:13:40.126929    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:13:40.127361    2404 round_trippers.go:577] Response Headers:
	I0318 13:13:40.127361    2404 round_trippers.go:580]     Audit-Id: 7a8be5f9-3424-448b-80b5-314ba785c370
	I0318 13:13:40.127361    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:13:40.127361    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:13:40.127361    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:13:40.127361    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:13:40.127361    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:13:40 GMT
	I0318 13:13:40.127506    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"1487daaa-7c0a-4f25-84c8-3e409d8d04b7","resourceVersion":"2100","creationTimestamp":"2024-03-18T13:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T13_13_23_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T13:13:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4067 chars]
	I0318 13:13:40.610897    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400-m02
	I0318 13:13:40.610897    2404 round_trippers.go:469] Request Headers:
	I0318 13:13:40.610897    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:40.610897    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:13:40.614728    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:13:40.614728    2404 round_trippers.go:577] Response Headers:
	I0318 13:13:40.614728    2404 round_trippers.go:580]     Audit-Id: e3b35c00-4b24-409c-a3c3-892a92806813
	I0318 13:13:40.614728    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:13:40.614728    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:13:40.614728    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:13:40.614728    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:13:40.614728    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:13:40 GMT
	I0318 13:13:40.615719    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"1487daaa-7c0a-4f25-84c8-3e409d8d04b7","resourceVersion":"2100","creationTimestamp":"2024-03-18T13:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T13_13_23_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T13:13:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4067 chars]
	I0318 13:13:41.110727    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400-m02
	I0318 13:13:41.110801    2404 round_trippers.go:469] Request Headers:
	I0318 13:13:41.110801    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:41.110801    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:13:41.116452    2404 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 13:13:41.117323    2404 round_trippers.go:577] Response Headers:
	I0318 13:13:41.117323    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:13:41.117323    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:13:41.117517    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:13:41.117538    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:13:41 GMT
	I0318 13:13:41.117538    2404 round_trippers.go:580]     Audit-Id: e018457c-3925-4e36-bdbd-f7443bc127c1
	I0318 13:13:41.117538    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:13:41.117718    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"1487daaa-7c0a-4f25-84c8-3e409d8d04b7","resourceVersion":"2100","creationTimestamp":"2024-03-18T13:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T13_13_23_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T13:13:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4067 chars]
	I0318 13:13:41.611456    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400-m02
	I0318 13:13:41.611553    2404 round_trippers.go:469] Request Headers:
	I0318 13:13:41.611553    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:41.611553    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:13:41.613902    2404 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 13:13:41.613902    2404 round_trippers.go:577] Response Headers:
	I0318 13:13:41.613902    2404 round_trippers.go:580]     Audit-Id: eb9c7f10-7928-4ab8-a087-bfa9017c40e2
	I0318 13:13:41.613902    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:13:41.613902    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:13:41.613902    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:13:41.613902    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:13:41.613902    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:13:41 GMT
	I0318 13:13:41.614689    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"1487daaa-7c0a-4f25-84c8-3e409d8d04b7","resourceVersion":"2100","creationTimestamp":"2024-03-18T13:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T13_13_23_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T13:13:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4067 chars]
	I0318 13:13:42.114274    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400-m02
	I0318 13:13:42.114659    2404 round_trippers.go:469] Request Headers:
	I0318 13:13:42.114659    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:42.114659    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:13:42.119245    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:13:42.119245    2404 round_trippers.go:577] Response Headers:
	I0318 13:13:42.119245    2404 round_trippers.go:580]     Audit-Id: 42ad8670-62e4-4707-ae8c-1541b2be3895
	I0318 13:13:42.119245    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:13:42.119245    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:13:42.119245    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:13:42.119245    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:13:42.119245    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:13:42 GMT
	I0318 13:13:42.119245    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"1487daaa-7c0a-4f25-84c8-3e409d8d04b7","resourceVersion":"2100","creationTimestamp":"2024-03-18T13:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T13_13_23_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T13:13:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4067 chars]
	I0318 13:13:42.120087    2404 node_ready.go:53] node "multinode-894400-m02" has status "Ready":"False"
	I0318 13:13:42.616670    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400-m02
	I0318 13:13:42.616670    2404 round_trippers.go:469] Request Headers:
	I0318 13:13:42.616670    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:42.616670    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:13:42.622040    2404 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 13:13:42.622652    2404 round_trippers.go:577] Response Headers:
	I0318 13:13:42.622652    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:13:42.622652    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:13:42.622652    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:13:42 GMT
	I0318 13:13:42.622652    2404 round_trippers.go:580]     Audit-Id: 2e99fa7d-5c6b-4c0e-b0d6-11340a5119b6
	I0318 13:13:42.622652    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:13:42.622652    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:13:42.622845    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"1487daaa-7c0a-4f25-84c8-3e409d8d04b7","resourceVersion":"2100","creationTimestamp":"2024-03-18T13:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T13_13_23_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T13:13:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4067 chars]
	I0318 13:13:43.116411    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400-m02
	I0318 13:13:43.116411    2404 round_trippers.go:469] Request Headers:
	I0318 13:13:43.116499    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:43.116499    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:13:43.118993    2404 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 13:13:43.118993    2404 round_trippers.go:577] Response Headers:
	I0318 13:13:43.118993    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:13:43.118993    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:13:43 GMT
	I0318 13:13:43.118993    2404 round_trippers.go:580]     Audit-Id: 80640aba-3ce5-406e-b596-70c04dd55bed
	I0318 13:13:43.119462    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:13:43.119462    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:13:43.119462    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:13:43.119636    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"1487daaa-7c0a-4f25-84c8-3e409d8d04b7","resourceVersion":"2100","creationTimestamp":"2024-03-18T13:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T13_13_23_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T13:13:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4067 chars]
	I0318 13:13:43.617543    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400-m02
	I0318 13:13:43.617543    2404 round_trippers.go:469] Request Headers:
	I0318 13:13:43.617543    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:43.617543    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:13:43.621804    2404 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:13:43.621804    2404 round_trippers.go:577] Response Headers:
	I0318 13:13:43.621804    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:13:43.622825    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:13:43.622856    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:13:43.622856    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:13:43 GMT
	I0318 13:13:43.622856    2404 round_trippers.go:580]     Audit-Id: f752aa2f-ef65-437c-947a-fcaad58e9b0e
	I0318 13:13:43.622856    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:13:43.623131    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"1487daaa-7c0a-4f25-84c8-3e409d8d04b7","resourceVersion":"2100","creationTimestamp":"2024-03-18T13:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T13_13_23_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T13:13:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4067 chars]
	I0318 13:13:44.119455    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400-m02
	I0318 13:13:44.119693    2404 round_trippers.go:469] Request Headers:
	I0318 13:13:44.119693    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:44.119693    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:13:44.129736    2404 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0318 13:13:44.129736    2404 round_trippers.go:577] Response Headers:
	I0318 13:13:44.129736    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:13:44.129736    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:13:44 GMT
	I0318 13:13:44.129736    2404 round_trippers.go:580]     Audit-Id: 377b99f8-c303-4aa0-ac63-b6da62fd5282
	I0318 13:13:44.129736    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:13:44.129736    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:13:44.129736    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:13:44.129946    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"1487daaa-7c0a-4f25-84c8-3e409d8d04b7","resourceVersion":"2100","creationTimestamp":"2024-03-18T13:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T13_13_23_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T13:13:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4067 chars]
	I0318 13:13:44.130399    2404 node_ready.go:53] node "multinode-894400-m02" has status "Ready":"False"
	I0318 13:13:44.621784    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400-m02
	I0318 13:13:44.621784    2404 round_trippers.go:469] Request Headers:
	I0318 13:13:44.621881    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:44.621881    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:13:44.625236    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:13:44.626129    2404 round_trippers.go:577] Response Headers:
	I0318 13:13:44.626129    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:13:44.626129    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:13:44 GMT
	I0318 13:13:44.626129    2404 round_trippers.go:580]     Audit-Id: bd41e20b-240a-4849-b7bc-6b40689ab12d
	I0318 13:13:44.626129    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:13:44.626129    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:13:44.626129    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:13:44.626282    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"1487daaa-7c0a-4f25-84c8-3e409d8d04b7","resourceVersion":"2100","creationTimestamp":"2024-03-18T13:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T13_13_23_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T13:13:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4067 chars]
	I0318 13:13:45.121055    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400-m02
	I0318 13:13:45.121055    2404 round_trippers.go:469] Request Headers:
	I0318 13:13:45.121055    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:45.121055    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:13:45.125681    2404 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:13:45.125802    2404 round_trippers.go:577] Response Headers:
	I0318 13:13:45.125802    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:13:45.125802    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:13:45.125802    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:13:45.125802    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:13:45 GMT
	I0318 13:13:45.125802    2404 round_trippers.go:580]     Audit-Id: 2ef43652-1c83-4fd5-a143-9c63154c5196
	I0318 13:13:45.125802    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:13:45.126007    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"1487daaa-7c0a-4f25-84c8-3e409d8d04b7","resourceVersion":"2100","creationTimestamp":"2024-03-18T13:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T13_13_23_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T13:13:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4067 chars]
	I0318 13:13:45.609779    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400-m02
	I0318 13:13:45.609779    2404 round_trippers.go:469] Request Headers:
	I0318 13:13:45.609779    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:45.609779    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:13:45.614176    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:13:45.614176    2404 round_trippers.go:577] Response Headers:
	I0318 13:13:45.614176    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:13:45.614269    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:13:45.614269    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:13:45.614269    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:13:45.614269    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:13:45 GMT
	I0318 13:13:45.614269    2404 round_trippers.go:580]     Audit-Id: 765ad066-a458-49dd-a27b-e2b70a05a1d5
	I0318 13:13:45.614456    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"1487daaa-7c0a-4f25-84c8-3e409d8d04b7","resourceVersion":"2100","creationTimestamp":"2024-03-18T13:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T13_13_23_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T13:13:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4067 chars]
	I0318 13:13:46.110426    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400-m02
	I0318 13:13:46.110641    2404 round_trippers.go:469] Request Headers:
	I0318 13:13:46.110641    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:46.110641    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:13:46.113899    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:13:46.113966    2404 round_trippers.go:577] Response Headers:
	I0318 13:13:46.113966    2404 round_trippers.go:580]     Audit-Id: 346a2bea-3219-425d-ae99-2474463fd0c5
	I0318 13:13:46.113966    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:13:46.113966    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:13:46.113966    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:13:46.113966    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:13:46.113966    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:13:46 GMT
	I0318 13:13:46.114205    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"1487daaa-7c0a-4f25-84c8-3e409d8d04b7","resourceVersion":"2100","creationTimestamp":"2024-03-18T13:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T13_13_23_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T13:13:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4067 chars]
	I0318 13:13:46.611621    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400-m02
	I0318 13:13:46.611621    2404 round_trippers.go:469] Request Headers:
	I0318 13:13:46.611621    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:46.611621    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:13:46.616497    2404 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:13:46.616497    2404 round_trippers.go:577] Response Headers:
	I0318 13:13:46.616497    2404 round_trippers.go:580]     Audit-Id: fd791b80-b134-49f6-acee-d89eba8b8226
	I0318 13:13:46.616497    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:13:46.616497    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:13:46.616497    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:13:46.616497    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:13:46.616497    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:13:46 GMT
	I0318 13:13:46.616497    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"1487daaa-7c0a-4f25-84c8-3e409d8d04b7","resourceVersion":"2100","creationTimestamp":"2024-03-18T13:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T13_13_23_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T13:13:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4067 chars]
	I0318 13:13:46.617240    2404 node_ready.go:53] node "multinode-894400-m02" has status "Ready":"False"
	I0318 13:13:47.111795    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400-m02
	I0318 13:13:47.111795    2404 round_trippers.go:469] Request Headers:
	I0318 13:13:47.111795    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:47.111795    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:13:47.116165    2404 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:13:47.116412    2404 round_trippers.go:577] Response Headers:
	I0318 13:13:47.116412    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:13:47.116412    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:13:47.116412    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:13:47 GMT
	I0318 13:13:47.116412    2404 round_trippers.go:580]     Audit-Id: 59ff85b6-3ee9-4e56-8eb0-070acfcfa8e7
	I0318 13:13:47.116412    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:13:47.116412    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:13:47.117032    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"1487daaa-7c0a-4f25-84c8-3e409d8d04b7","resourceVersion":"2100","creationTimestamp":"2024-03-18T13:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T13_13_23_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T13:13:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4067 chars]
	I0318 13:13:47.610124    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400-m02
	I0318 13:13:47.610124    2404 round_trippers.go:469] Request Headers:
	I0318 13:13:47.610124    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:47.610124    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:13:47.614778    2404 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:13:47.614778    2404 round_trippers.go:577] Response Headers:
	I0318 13:13:47.615187    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:13:47.615187    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:13:47 GMT
	I0318 13:13:47.615187    2404 round_trippers.go:580]     Audit-Id: 90c1b5b1-8c25-4603-9b44-50f506cf6924
	I0318 13:13:47.615240    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:13:47.615240    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:13:47.615265    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:13:47.615371    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"1487daaa-7c0a-4f25-84c8-3e409d8d04b7","resourceVersion":"2100","creationTimestamp":"2024-03-18T13:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T13_13_23_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T13:13:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4067 chars]
	I0318 13:13:48.110830    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400-m02
	I0318 13:13:48.110914    2404 round_trippers.go:469] Request Headers:
	I0318 13:13:48.110914    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:48.110914    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:13:48.114250    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:13:48.114880    2404 round_trippers.go:577] Response Headers:
	I0318 13:13:48.114880    2404 round_trippers.go:580]     Audit-Id: eefce027-4359-4ef1-b1ec-93aed862d1fe
	I0318 13:13:48.114880    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:13:48.114880    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:13:48.114880    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:13:48.114880    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:13:48.114956    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:13:48 GMT
	I0318 13:13:48.115113    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"1487daaa-7c0a-4f25-84c8-3e409d8d04b7","resourceVersion":"2100","creationTimestamp":"2024-03-18T13:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T13_13_23_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T13:13:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4067 chars]
	I0318 13:13:48.622801    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400-m02
	I0318 13:13:48.622801    2404 round_trippers.go:469] Request Headers:
	I0318 13:13:48.622895    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:48.622895    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:13:48.626732    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:13:48.626785    2404 round_trippers.go:577] Response Headers:
	I0318 13:13:48.626785    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:13:48.626785    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:13:48.626785    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:13:48 GMT
	I0318 13:13:48.626785    2404 round_trippers.go:580]     Audit-Id: 4f6c8c30-e5e6-40f0-8645-c84006e0e2f4
	I0318 13:13:48.626785    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:13:48.626785    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:13:48.626785    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"1487daaa-7c0a-4f25-84c8-3e409d8d04b7","resourceVersion":"2100","creationTimestamp":"2024-03-18T13:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T13_13_23_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T13:13:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4067 chars]
	I0318 13:13:48.627339    2404 node_ready.go:53] node "multinode-894400-m02" has status "Ready":"False"
	I0318 13:13:49.124182    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400-m02
	I0318 13:13:49.124182    2404 round_trippers.go:469] Request Headers:
	I0318 13:13:49.124182    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:49.124182    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:13:49.128154    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:13:49.128154    2404 round_trippers.go:577] Response Headers:
	I0318 13:13:49.128154    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:13:49.128154    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:13:49.128154    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:13:49 GMT
	I0318 13:13:49.128154    2404 round_trippers.go:580]     Audit-Id: 22b867bb-d0a8-43f6-8647-8d840af97e1d
	I0318 13:13:49.128154    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:13:49.128154    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:13:49.128441    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"1487daaa-7c0a-4f25-84c8-3e409d8d04b7","resourceVersion":"2100","creationTimestamp":"2024-03-18T13:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T13_13_23_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T13:13:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4067 chars]
	I0318 13:13:49.613709    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400-m02
	I0318 13:13:49.613709    2404 round_trippers.go:469] Request Headers:
	I0318 13:13:49.614058    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:49.614058    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:13:49.617951    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:13:49.618469    2404 round_trippers.go:577] Response Headers:
	I0318 13:13:49.618524    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:13:49.618524    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:13:49.618524    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:13:49.618524    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:13:49 GMT
	I0318 13:13:49.618524    2404 round_trippers.go:580]     Audit-Id: c88c6068-14ff-49e1-b4a4-94fd45316b56
	I0318 13:13:49.618524    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:13:49.618524    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"1487daaa-7c0a-4f25-84c8-3e409d8d04b7","resourceVersion":"2100","creationTimestamp":"2024-03-18T13:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T13_13_23_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T13:13:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4067 chars]
	I0318 13:13:50.116316    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400-m02
	I0318 13:13:50.116316    2404 round_trippers.go:469] Request Headers:
	I0318 13:13:50.116316    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:13:50.116316    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:50.119625    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:13:50.119625    2404 round_trippers.go:577] Response Headers:
	I0318 13:13:50.119625    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:13:50.120177    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:13:50 GMT
	I0318 13:13:50.120177    2404 round_trippers.go:580]     Audit-Id: 2084ebd7-e011-42fc-a6b7-262e1e4b3e1b
	I0318 13:13:50.120177    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:13:50.120177    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:13:50.120177    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:13:50.120333    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"1487daaa-7c0a-4f25-84c8-3e409d8d04b7","resourceVersion":"2100","creationTimestamp":"2024-03-18T13:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T13_13_23_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T13:13:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4067 chars]
	I0318 13:13:50.615197    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400-m02
	I0318 13:13:50.615197    2404 round_trippers.go:469] Request Headers:
	I0318 13:13:50.615197    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:50.615197    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:13:50.619177    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:13:50.619177    2404 round_trippers.go:577] Response Headers:
	I0318 13:13:50.619848    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:13:50.619848    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:13:50.619848    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:13:50.619848    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:13:50 GMT
	I0318 13:13:50.619848    2404 round_trippers.go:580]     Audit-Id: 32d4b324-484d-412f-a158-44ee00ca25a0
	I0318 13:13:50.619848    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:13:50.620051    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"1487daaa-7c0a-4f25-84c8-3e409d8d04b7","resourceVersion":"2100","creationTimestamp":"2024-03-18T13:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T13_13_23_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T13:13:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4067 chars]
	I0318 13:13:51.117535    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400-m02
	I0318 13:13:51.117535    2404 round_trippers.go:469] Request Headers:
	I0318 13:13:51.117535    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:51.117535    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:13:51.122115    2404 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:13:51.122568    2404 round_trippers.go:577] Response Headers:
	I0318 13:13:51.122568    2404 round_trippers.go:580]     Audit-Id: 427e7bd5-152f-4a4c-bc38-86b68645ee37
	I0318 13:13:51.122568    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:13:51.122568    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:13:51.122568    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:13:51.122568    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:13:51.122568    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:13:51 GMT
	I0318 13:13:51.122568    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"1487daaa-7c0a-4f25-84c8-3e409d8d04b7","resourceVersion":"2100","creationTimestamp":"2024-03-18T13:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T13_13_23_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T13:13:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4067 chars]
	I0318 13:13:51.123170    2404 node_ready.go:53] node "multinode-894400-m02" has status "Ready":"False"
	I0318 13:13:51.620065    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400-m02
	I0318 13:13:51.620065    2404 round_trippers.go:469] Request Headers:
	I0318 13:13:51.620065    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:51.620065    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:13:51.623346    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:13:51.624306    2404 round_trippers.go:577] Response Headers:
	I0318 13:13:51.624306    2404 round_trippers.go:580]     Audit-Id: 26e99b0d-3296-4036-b383-014fab93b189
	I0318 13:13:51.624306    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:13:51.624351    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:13:51.624351    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:13:51.624351    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:13:51.624351    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:13:51 GMT
	I0318 13:13:51.624568    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"1487daaa-7c0a-4f25-84c8-3e409d8d04b7","resourceVersion":"2100","creationTimestamp":"2024-03-18T13:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T13_13_23_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T13:13:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4067 chars]
	I0318 13:13:52.120817    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400-m02
	I0318 13:13:52.121037    2404 round_trippers.go:469] Request Headers:
	I0318 13:13:52.121037    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:52.121037    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:13:52.132503    2404 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0318 13:13:52.132503    2404 round_trippers.go:577] Response Headers:
	I0318 13:13:52.132503    2404 round_trippers.go:580]     Audit-Id: 550f38e1-f87d-429d-8fca-c1b656ee0400
	I0318 13:13:52.132503    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:13:52.133340    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:13:52.133340    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:13:52.133340    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:13:52.133391    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:13:52 GMT
	I0318 13:13:52.133499    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"1487daaa-7c0a-4f25-84c8-3e409d8d04b7","resourceVersion":"2100","creationTimestamp":"2024-03-18T13:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T13_13_23_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T13:13:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4067 chars]
	I0318 13:13:52.619055    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400-m02
	I0318 13:13:52.619055    2404 round_trippers.go:469] Request Headers:
	I0318 13:13:52.619055    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:52.619325    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:13:52.627408    2404 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0318 13:13:52.627408    2404 round_trippers.go:577] Response Headers:
	I0318 13:13:52.627408    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:13:52 GMT
	I0318 13:13:52.627408    2404 round_trippers.go:580]     Audit-Id: c39adb31-86ea-450d-839e-ce30faba7eec
	I0318 13:13:52.627408    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:13:52.627408    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:13:52.627408    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:13:52.627408    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:13:52.627408    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"1487daaa-7c0a-4f25-84c8-3e409d8d04b7","resourceVersion":"2100","creationTimestamp":"2024-03-18T13:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T13_13_23_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T13:13:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4067 chars]
	I0318 13:13:53.121080    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400-m02
	I0318 13:13:53.121147    2404 round_trippers.go:469] Request Headers:
	I0318 13:13:53.121147    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:53.121147    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:13:53.125116    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:13:53.125185    2404 round_trippers.go:577] Response Headers:
	I0318 13:13:53.125185    2404 round_trippers.go:580]     Audit-Id: 7cf63368-2e54-4321-b7ce-ae7a20bfa85a
	I0318 13:13:53.125185    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:13:53.125185    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:13:53.125185    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:13:53.125298    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:13:53.125298    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:13:53 GMT
	I0318 13:13:53.125298    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"1487daaa-7c0a-4f25-84c8-3e409d8d04b7","resourceVersion":"2100","creationTimestamp":"2024-03-18T13:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T13_13_23_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T13:13:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4067 chars]
	I0318 13:13:53.125298    2404 node_ready.go:53] node "multinode-894400-m02" has status "Ready":"False"
	I0318 13:13:53.609845    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400-m02
	I0318 13:13:53.609845    2404 round_trippers.go:469] Request Headers:
	I0318 13:13:53.609845    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:53.609845    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:13:53.614067    2404 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:13:53.614904    2404 round_trippers.go:577] Response Headers:
	I0318 13:13:53.614904    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:13:53.614904    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:13:53.614904    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:13:53.614904    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:13:53 GMT
	I0318 13:13:53.614904    2404 round_trippers.go:580]     Audit-Id: 0adf0c4f-4527-4543-8b20-a0fa432180c7
	I0318 13:13:53.614904    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:13:53.615119    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"1487daaa-7c0a-4f25-84c8-3e409d8d04b7","resourceVersion":"2100","creationTimestamp":"2024-03-18T13:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T13_13_23_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T13:13:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4067 chars]
	I0318 13:13:54.111943    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400-m02
	I0318 13:13:54.112001    2404 round_trippers.go:469] Request Headers:
	I0318 13:13:54.112057    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:54.112057    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:13:54.116318    2404 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:13:54.117434    2404 round_trippers.go:577] Response Headers:
	I0318 13:13:54.117434    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:13:54 GMT
	I0318 13:13:54.117434    2404 round_trippers.go:580]     Audit-Id: 1347467c-ea74-4ad9-8016-1992b2634c17
	I0318 13:13:54.117531    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:13:54.117531    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:13:54.117531    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:13:54.117531    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:13:54.117677    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"1487daaa-7c0a-4f25-84c8-3e409d8d04b7","resourceVersion":"2100","creationTimestamp":"2024-03-18T13:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T13_13_23_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T13:13:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4067 chars]
	I0318 13:13:54.613133    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400-m02
	I0318 13:13:54.613133    2404 round_trippers.go:469] Request Headers:
	I0318 13:13:54.613133    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:13:54.613133    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:54.617474    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:13:54.617550    2404 round_trippers.go:577] Response Headers:
	I0318 13:13:54.617550    2404 round_trippers.go:580]     Audit-Id: 6aa492a0-7e18-4370-ba1e-6c3fef3f434b
	I0318 13:13:54.617550    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:13:54.617550    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:13:54.617550    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:13:54.617550    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:13:54.617550    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:13:54 GMT
	I0318 13:13:54.617550    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"1487daaa-7c0a-4f25-84c8-3e409d8d04b7","resourceVersion":"2100","creationTimestamp":"2024-03-18T13:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T13_13_23_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T13:13:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4067 chars]
	I0318 13:13:55.115841    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400-m02
	I0318 13:13:55.116001    2404 round_trippers.go:469] Request Headers:
	I0318 13:13:55.116001    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:13:55.116001    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:55.119906    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:13:55.119906    2404 round_trippers.go:577] Response Headers:
	I0318 13:13:55.119906    2404 round_trippers.go:580]     Audit-Id: b71b23f0-bf3d-468c-9ae0-3c18a03427d6
	I0318 13:13:55.119906    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:13:55.119906    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:13:55.119906    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:13:55.119906    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:13:55.119906    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:13:55 GMT
	I0318 13:13:55.119906    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"1487daaa-7c0a-4f25-84c8-3e409d8d04b7","resourceVersion":"2100","creationTimestamp":"2024-03-18T13:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T13_13_23_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T13:13:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4067 chars]
	I0318 13:13:55.620230    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400-m02
	I0318 13:13:55.620230    2404 round_trippers.go:469] Request Headers:
	I0318 13:13:55.620230    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:55.620230    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:13:55.623994    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:13:55.623994    2404 round_trippers.go:577] Response Headers:
	I0318 13:13:55.623994    2404 round_trippers.go:580]     Audit-Id: db34228e-c138-47dd-b9a8-47ff512a0b1b
	I0318 13:13:55.624759    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:13:55.624759    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:13:55.624759    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:13:55.624759    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:13:55.624759    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:13:55 GMT
	I0318 13:13:55.624838    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"1487daaa-7c0a-4f25-84c8-3e409d8d04b7","resourceVersion":"2100","creationTimestamp":"2024-03-18T13:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T13_13_23_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T13:13:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4067 chars]
	I0318 13:13:55.624838    2404 node_ready.go:53] node "multinode-894400-m02" has status "Ready":"False"
	I0318 13:13:56.109514    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400-m02
	I0318 13:13:56.109514    2404 round_trippers.go:469] Request Headers:
	I0318 13:13:56.109514    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:56.109514    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:13:56.113010    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:13:56.113824    2404 round_trippers.go:577] Response Headers:
	I0318 13:13:56.113824    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:13:56.113824    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:13:56.113824    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:13:56 GMT
	I0318 13:13:56.113824    2404 round_trippers.go:580]     Audit-Id: 62833683-edc2-45dd-9202-f51eb2d73301
	I0318 13:13:56.113824    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:13:56.113824    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:13:56.114068    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"1487daaa-7c0a-4f25-84c8-3e409d8d04b7","resourceVersion":"2100","creationTimestamp":"2024-03-18T13:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T13_13_23_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T13:13:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4067 chars]
	I0318 13:13:56.612778    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400-m02
	I0318 13:13:56.612778    2404 round_trippers.go:469] Request Headers:
	I0318 13:13:56.612778    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:56.612778    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:13:56.616560    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:13:56.616560    2404 round_trippers.go:577] Response Headers:
	I0318 13:13:56.616560    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:13:56 GMT
	I0318 13:13:56.616560    2404 round_trippers.go:580]     Audit-Id: d2b28e6d-52e6-40f3-b593-1b02e368e9de
	I0318 13:13:56.617580    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:13:56.617580    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:13:56.617613    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:13:56.617613    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:13:56.617654    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"1487daaa-7c0a-4f25-84c8-3e409d8d04b7","resourceVersion":"2100","creationTimestamp":"2024-03-18T13:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T13_13_23_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T13:13:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4067 chars]
	I0318 13:13:57.113194    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400-m02
	I0318 13:13:57.113194    2404 round_trippers.go:469] Request Headers:
	I0318 13:13:57.113194    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:57.113194    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:13:57.117950    2404 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:13:57.117950    2404 round_trippers.go:577] Response Headers:
	I0318 13:13:57.117950    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:13:57.118105    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:13:57.118105    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:13:57.118105    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:13:57.118105    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:13:57 GMT
	I0318 13:13:57.118105    2404 round_trippers.go:580]     Audit-Id: 091f22d8-7e61-4a65-adc7-92eaf2423d29
	I0318 13:13:57.118292    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"1487daaa-7c0a-4f25-84c8-3e409d8d04b7","resourceVersion":"2100","creationTimestamp":"2024-03-18T13:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T13_13_23_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T13:13:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4067 chars]
	I0318 13:13:57.614194    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400-m02
	I0318 13:13:57.614432    2404 round_trippers.go:469] Request Headers:
	I0318 13:13:57.614432    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:57.614432    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:13:57.619955    2404 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 13:13:57.620034    2404 round_trippers.go:577] Response Headers:
	I0318 13:13:57.620034    2404 round_trippers.go:580]     Audit-Id: cd702ef5-5512-468b-b705-3d4bfdefe55f
	I0318 13:13:57.620034    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:13:57.620034    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:13:57.620034    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:13:57.620034    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:13:57.620034    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:13:57 GMT
	I0318 13:13:57.620204    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"1487daaa-7c0a-4f25-84c8-3e409d8d04b7","resourceVersion":"2100","creationTimestamp":"2024-03-18T13:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T13_13_23_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T13:13:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4067 chars]
	I0318 13:13:58.116792    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400-m02
	I0318 13:13:58.116792    2404 round_trippers.go:469] Request Headers:
	I0318 13:13:58.116792    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:58.116792    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:13:58.121622    2404 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:13:58.121622    2404 round_trippers.go:577] Response Headers:
	I0318 13:13:58.121622    2404 round_trippers.go:580]     Audit-Id: 9fb0d5e9-b8b0-4b22-9135-a563be7c693f
	I0318 13:13:58.121622    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:13:58.121622    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:13:58.121622    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:13:58.121622    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:13:58.121622    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:13:58 GMT
	I0318 13:13:58.121622    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"1487daaa-7c0a-4f25-84c8-3e409d8d04b7","resourceVersion":"2100","creationTimestamp":"2024-03-18T13:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T13_13_23_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T13:13:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4067 chars]
	I0318 13:13:58.122202    2404 node_ready.go:53] node "multinode-894400-m02" has status "Ready":"False"
	I0318 13:13:58.618796    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400-m02
	I0318 13:13:58.618796    2404 round_trippers.go:469] Request Headers:
	I0318 13:13:58.618796    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:58.618796    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:13:58.622128    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:13:58.622531    2404 round_trippers.go:577] Response Headers:
	I0318 13:13:58.622531    2404 round_trippers.go:580]     Audit-Id: 6e564437-be1a-4906-9e05-652aec7345ba
	I0318 13:13:58.622531    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:13:58.622531    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:13:58.622531    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:13:58.622531    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:13:58.622531    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:13:58 GMT
	I0318 13:13:58.622800    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"1487daaa-7c0a-4f25-84c8-3e409d8d04b7","resourceVersion":"2100","creationTimestamp":"2024-03-18T13:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T13_13_23_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T13:13:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4067 chars]
	I0318 13:13:59.120475    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400-m02
	I0318 13:13:59.120475    2404 round_trippers.go:469] Request Headers:
	I0318 13:13:59.120475    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:59.120475    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:13:59.124980    2404 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:13:59.125413    2404 round_trippers.go:577] Response Headers:
	I0318 13:13:59.125413    2404 round_trippers.go:580]     Audit-Id: 8a28825c-93d1-4df0-8ae4-da9c560a6d38
	I0318 13:13:59.125413    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:13:59.125413    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:13:59.125413    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:13:59.125413    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:13:59.125413    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:13:59 GMT
	I0318 13:13:59.125567    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"1487daaa-7c0a-4f25-84c8-3e409d8d04b7","resourceVersion":"2100","creationTimestamp":"2024-03-18T13:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T13_13_23_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T13:13:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4067 chars]
	I0318 13:13:59.623129    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400-m02
	I0318 13:13:59.623208    2404 round_trippers.go:469] Request Headers:
	I0318 13:13:59.623208    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:13:59.623208    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:13:59.627076    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:13:59.627276    2404 round_trippers.go:577] Response Headers:
	I0318 13:13:59.627276    2404 round_trippers.go:580]     Audit-Id: 56c89a6c-fd62-41db-ab83-a3e63440160a
	I0318 13:13:59.627276    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:13:59.627276    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:13:59.627276    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:13:59.627276    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:13:59.627276    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:13:59 GMT
	I0318 13:13:59.627453    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"1487daaa-7c0a-4f25-84c8-3e409d8d04b7","resourceVersion":"2100","creationTimestamp":"2024-03-18T13:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T13_13_23_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T13:13:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4067 chars]
	I0318 13:14:00.111367    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400-m02
	I0318 13:14:00.111591    2404 round_trippers.go:469] Request Headers:
	I0318 13:14:00.111591    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:14:00.111591    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:14:00.115522    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:14:00.115522    2404 round_trippers.go:577] Response Headers:
	I0318 13:14:00.115522    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:14:00 GMT
	I0318 13:14:00.115522    2404 round_trippers.go:580]     Audit-Id: 0a8cf84e-b415-40ca-be28-3c1b5ba4aae0
	I0318 13:14:00.115522    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:14:00.115522    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:14:00.115522    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:14:00.115522    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:14:00.116564    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"1487daaa-7c0a-4f25-84c8-3e409d8d04b7","resourceVersion":"2100","creationTimestamp":"2024-03-18T13:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T13_13_23_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T13:13:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4067 chars]
	I0318 13:14:00.609417    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400-m02
	I0318 13:14:00.609417    2404 round_trippers.go:469] Request Headers:
	I0318 13:14:00.609417    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:14:00.609417    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:14:00.612164    2404 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 13:14:00.613016    2404 round_trippers.go:577] Response Headers:
	I0318 13:14:00.613104    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:14:00.613104    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:14:00.613104    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:14:00 GMT
	I0318 13:14:00.613104    2404 round_trippers.go:580]     Audit-Id: f1898048-cb1a-4bd4-ad0a-5d636f58520f
	I0318 13:14:00.613250    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:14:00.613250    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:14:00.613250    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"1487daaa-7c0a-4f25-84c8-3e409d8d04b7","resourceVersion":"2100","creationTimestamp":"2024-03-18T13:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T13_13_23_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T13:13:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4067 chars]
	I0318 13:14:00.614297    2404 node_ready.go:53] node "multinode-894400-m02" has status "Ready":"False"
	I0318 13:14:01.108790    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400-m02
	I0318 13:14:01.108919    2404 round_trippers.go:469] Request Headers:
	I0318 13:14:01.108919    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:14:01.108919    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:14:01.112270    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:14:01.112435    2404 round_trippers.go:577] Response Headers:
	I0318 13:14:01.112435    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:14:01.112435    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:14:01.112435    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:14:01 GMT
	I0318 13:14:01.112435    2404 round_trippers.go:580]     Audit-Id: 35b4b404-477c-4a28-a1a3-ceb889a01d98
	I0318 13:14:01.112435    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:14:01.112435    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:14:01.112705    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"1487daaa-7c0a-4f25-84c8-3e409d8d04b7","resourceVersion":"2100","creationTimestamp":"2024-03-18T13:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T13_13_23_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T13:13:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4067 chars]
	I0318 13:14:01.623018    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400-m02
	I0318 13:14:01.623290    2404 round_trippers.go:469] Request Headers:
	I0318 13:14:01.623290    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:14:01.623290    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:14:01.627142    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:14:01.627142    2404 round_trippers.go:577] Response Headers:
	I0318 13:14:01.627142    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:14:01.627142    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:14:01.627142    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:14:01.627142    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:14:01.627142    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:14:01 GMT
	I0318 13:14:01.627142    2404 round_trippers.go:580]     Audit-Id: 5d716382-6cd3-46b9-83b6-2a62c163bc19
	I0318 13:14:01.627142    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"1487daaa-7c0a-4f25-84c8-3e409d8d04b7","resourceVersion":"2100","creationTimestamp":"2024-03-18T13:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T13_13_23_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T13:13:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4067 chars]
	I0318 13:14:02.109390    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400-m02
	I0318 13:14:02.109541    2404 round_trippers.go:469] Request Headers:
	I0318 13:14:02.109541    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:14:02.109541    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:14:02.115144    2404 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 13:14:02.115144    2404 round_trippers.go:577] Response Headers:
	I0318 13:14:02.115144    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:14:02.115144    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:14:02.115144    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:14:02.115144    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:14:02.115144    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:14:02 GMT
	I0318 13:14:02.115144    2404 round_trippers.go:580]     Audit-Id: 1eeae02d-dfcb-4091-af4d-03374edb0640
	I0318 13:14:02.115732    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"1487daaa-7c0a-4f25-84c8-3e409d8d04b7","resourceVersion":"2100","creationTimestamp":"2024-03-18T13:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T13_13_23_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T13:13:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4067 chars]
	I0318 13:14:02.610229    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400-m02
	I0318 13:14:02.610229    2404 round_trippers.go:469] Request Headers:
	I0318 13:14:02.610229    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:14:02.610229    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:14:02.613948    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:14:02.613948    2404 round_trippers.go:577] Response Headers:
	I0318 13:14:02.613948    2404 round_trippers.go:580]     Audit-Id: 2c2dfe54-4308-4dbc-88b6-e9345e8b6ebf
	I0318 13:14:02.613948    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:14:02.613948    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:14:02.614466    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:14:02.614466    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:14:02.614466    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:14:02 GMT
	I0318 13:14:02.614734    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"1487daaa-7c0a-4f25-84c8-3e409d8d04b7","resourceVersion":"2100","creationTimestamp":"2024-03-18T13:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T13_13_23_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T13:13:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4067 chars]
	I0318 13:14:02.614734    2404 node_ready.go:53] node "multinode-894400-m02" has status "Ready":"False"
	I0318 13:14:03.108846    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400-m02
	I0318 13:14:03.108846    2404 round_trippers.go:469] Request Headers:
	I0318 13:14:03.108846    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:14:03.108846    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:14:03.113783    2404 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:14:03.113783    2404 round_trippers.go:577] Response Headers:
	I0318 13:14:03.113783    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:14:03.113783    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:14:03.113783    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:14:03.113783    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:14:03 GMT
	I0318 13:14:03.113783    2404 round_trippers.go:580]     Audit-Id: 0055b9d6-514a-437a-a048-a14638a6fdb4
	I0318 13:14:03.113783    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:14:03.114771    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"1487daaa-7c0a-4f25-84c8-3e409d8d04b7","resourceVersion":"2100","creationTimestamp":"2024-03-18T13:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T13_13_23_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T13:13:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4067 chars]
	I0318 13:14:03.622716    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400-m02
	I0318 13:14:03.622716    2404 round_trippers.go:469] Request Headers:
	I0318 13:14:03.622716    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:14:03.622716    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:14:03.627535    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:14:03.627535    2404 round_trippers.go:577] Response Headers:
	I0318 13:14:03.627535    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:14:03.627626    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:14:03.627626    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:14:03.627626    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:14:03.627626    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:14:03 GMT
	I0318 13:14:03.627626    2404 round_trippers.go:580]     Audit-Id: 33342e5b-8ec6-4adf-ac6e-d2a0d9209059
	I0318 13:14:03.627727    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"1487daaa-7c0a-4f25-84c8-3e409d8d04b7","resourceVersion":"2100","creationTimestamp":"2024-03-18T13:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T13_13_23_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T13:13:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4067 chars]
	I0318 13:14:04.112978    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400-m02
	I0318 13:14:04.112978    2404 round_trippers.go:469] Request Headers:
	I0318 13:14:04.112978    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:14:04.112978    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:14:04.116371    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:14:04.116371    2404 round_trippers.go:577] Response Headers:
	I0318 13:14:04.117184    2404 round_trippers.go:580]     Audit-Id: ccbb5b25-f798-4f56-bcfc-bd9e9202ca01
	I0318 13:14:04.117184    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:14:04.117184    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:14:04.117184    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:14:04.117184    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:14:04.117184    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:14:04 GMT
	I0318 13:14:04.117257    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"1487daaa-7c0a-4f25-84c8-3e409d8d04b7","resourceVersion":"2100","creationTimestamp":"2024-03-18T13:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T13_13_23_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T13:13:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4067 chars]
	I0318 13:14:04.611132    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400-m02
	I0318 13:14:04.611257    2404 round_trippers.go:469] Request Headers:
	I0318 13:14:04.611257    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:14:04.611257    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:14:04.615492    2404 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:14:04.615492    2404 round_trippers.go:577] Response Headers:
	I0318 13:14:04.615492    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:14:04.615492    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:14:04.615492    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:14:04.615492    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:14:04 GMT
	I0318 13:14:04.615492    2404 round_trippers.go:580]     Audit-Id: ac261459-9275-4046-9599-0b14817ccf1b
	I0318 13:14:04.615492    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:14:04.615492    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"1487daaa-7c0a-4f25-84c8-3e409d8d04b7","resourceVersion":"2100","creationTimestamp":"2024-03-18T13:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T13_13_23_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T13:13:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4067 chars]
	I0318 13:14:04.615492    2404 node_ready.go:53] node "multinode-894400-m02" has status "Ready":"False"
	I0318 13:14:05.111537    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400-m02
	I0318 13:14:05.111537    2404 round_trippers.go:469] Request Headers:
	I0318 13:14:05.111671    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:14:05.111671    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:14:05.116346    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:14:05.116346    2404 round_trippers.go:577] Response Headers:
	I0318 13:14:05.116414    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:14:05.116414    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:14:05.116414    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:14:05 GMT
	I0318 13:14:05.116414    2404 round_trippers.go:580]     Audit-Id: 7365457d-4e0f-4b04-af0c-06789fd168df
	I0318 13:14:05.116414    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:14:05.116414    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:14:05.116577    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"1487daaa-7c0a-4f25-84c8-3e409d8d04b7","resourceVersion":"2100","creationTimestamp":"2024-03-18T13:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T13_13_23_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T13:13:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4067 chars]
	I0318 13:14:05.609299    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400-m02
	I0318 13:14:05.609299    2404 round_trippers.go:469] Request Headers:
	I0318 13:14:05.609299    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:14:05.609299    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:14:05.613082    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:14:05.613602    2404 round_trippers.go:577] Response Headers:
	I0318 13:14:05.613602    2404 round_trippers.go:580]     Audit-Id: 10ffed27-6b85-4f94-8c2c-936af2228599
	I0318 13:14:05.613602    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:14:05.613602    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:14:05.613602    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:14:05.613602    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:14:05.613602    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:14:05 GMT
	I0318 13:14:05.613884    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"1487daaa-7c0a-4f25-84c8-3e409d8d04b7","resourceVersion":"2100","creationTimestamp":"2024-03-18T13:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T13_13_23_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T13:13:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4067 chars]
	I0318 13:14:06.112506    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400-m02
	I0318 13:14:06.112599    2404 round_trippers.go:469] Request Headers:
	I0318 13:14:06.112599    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:14:06.112599    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:14:06.117098    2404 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:14:06.117815    2404 round_trippers.go:577] Response Headers:
	I0318 13:14:06.117815    2404 round_trippers.go:580]     Audit-Id: d32e0315-2298-4106-8b32-6af0adbdf275
	I0318 13:14:06.117815    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:14:06.117815    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:14:06.117815    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:14:06.117815    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:14:06.117815    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:14:06 GMT
	I0318 13:14:06.117976    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"1487daaa-7c0a-4f25-84c8-3e409d8d04b7","resourceVersion":"2100","creationTimestamp":"2024-03-18T13:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T13_13_23_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T13:13:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4067 chars]
	I0318 13:14:06.614978    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400-m02
	I0318 13:14:06.615043    2404 round_trippers.go:469] Request Headers:
	I0318 13:14:06.615102    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:14:06.615102    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:14:06.619215    2404 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:14:06.619215    2404 round_trippers.go:577] Response Headers:
	I0318 13:14:06.619215    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:14:06.619215    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:14:06 GMT
	I0318 13:14:06.619215    2404 round_trippers.go:580]     Audit-Id: acefe811-9ba9-465c-b424-9f589c7cdf27
	I0318 13:14:06.619215    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:14:06.619215    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:14:06.619215    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:14:06.619215    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"1487daaa-7c0a-4f25-84c8-3e409d8d04b7","resourceVersion":"2100","creationTimestamp":"2024-03-18T13:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T13_13_23_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T13:13:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4067 chars]
	I0318 13:14:06.619974    2404 node_ready.go:53] node "multinode-894400-m02" has status "Ready":"False"
	I0318 13:14:07.116365    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400-m02
	I0318 13:14:07.116365    2404 round_trippers.go:469] Request Headers:
	I0318 13:14:07.116365    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:14:07.116365    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:14:07.121778    2404 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 13:14:07.122509    2404 round_trippers.go:577] Response Headers:
	I0318 13:14:07.122592    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:14:07.122592    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:14:07.122592    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:14:07.122592    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:14:07.122592    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:14:07 GMT
	I0318 13:14:07.122592    2404 round_trippers.go:580]     Audit-Id: 5aa56302-b13e-438a-9f4f-f0d3ad1f44a4
	I0318 13:14:07.122699    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"1487daaa-7c0a-4f25-84c8-3e409d8d04b7","resourceVersion":"2100","creationTimestamp":"2024-03-18T13:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T13_13_23_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T13:13:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4067 chars]
	I0318 13:14:07.616754    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400-m02
	I0318 13:14:07.616754    2404 round_trippers.go:469] Request Headers:
	I0318 13:14:07.616857    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:14:07.616857    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:14:07.621134    2404 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:14:07.621134    2404 round_trippers.go:577] Response Headers:
	I0318 13:14:07.621134    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:14:07.621134    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:14:07.621134    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:14:07 GMT
	I0318 13:14:07.621134    2404 round_trippers.go:580]     Audit-Id: 2423a904-d47b-4c09-9a80-378d610a77d0
	I0318 13:14:07.621134    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:14:07.621134    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:14:07.621134    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"1487daaa-7c0a-4f25-84c8-3e409d8d04b7","resourceVersion":"2100","creationTimestamp":"2024-03-18T13:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T13_13_23_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T13:13:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4067 chars]
	I0318 13:14:08.115356    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400-m02
	I0318 13:14:08.115356    2404 round_trippers.go:469] Request Headers:
	I0318 13:14:08.115356    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:14:08.115356    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:14:08.119951    2404 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:14:08.119951    2404 round_trippers.go:577] Response Headers:
	I0318 13:14:08.120034    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:14:08.120034    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:14:08.120034    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:14:08.120034    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:14:08 GMT
	I0318 13:14:08.120034    2404 round_trippers.go:580]     Audit-Id: cff39d6e-951f-4292-ad92-60bda0598910
	I0318 13:14:08.120034    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:14:08.120034    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"1487daaa-7c0a-4f25-84c8-3e409d8d04b7","resourceVersion":"2100","creationTimestamp":"2024-03-18T13:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T13_13_23_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T13:13:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4067 chars]
	I0318 13:14:08.616002    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400-m02
	I0318 13:14:08.616002    2404 round_trippers.go:469] Request Headers:
	I0318 13:14:08.616105    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:14:08.616105    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:14:08.619814    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:14:08.619814    2404 round_trippers.go:577] Response Headers:
	I0318 13:14:08.619814    2404 round_trippers.go:580]     Audit-Id: fe5967e2-babd-4bfa-a9e2-6830630ad927
	I0318 13:14:08.619814    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:14:08.619814    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:14:08.619814    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:14:08.619814    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:14:08.619814    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:14:08 GMT
	I0318 13:14:08.620638    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"1487daaa-7c0a-4f25-84c8-3e409d8d04b7","resourceVersion":"2100","creationTimestamp":"2024-03-18T13:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T13_13_23_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T13:13:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4067 chars]
	I0318 13:14:08.620953    2404 node_ready.go:53] node "multinode-894400-m02" has status "Ready":"False"
	I0318 13:14:09.118296    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400-m02
	I0318 13:14:09.118296    2404 round_trippers.go:469] Request Headers:
	I0318 13:14:09.118296    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:14:09.118432    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:14:09.123888    2404 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 13:14:09.123888    2404 round_trippers.go:577] Response Headers:
	I0318 13:14:09.123888    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:14:09.123888    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:14:09 GMT
	I0318 13:14:09.123888    2404 round_trippers.go:580]     Audit-Id: 89aecdff-983f-459e-9d25-7681ef78506e
	I0318 13:14:09.123888    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:14:09.123888    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:14:09.123888    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:14:09.124864    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"1487daaa-7c0a-4f25-84c8-3e409d8d04b7","resourceVersion":"2100","creationTimestamp":"2024-03-18T13:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T13_13_23_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T13:13:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4067 chars]
	I0318 13:14:09.617498    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400-m02
	I0318 13:14:09.617578    2404 round_trippers.go:469] Request Headers:
	I0318 13:14:09.617578    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:14:09.617578    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:14:09.621060    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:14:09.621528    2404 round_trippers.go:577] Response Headers:
	I0318 13:14:09.621590    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:14:09.621590    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:14:09 GMT
	I0318 13:14:09.621590    2404 round_trippers.go:580]     Audit-Id: 85ce4e06-af1e-47d0-b724-5145eb22490b
	I0318 13:14:09.621590    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:14:09.621590    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:14:09.621590    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:14:09.621829    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"1487daaa-7c0a-4f25-84c8-3e409d8d04b7","resourceVersion":"2100","creationTimestamp":"2024-03-18T13:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T13_13_23_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T13:13:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4067 chars]
	I0318 13:14:10.118140    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400-m02
	I0318 13:14:10.118213    2404 round_trippers.go:469] Request Headers:
	I0318 13:14:10.118213    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:14:10.118213    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:14:10.121065    2404 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 13:14:10.122149    2404 round_trippers.go:577] Response Headers:
	I0318 13:14:10.122149    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:14:10 GMT
	I0318 13:14:10.122149    2404 round_trippers.go:580]     Audit-Id: a63c5583-c706-4a4f-8c15-b7d39e591b41
	I0318 13:14:10.122149    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:14:10.122149    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:14:10.122149    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:14:10.122149    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:14:10.122149    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"1487daaa-7c0a-4f25-84c8-3e409d8d04b7","resourceVersion":"2100","creationTimestamp":"2024-03-18T13:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T13_13_23_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T13:13:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4067 chars]
	I0318 13:14:10.615703    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400-m02
	I0318 13:14:10.615703    2404 round_trippers.go:469] Request Headers:
	I0318 13:14:10.615703    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:14:10.615703    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:14:10.620884    2404 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 13:14:10.621215    2404 round_trippers.go:577] Response Headers:
	I0318 13:14:10.621215    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:14:10.621215    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:14:10.621215    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:14:10.621215    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:14:10.621215    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:14:10 GMT
	I0318 13:14:10.621215    2404 round_trippers.go:580]     Audit-Id: 660719e6-9825-42ec-9a95-ac1372bcdbe3
	I0318 13:14:10.622041    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"1487daaa-7c0a-4f25-84c8-3e409d8d04b7","resourceVersion":"2100","creationTimestamp":"2024-03-18T13:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T13_13_23_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T13:13:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4067 chars]
	I0318 13:14:10.622041    2404 node_ready.go:53] node "multinode-894400-m02" has status "Ready":"False"
	I0318 13:14:11.115049    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400-m02
	I0318 13:14:11.115049    2404 round_trippers.go:469] Request Headers:
	I0318 13:14:11.115049    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:14:11.115049    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:14:11.118694    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:14:11.119683    2404 round_trippers.go:577] Response Headers:
	I0318 13:14:11.119703    2404 round_trippers.go:580]     Audit-Id: 5721efc7-a2b1-47ab-818d-532c411fe139
	I0318 13:14:11.119703    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:14:11.119703    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:14:11.119703    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:14:11.119703    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:14:11.119703    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:14:11 GMT
	I0318 13:14:11.119865    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"1487daaa-7c0a-4f25-84c8-3e409d8d04b7","resourceVersion":"2100","creationTimestamp":"2024-03-18T13:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T13_13_23_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T13:13:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4067 chars]
	I0318 13:14:11.616455    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400-m02
	I0318 13:14:11.616455    2404 round_trippers.go:469] Request Headers:
	I0318 13:14:11.616455    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:14:11.616455    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:14:11.625972    2404 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0318 13:14:11.625972    2404 round_trippers.go:577] Response Headers:
	I0318 13:14:11.625972    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:14:11.625972    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:14:11.625972    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:14:11.625972    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:14:11 GMT
	I0318 13:14:11.625972    2404 round_trippers.go:580]     Audit-Id: e8be0c01-5a88-4482-900f-b5ddcb063c2c
	I0318 13:14:11.625972    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:14:11.626948    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"1487daaa-7c0a-4f25-84c8-3e409d8d04b7","resourceVersion":"2152","creationTimestamp":"2024-03-18T13:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T13_13_23_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T13:13:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 3933 chars]
	I0318 13:14:11.626948    2404 node_ready.go:49] node "multinode-894400-m02" has status "Ready":"True"
	I0318 13:14:11.626948    2404 node_ready.go:38] duration metric: took 47.5187391s for node "multinode-894400-m02" to be "Ready" ...
	I0318 13:14:11.626948    2404 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 13:14:11.626948    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/namespaces/kube-system/pods
	I0318 13:14:11.626948    2404 round_trippers.go:469] Request Headers:
	I0318 13:14:11.626948    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:14:11.626948    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:14:11.637960    2404 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0318 13:14:11.637960    2404 round_trippers.go:577] Response Headers:
	I0318 13:14:11.637960    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:14:11.637960    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:14:11.637960    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:14:11.637960    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:14:11 GMT
	I0318 13:14:11.637960    2404 round_trippers.go:580]     Audit-Id: 40322691-ad10-4ef3-8af2-011491952fd4
	I0318 13:14:11.637960    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:14:11.640888    2404 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"2152"},"items":[{"metadata":{"name":"coredns-5dd5756b68-456tm","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"1a018c55-846b-4dc2-992c-dc8fd82a6c67","resourceVersion":"1918","creationTimestamp":"2024-03-18T12:47:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"24e9a67e-3fda-47e7-8dcb-794b1920d0f1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:47:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"24e9a67e-3fda-47e7-8dcb-794b1920d0f1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 82613 chars]
	I0318 13:14:11.644284    2404 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-456tm" in "kube-system" namespace to be "Ready" ...
	I0318 13:14:11.644284    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-456tm
	I0318 13:14:11.644284    2404 round_trippers.go:469] Request Headers:
	I0318 13:14:11.644284    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:14:11.644284    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:14:11.648934    2404 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:14:11.648934    2404 round_trippers.go:577] Response Headers:
	I0318 13:14:11.649784    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:14:11.649784    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:14:11 GMT
	I0318 13:14:11.649784    2404 round_trippers.go:580]     Audit-Id: 93b5f16f-e9dd-4dee-9595-8a4101fcf39b
	I0318 13:14:11.649784    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:14:11.649784    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:14:11.649784    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:14:11.650145    2404 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-456tm","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"1a018c55-846b-4dc2-992c-dc8fd82a6c67","resourceVersion":"1918","creationTimestamp":"2024-03-18T12:47:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"24e9a67e-3fda-47e7-8dcb-794b1920d0f1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:47:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"24e9a67e-3fda-47e7-8dcb-794b1920d0f1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6494 chars]
	I0318 13:14:11.650702    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:14:11.650751    2404 round_trippers.go:469] Request Headers:
	I0318 13:14:11.650751    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:14:11.650823    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:14:11.652975    2404 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 13:14:11.652975    2404 round_trippers.go:577] Response Headers:
	I0318 13:14:11.652975    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:14:11.652975    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:14:11.652975    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:14:11.652975    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:14:11 GMT
	I0318 13:14:11.652975    2404 round_trippers.go:580]     Audit-Id: f4a52d1f-24ad-42cd-b7da-81450a9b7b10
	I0318 13:14:11.652975    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:14:11.654024    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1877","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 13:14:11.654024    2404 pod_ready.go:92] pod "coredns-5dd5756b68-456tm" in "kube-system" namespace has status "Ready":"True"
	I0318 13:14:11.654024    2404 pod_ready.go:81] duration metric: took 9.7401ms for pod "coredns-5dd5756b68-456tm" in "kube-system" namespace to be "Ready" ...
	I0318 13:14:11.654024    2404 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-894400" in "kube-system" namespace to be "Ready" ...
	I0318 13:14:11.654024    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-894400
	I0318 13:14:11.654024    2404 round_trippers.go:469] Request Headers:
	I0318 13:14:11.654024    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:14:11.654024    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:14:11.658984    2404 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:14:11.658984    2404 round_trippers.go:577] Response Headers:
	I0318 13:14:11.659679    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:14:11.659704    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:14:11.659704    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:14:11.659704    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:14:11.659704    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:14:11 GMT
	I0318 13:14:11.659704    2404 round_trippers.go:580]     Audit-Id: d7f08ab6-973b-40d6-b2bd-258c37ded939
	I0318 13:14:11.659920    2404 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-894400","namespace":"kube-system","uid":"d4c040b9-a604-4a0d-80ee-7436541af60c","resourceVersion":"1841","creationTimestamp":"2024-03-18T13:09:49Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.30.130.156:2379","kubernetes.io/config.hash":"743a549b698f93b8586a236f83c90556","kubernetes.io/config.mirror":"743a549b698f93b8586a236f83c90556","kubernetes.io/config.seen":"2024-03-18T13:09:42.924670260Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T13:09:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise
-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 5873 chars]
	I0318 13:14:11.660524    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:14:11.660587    2404 round_trippers.go:469] Request Headers:
	I0318 13:14:11.660587    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:14:11.660587    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:14:11.665205    2404 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:14:11.665263    2404 round_trippers.go:577] Response Headers:
	I0318 13:14:11.665326    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:14:11.665326    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:14:11.665326    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:14:11.665326    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:14:11.665326    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:14:11 GMT
	I0318 13:14:11.665378    2404 round_trippers.go:580]     Audit-Id: 7ffe996e-b31e-438f-8816-42078c3ee8d3
	I0318 13:14:11.665888    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1877","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 13:14:11.666287    2404 pod_ready.go:92] pod "etcd-multinode-894400" in "kube-system" namespace has status "Ready":"True"
	I0318 13:14:11.666375    2404 pod_ready.go:81] duration metric: took 12.3506ms for pod "etcd-multinode-894400" in "kube-system" namespace to be "Ready" ...
	I0318 13:14:11.666375    2404 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-894400" in "kube-system" namespace to be "Ready" ...
	I0318 13:14:11.666494    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-894400
	I0318 13:14:11.666494    2404 round_trippers.go:469] Request Headers:
	I0318 13:14:11.666494    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:14:11.666494    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:14:11.674116    2404 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0318 13:14:11.674958    2404 round_trippers.go:577] Response Headers:
	I0318 13:14:11.674958    2404 round_trippers.go:580]     Audit-Id: 0a850913-bf78-407b-af9a-fa7f430ec082
	I0318 13:14:11.674958    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:14:11.674958    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:14:11.674958    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:14:11.674958    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:14:11.674958    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:14:11 GMT
	I0318 13:14:11.675219    2404 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-894400","namespace":"kube-system","uid":"46152b8e-0bda-427e-a1ad-c79506b56763","resourceVersion":"1812","creationTimestamp":"2024-03-18T13:09:49Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.30.130.156:8443","kubernetes.io/config.hash":"6096c2227c4230453f65f86ebdcd0d95","kubernetes.io/config.mirror":"6096c2227c4230453f65f86ebdcd0d95","kubernetes.io/config.seen":"2024-03-18T13:09:42.869643374Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T13:09:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.k
ubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernete [truncated 7409 chars]
	I0318 13:14:11.675219    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:14:11.675219    2404 round_trippers.go:469] Request Headers:
	I0318 13:14:11.675219    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:14:11.675219    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:14:11.678244    2404 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 13:14:11.678244    2404 round_trippers.go:577] Response Headers:
	I0318 13:14:11.678244    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:14:11.678244    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:14:11.678244    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:14:11.678321    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:14:11.678321    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:14:11 GMT
	I0318 13:14:11.678321    2404 round_trippers.go:580]     Audit-Id: afabb18c-3ffb-485a-984d-847c1ede82da
	I0318 13:14:11.678426    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1877","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 13:14:11.678822    2404 pod_ready.go:92] pod "kube-apiserver-multinode-894400" in "kube-system" namespace has status "Ready":"True"
	I0318 13:14:11.678893    2404 pod_ready.go:81] duration metric: took 12.5177ms for pod "kube-apiserver-multinode-894400" in "kube-system" namespace to be "Ready" ...
	I0318 13:14:11.678893    2404 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-894400" in "kube-system" namespace to be "Ready" ...
	I0318 13:14:11.678964    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-894400
	I0318 13:14:11.679029    2404 round_trippers.go:469] Request Headers:
	I0318 13:14:11.679029    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:14:11.679029    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:14:11.682114    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:14:11.682114    2404 round_trippers.go:577] Response Headers:
	I0318 13:14:11.682114    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:14:11.682114    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:14:11.682114    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:14:11.682114    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:14:11 GMT
	I0318 13:14:11.682114    2404 round_trippers.go:580]     Audit-Id: 29717c2b-8183-46a4-aa4b-ad5a916adccd
	I0318 13:14:11.682114    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:14:11.683105    2404 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-894400","namespace":"kube-system","uid":"4ad5fc15-53ba-4ebb-9a63-b8572cd9c834","resourceVersion":"1813","creationTimestamp":"2024-03-18T12:47:26Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"d340aced56ba169ecac1e3ac58ad57fe","kubernetes.io/config.mirror":"d340aced56ba169ecac1e3ac58ad57fe","kubernetes.io/config.seen":"2024-03-18T12:47:20.228444892Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:47:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7179 chars]
	I0318 13:14:11.683105    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:14:11.683105    2404 round_trippers.go:469] Request Headers:
	I0318 13:14:11.683105    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:14:11.683105    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:14:11.687124    2404 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:14:11.687124    2404 round_trippers.go:577] Response Headers:
	I0318 13:14:11.687124    2404 round_trippers.go:580]     Audit-Id: 2cb449ad-f368-4ed0-b70a-3b6aeda7e800
	I0318 13:14:11.687124    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:14:11.687124    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:14:11.687124    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:14:11.687124    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:14:11.687124    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:14:11 GMT
	I0318 13:14:11.687124    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1877","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 13:14:11.687124    2404 pod_ready.go:92] pod "kube-controller-manager-multinode-894400" in "kube-system" namespace has status "Ready":"True"
	I0318 13:14:11.687124    2404 pod_ready.go:81] duration metric: took 8.2309ms for pod "kube-controller-manager-multinode-894400" in "kube-system" namespace to be "Ready" ...
	I0318 13:14:11.688127    2404 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-745w9" in "kube-system" namespace to be "Ready" ...
	I0318 13:14:11.819536    2404 request.go:629] Waited for 131.4083ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.130.156:8443/api/v1/namespaces/kube-system/pods/kube-proxy-745w9
	I0318 13:14:11.820005    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/namespaces/kube-system/pods/kube-proxy-745w9
	I0318 13:14:11.820079    2404 round_trippers.go:469] Request Headers:
	I0318 13:14:11.820079    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:14:11.820079    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:14:11.822351    2404 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 13:14:11.823314    2404 round_trippers.go:577] Response Headers:
	I0318 13:14:11.823314    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:14:11.823314    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:14:11.823314    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:14:11.823314    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:14:11 GMT
	I0318 13:14:11.823314    2404 round_trippers.go:580]     Audit-Id: d079bbef-e86c-48a2-b5f8-75a6bc5e0870
	I0318 13:14:11.823314    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:14:11.823615    2404 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-745w9","generateName":"kube-proxy-","namespace":"kube-system","uid":"d385fe06-f516-440d-b9ed-37c2d4a81050","resourceVersion":"1698","creationTimestamp":"2024-03-18T12:55:05Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"da714af0-88aa-4fc4-b73d-4de837f06114","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:55:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"da714af0-88aa-4fc4-b73d-4de837f06114\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5771 chars]
	I0318 13:14:12.024607    2404 request.go:629] Waited for 200.3709ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.130.156:8443/api/v1/nodes/multinode-894400-m03
	I0318 13:14:12.024607    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400-m03
	I0318 13:14:12.024866    2404 round_trippers.go:469] Request Headers:
	I0318 13:14:12.024866    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:14:12.024866    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:14:12.027062    2404 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 13:14:12.028143    2404 round_trippers.go:577] Response Headers:
	I0318 13:14:12.028143    2404 round_trippers.go:580]     Audit-Id: f9ed5858-2d70-497f-afff-f805ba926149
	I0318 13:14:12.028143    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:14:12.028143    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:14:12.028143    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:14:12.028143    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:14:12.028143    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:14:12 GMT
	I0318 13:14:12.028143    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m03","uid":"1f8e594e-d4cc-4247-8064-01ac67ea2b15","resourceVersion":"1855","creationTimestamp":"2024-03-18T13:05:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T13_05_26_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T13:05:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4400 chars]
	I0318 13:14:12.028821    2404 pod_ready.go:97] node "multinode-894400-m03" hosting pod "kube-proxy-745w9" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-894400-m03" has status "Ready":"Unknown"
	I0318 13:14:12.028947    2404 pod_ready.go:81] duration metric: took 340.8181ms for pod "kube-proxy-745w9" in "kube-system" namespace to be "Ready" ...
	E0318 13:14:12.028947    2404 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-894400-m03" hosting pod "kube-proxy-745w9" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-894400-m03" has status "Ready":"Unknown"
	I0318 13:14:12.029001    2404 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-8bdmn" in "kube-system" namespace to be "Ready" ...
	I0318 13:14:12.228270    2404 request.go:629] Waited for 199.2674ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.130.156:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8bdmn
	I0318 13:14:12.228477    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8bdmn
	I0318 13:14:12.228477    2404 round_trippers.go:469] Request Headers:
	I0318 13:14:12.228477    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:14:12.228477    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:14:12.232048    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:14:12.232292    2404 round_trippers.go:577] Response Headers:
	I0318 13:14:12.232292    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:14:12.232292    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:14:12.232292    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:14:12.232292    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:14:12 GMT
	I0318 13:14:12.232292    2404 round_trippers.go:580]     Audit-Id: 20faeed7-e8f9-4e56-8283-f9b496110406
	I0318 13:14:12.232292    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:14:12.232896    2404 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-8bdmn","generateName":"kube-proxy-","namespace":"kube-system","uid":"5c266b8a-9665-4365-93c6-2b5f1699d3ef","resourceVersion":"2116","creationTimestamp":"2024-03-18T12:50:34Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"da714af0-88aa-4fc4-b73d-4de837f06114","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:50:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"da714af0-88aa-4fc4-b73d-4de837f06114\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5546 chars]
	I0318 13:14:12.431342    2404 request.go:629] Waited for 197.7133ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.130.156:8443/api/v1/nodes/multinode-894400-m02
	I0318 13:14:12.431577    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400-m02
	I0318 13:14:12.431577    2404 round_trippers.go:469] Request Headers:
	I0318 13:14:12.431577    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:14:12.431577    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:14:12.435315    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:14:12.435496    2404 round_trippers.go:577] Response Headers:
	I0318 13:14:12.435496    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:14:12.435496    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:14:12 GMT
	I0318 13:14:12.435496    2404 round_trippers.go:580]     Audit-Id: eb7e4353-0433-4403-9feb-ed2fbf34f6ab
	I0318 13:14:12.435496    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:14:12.435496    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:14:12.435496    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:14:12.435787    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400-m02","uid":"1487daaa-7c0a-4f25-84c8-3e409d8d04b7","resourceVersion":"2155","creationTimestamp":"2024-03-18T13:13:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_18T13_13_23_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T13:13:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 3813 chars]
	I0318 13:14:12.435862    2404 pod_ready.go:92] pod "kube-proxy-8bdmn" in "kube-system" namespace has status "Ready":"True"
	I0318 13:14:12.435862    2404 pod_ready.go:81] duration metric: took 406.8577ms for pod "kube-proxy-8bdmn" in "kube-system" namespace to be "Ready" ...
	I0318 13:14:12.435862    2404 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-mc5tv" in "kube-system" namespace to be "Ready" ...
	I0318 13:14:12.619046    2404 request.go:629] Waited for 182.9274ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.130.156:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mc5tv
	I0318 13:14:12.619248    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mc5tv
	I0318 13:14:12.619285    2404 round_trippers.go:469] Request Headers:
	I0318 13:14:12.619308    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:14:12.619308    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:14:12.622929    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:14:12.623808    2404 round_trippers.go:577] Response Headers:
	I0318 13:14:12.623808    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:14:12.623808    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:14:12.623808    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:14:12 GMT
	I0318 13:14:12.623808    2404 round_trippers.go:580]     Audit-Id: fa7c5125-bd6b-4d47-9c1f-80fd49c7b7cc
	I0318 13:14:12.623808    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:14:12.623808    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:14:12.623808    2404 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-mc5tv","generateName":"kube-proxy-","namespace":"kube-system","uid":"0afe25f8-cbd6-412b-8698-7b547d1d49ca","resourceVersion":"1799","creationTimestamp":"2024-03-18T12:47:41Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"da714af0-88aa-4fc4-b73d-4de837f06114","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:47:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"da714af0-88aa-4fc4-b73d-4de837f06114\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5743 chars]
	I0318 13:14:12.822087    2404 request.go:629] Waited for 197.0373ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:14:12.822354    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:14:12.822354    2404 round_trippers.go:469] Request Headers:
	I0318 13:14:12.822354    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:14:12.822354    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:14:12.826458    2404 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 13:14:12.826458    2404 round_trippers.go:577] Response Headers:
	I0318 13:14:12.826458    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:14:12.827068    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:14:12.827068    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:14:12.827068    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:14:12 GMT
	I0318 13:14:12.827068    2404 round_trippers.go:580]     Audit-Id: f443ba05-744f-46c0-a46e-1fe1733e628a
	I0318 13:14:12.827068    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:14:12.827546    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1877","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 13:14:12.827965    2404 pod_ready.go:92] pod "kube-proxy-mc5tv" in "kube-system" namespace has status "Ready":"True"
	I0318 13:14:12.827965    2404 pod_ready.go:81] duration metric: took 392.1005ms for pod "kube-proxy-mc5tv" in "kube-system" namespace to be "Ready" ...
	I0318 13:14:12.828115    2404 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-894400" in "kube-system" namespace to be "Ready" ...
	I0318 13:14:13.025603    2404 request.go:629] Waited for 197.3055ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.130.156:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-894400
	I0318 13:14:13.025788    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-894400
	I0318 13:14:13.025788    2404 round_trippers.go:469] Request Headers:
	I0318 13:14:13.025788    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:14:13.025788    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:14:13.029574    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:14:13.030436    2404 round_trippers.go:577] Response Headers:
	I0318 13:14:13.030436    2404 round_trippers.go:580]     Audit-Id: 08083a80-00eb-4108-858c-03576a1f71b8
	I0318 13:14:13.030436    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:14:13.030436    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:14:13.030436    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:14:13.030436    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:14:13.030436    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:14:13 GMT
	I0318 13:14:13.030436    2404 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-894400","namespace":"kube-system","uid":"f47703ce-5a82-466e-ac8e-ef6b8cc07e6c","resourceVersion":"1822","creationTimestamp":"2024-03-18T12:47:28Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"1c745e9b917877b1ff3c90ed02e9a79a","kubernetes.io/config.mirror":"1c745e9b917877b1ff3c90ed02e9a79a","kubernetes.io/config.seen":"2024-03-18T12:47:28.428225123Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-18T12:47:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 4909 chars]
	I0318 13:14:13.228623    2404 request.go:629] Waited for 197.2127ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:14:13.228840    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes/multinode-894400
	I0318 13:14:13.228840    2404 round_trippers.go:469] Request Headers:
	I0318 13:14:13.228930    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:14:13.228930    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:14:13.235271    2404 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0318 13:14:13.235271    2404 round_trippers.go:577] Response Headers:
	I0318 13:14:13.235271    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:14:13.235271    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:14:13.235271    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:14:13 GMT
	I0318 13:14:13.235271    2404 round_trippers.go:580]     Audit-Id: 5a4436b1-d920-490f-b509-649039179d70
	I0318 13:14:13.235271    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:14:13.235271    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:14:13.235271    2404 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1877","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-18T12:47:24Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0318 13:14:13.236151    2404 pod_ready.go:92] pod "kube-scheduler-multinode-894400" in "kube-system" namespace has status "Ready":"True"
	I0318 13:14:13.236151    2404 pod_ready.go:81] duration metric: took 408.0332ms for pod "kube-scheduler-multinode-894400" in "kube-system" namespace to be "Ready" ...
	I0318 13:14:13.236268    2404 pod_ready.go:38] duration metric: took 1.6093081s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 13:14:13.236268    2404 system_svc.go:44] waiting for kubelet service to be running ....
	I0318 13:14:13.248610    2404 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 13:14:13.275039    2404 system_svc.go:56] duration metric: took 38.7704ms WaitForService to wait for kubelet
	I0318 13:14:13.275039    2404 kubeadm.go:576] duration metric: took 49.4839183s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 13:14:13.275039    2404 node_conditions.go:102] verifying NodePressure condition ...
	I0318 13:14:13.429371    2404 request.go:629] Waited for 154.1021ms due to client-side throttling, not priority and fairness, request: GET:https://172.30.130.156:8443/api/v1/nodes
	I0318 13:14:13.429537    2404 round_trippers.go:463] GET https://172.30.130.156:8443/api/v1/nodes
	I0318 13:14:13.429537    2404 round_trippers.go:469] Request Headers:
	I0318 13:14:13.429537    2404 round_trippers.go:473]     Accept: application/json, */*
	I0318 13:14:13.429537    2404 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0318 13:14:13.433336    2404 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 13:14:13.433336    2404 round_trippers.go:577] Response Headers:
	I0318 13:14:13.433813    2404 round_trippers.go:580]     Date: Mon, 18 Mar 2024 13:14:13 GMT
	I0318 13:14:13.433813    2404 round_trippers.go:580]     Audit-Id: b2ae4003-8149-4408-8821-50283f2c82f2
	I0318 13:14:13.433813    2404 round_trippers.go:580]     Cache-Control: no-cache, private
	I0318 13:14:13.433813    2404 round_trippers.go:580]     Content-Type: application/json
	I0318 13:14:13.433813    2404 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 626d116c-4b00-492f-beb9-72444e44eb4a
	I0318 13:14:13.433813    2404 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3100ec4a-98d5-4254-8f19-dfbc29b40baf
	I0318 13:14:13.434364    2404 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"2158"},"items":[{"metadata":{"name":"multinode-894400","uid":"6d2d62eb-007c-4f8e-8361-bb41e5453afd","resourceVersion":"1877","creationTimestamp":"2024-03-18T12:47:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-894400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"16bdcbec856cf730004e5bed78d1b7625f13388a","minikube.k8s.io/name":"multinode-894400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_18T12_47_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFi
elds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 15489 chars]
	I0318 13:14:13.435243    2404 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0318 13:14:13.435320    2404 node_conditions.go:123] node cpu capacity is 2
	I0318 13:14:13.435320    2404 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0318 13:14:13.435320    2404 node_conditions.go:123] node cpu capacity is 2
	I0318 13:14:13.435320    2404 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0318 13:14:13.435320    2404 node_conditions.go:123] node cpu capacity is 2
	I0318 13:14:13.435320    2404 node_conditions.go:105] duration metric: took 160.28ms to run NodePressure ...
	I0318 13:14:13.435320    2404 start.go:240] waiting for startup goroutines ...
	I0318 13:14:13.435426    2404 start.go:254] writing updated cluster config ...
	I0318 13:14:13.439615    2404 out.go:177] 
	I0318 13:14:13.442619    2404 config.go:182] Loaded profile config "ha-747000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 13:14:13.453409    2404 config.go:182] Loaded profile config "multinode-894400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 13:14:13.453409    2404 profile.go:142] Saving config to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-894400\config.json ...
	I0318 13:14:13.458870    2404 out.go:177] * Starting "multinode-894400-m03" worker node in "multinode-894400" cluster
	I0318 13:14:13.462497    2404 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0318 13:14:13.462497    2404 cache.go:56] Caching tarball of preloaded images
	I0318 13:14:13.462497    2404 preload.go:173] Found C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0318 13:14:13.462497    2404 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0318 13:14:13.463889    2404 profile.go:142] Saving config to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-894400\config.json ...
	I0318 13:14:13.469212    2404 start.go:360] acquireMachinesLock for multinode-894400-m03: {Name:mk88ace50ad3bf72786f3a589a5328076247f3a1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 13:14:13.469466    2404 start.go:364] duration metric: took 253.5µs to acquireMachinesLock for "multinode-894400-m03"
	I0318 13:14:13.469629    2404 start.go:96] Skipping create...Using existing machine configuration
	I0318 13:14:13.469629    2404 fix.go:54] fixHost starting: m03
	I0318 13:14:13.469629    2404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-894400-m03 ).state
	I0318 13:14:15.495083    2404 main.go:141] libmachine: [stdout =====>] : Off
	
	I0318 13:14:15.495083    2404 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:14:15.495866    2404 fix.go:112] recreateIfNeeded on multinode-894400-m03: state=Stopped err=<nil>
	W0318 13:14:15.495866    2404 fix.go:138] unexpected machine state, will restart: <nil>
	I0318 13:14:15.499589    2404 out.go:177] * Restarting existing hyperv VM for "multinode-894400-m03" ...
	I0318 13:14:15.501877    2404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-894400-m03
	I0318 13:14:18.449755    2404 main.go:141] libmachine: [stdout =====>] : 
	I0318 13:14:18.449755    2404 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:14:18.449755    2404 main.go:141] libmachine: Waiting for host to start...
	I0318 13:14:18.450089    2404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-894400-m03 ).state
	I0318 13:14:20.655376    2404 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 13:14:20.655376    2404 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:14:20.655813    2404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-894400-m03 ).networkadapters[0]).ipaddresses[0]
	I0318 13:14:23.166805    2404 main.go:141] libmachine: [stdout =====>] : 
	I0318 13:14:23.167442    2404 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:14:24.181679    2404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-894400-m03 ).state
	I0318 13:14:26.301489    2404 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 13:14:26.302430    2404 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:14:26.302430    2404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-894400-m03 ).networkadapters[0]).ipaddresses[0]
	I0318 13:14:28.744343    2404 main.go:141] libmachine: [stdout =====>] : 
	I0318 13:14:28.744521    2404 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:14:29.751246    2404 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-894400-m03 ).state
	
	
	==> Docker <==
	Mar 18 13:10:57 multinode-894400 dockerd[1052]: 2024/03/18 13:10:57 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Mar 18 13:11:00 multinode-894400 dockerd[1052]: 2024/03/18 13:11:00 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Mar 18 13:11:00 multinode-894400 dockerd[1052]: 2024/03/18 13:11:00 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Mar 18 13:11:00 multinode-894400 dockerd[1052]: 2024/03/18 13:11:00 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Mar 18 13:11:00 multinode-894400 dockerd[1052]: 2024/03/18 13:11:00 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Mar 18 13:11:00 multinode-894400 dockerd[1052]: 2024/03/18 13:11:00 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Mar 18 13:11:01 multinode-894400 dockerd[1052]: 2024/03/18 13:11:01 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Mar 18 13:11:01 multinode-894400 dockerd[1052]: 2024/03/18 13:11:01 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Mar 18 13:11:01 multinode-894400 dockerd[1052]: 2024/03/18 13:11:01 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Mar 18 13:11:01 multinode-894400 dockerd[1052]: 2024/03/18 13:11:01 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Mar 18 13:11:01 multinode-894400 dockerd[1052]: 2024/03/18 13:11:01 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Mar 18 13:11:01 multinode-894400 dockerd[1052]: 2024/03/18 13:11:01 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Mar 18 13:11:01 multinode-894400 dockerd[1052]: 2024/03/18 13:11:01 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Mar 18 13:11:04 multinode-894400 dockerd[1052]: 2024/03/18 13:11:04 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Mar 18 13:11:04 multinode-894400 dockerd[1052]: 2024/03/18 13:11:04 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Mar 18 13:11:04 multinode-894400 dockerd[1052]: 2024/03/18 13:11:04 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Mar 18 13:11:04 multinode-894400 dockerd[1052]: 2024/03/18 13:11:04 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Mar 18 13:11:04 multinode-894400 dockerd[1052]: 2024/03/18 13:11:04 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Mar 18 13:11:04 multinode-894400 dockerd[1052]: 2024/03/18 13:11:04 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Mar 18 13:11:04 multinode-894400 dockerd[1052]: 2024/03/18 13:11:04 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Mar 18 13:11:04 multinode-894400 dockerd[1052]: 2024/03/18 13:11:04 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Mar 18 13:11:05 multinode-894400 dockerd[1052]: 2024/03/18 13:11:05 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Mar 18 13:11:05 multinode-894400 dockerd[1052]: 2024/03/18 13:11:05 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Mar 18 13:11:05 multinode-894400 dockerd[1052]: 2024/03/18 13:11:05 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Mar 18 13:11:05 multinode-894400 dockerd[1052]: 2024/03/18 13:11:05 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	c5d2074be239f       8c811b4aec35f                                                                                         4 minutes ago       Running             busybox                   1                   e20878b8092c2       busybox-5b5d89c9d6-c2997
	3c3bc988c74cd       ead0a4a53df89                                                                                         4 minutes ago       Running             coredns                   1                   97583cc14f115       coredns-5dd5756b68-456tm
	eadcf41dad509       6e38f40d628db                                                                                         4 minutes ago       Running             storage-provisioner       2                   41035eff3b7db       storage-provisioner
	c8e5ec25e910e       4950bb10b3f87                                                                                         5 minutes ago       Running             kindnet-cni               1                   86d74dec812cf       kindnet-hhsxh
	46c0cf90d385f       6e38f40d628db                                                                                         5 minutes ago       Exited              storage-provisioner       1                   41035eff3b7db       storage-provisioner
	163ccabc3882a       83f6cc407eed8                                                                                         5 minutes ago       Running             kube-proxy                1                   a9f21749669fe       kube-proxy-mc5tv
	5f0887d1e6913       73deb9a3f7025                                                                                         5 minutes ago       Running             etcd                      0                   354f3c44a34fc       etcd-multinode-894400
	66ee8be9fada7       e3db313c6dbc0                                                                                         5 minutes ago       Running             kube-scheduler            1                   6fb3325d3c100       kube-scheduler-multinode-894400
	fc4430c7fa204       7fe0e6f37db33                                                                                         5 minutes ago       Running             kube-apiserver            0                   bc7236a19957e       kube-apiserver-multinode-894400
	4ad6784a187d6       d058aa5ab969c                                                                                         5 minutes ago       Running             kube-controller-manager   1                   066206d4c52cb       kube-controller-manager-multinode-894400
	dd031b5cb1e85       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   23 minutes ago      Exited              busybox                   0                   a23c1189be7c3       busybox-5b5d89c9d6-c2997
	693a64f7472fd       ead0a4a53df89                                                                                         27 minutes ago      Exited              coredns                   0                   d001e299e996b       coredns-5dd5756b68-456tm
	c4d7018ad23a7       kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988              27 minutes ago      Exited              kindnet-cni               0                   a47b1fb60692c       kindnet-hhsxh
	9335855aab63d       83f6cc407eed8                                                                                         27 minutes ago      Exited              kube-proxy                0                   60e9cd749c8f6       kube-proxy-mc5tv
	e4d42739ce0e9       e3db313c6dbc0                                                                                         27 minutes ago      Exited              kube-scheduler            0                   82710777e700c       kube-scheduler-multinode-894400
	7aa5cf4ec378e       d058aa5ab969c                                                                                         27 minutes ago      Exited              kube-controller-manager   0                   5485f509825d9       kube-controller-manager-multinode-894400
	
	
	==> coredns [3c3bc988c74c] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa2b120b3007aba3ef70c07d73473d14986404bfc5683cceeb89e8950d5ace894afca7d43365324975f99d1b2da3d6473c722fe82795d39a4ee8b4b969b7aa71
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:47251 - 801 "HINFO IN 2968659138506762197.6766024496084331989. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.051583557s
	
	
	==> coredns [693a64f7472f] <==
	[INFO] 10.244.0.3:60982 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.0000727s
	[INFO] 10.244.0.3:53685 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000081s
	[INFO] 10.244.0.3:38117 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000127701s
	[INFO] 10.244.0.3:38455 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000117101s
	[INFO] 10.244.0.3:50629 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000121702s
	[INFO] 10.244.0.3:33301 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0000487s
	[INFO] 10.244.0.3:38091 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000138402s
	[INFO] 10.244.1.2:43364 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000192902s
	[INFO] 10.244.1.2:42609 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000060701s
	[INFO] 10.244.1.2:36443 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000051301s
	[INFO] 10.244.1.2:56414 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0000526s
	[INFO] 10.244.0.3:50774 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000137201s
	[INFO] 10.244.0.3:43237 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000196902s
	[INFO] 10.244.0.3:38831 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000059901s
	[INFO] 10.244.0.3:56163 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000122801s
	[INFO] 10.244.1.2:58305 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000209602s
	[INFO] 10.244.1.2:58291 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000151202s
	[INFO] 10.244.1.2:33227 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000184302s
	[INFO] 10.244.1.2:58179 - 5 "PTR IN 1.128.30.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000152102s
	[INFO] 10.244.0.3:46943 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000104101s
	[INFO] 10.244.0.3:58018 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000107001s
	[INFO] 10.244.0.3:35353 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000119601s
	[INFO] 10.244.0.3:58763 - 5 "PTR IN 1.128.30.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000075701s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               multinode-894400
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-894400
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=16bdcbec856cf730004e5bed78d1b7625f13388a
	                    minikube.k8s.io/name=multinode-894400
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_18T12_47_29_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 18 Mar 2024 12:47:24 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-894400
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 18 Mar 2024 13:14:54 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 18 Mar 2024 13:10:23 +0000   Mon, 18 Mar 2024 12:47:23 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 18 Mar 2024 13:10:23 +0000   Mon, 18 Mar 2024 12:47:23 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 18 Mar 2024 13:10:23 +0000   Mon, 18 Mar 2024 12:47:23 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 18 Mar 2024 13:10:23 +0000   Mon, 18 Mar 2024 13:10:23 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.30.130.156
	  Hostname:    multinode-894400
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164268Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164268Ki
	  pods:               110
	System Info:
	  Machine ID:                 80e7b822d2e94d26a09acd4a1bac452b
	  System UUID:                5c78c013-e4e8-1041-99c8-95cd760ef34f
	  Boot ID:                    a334ae39-1c10-417c-93ad-d28546d7793f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://25.0.4
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-c2997                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 coredns-5dd5756b68-456tm                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     27m
	  kube-system                 etcd-multinode-894400                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m11s
	  kube-system                 kindnet-hhsxh                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      27m
	  kube-system                 kube-apiserver-multinode-894400             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m11s
	  kube-system                 kube-controller-manager-multinode-894400    200m (10%)    0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-proxy-mc5tv                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-scheduler-multinode-894400             100m (5%)     0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 27m                    kube-proxy       
	  Normal  Starting                 5m9s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  27m (x8 over 27m)      kubelet          Node multinode-894400 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    27m (x8 over 27m)      kubelet          Node multinode-894400 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     27m (x7 over 27m)      kubelet          Node multinode-894400 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  27m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  27m                    kubelet          Node multinode-894400 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  27m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    27m                    kubelet          Node multinode-894400 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     27m                    kubelet          Node multinode-894400 status is now: NodeHasSufficientPID
	  Normal  Starting                 27m                    kubelet          Starting kubelet.
	  Normal  RegisteredNode           27m                    node-controller  Node multinode-894400 event: Registered Node multinode-894400 in Controller
	  Normal  NodeReady                27m                    kubelet          Node multinode-894400 status is now: NodeReady
	  Normal  Starting                 5m18s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m17s (x8 over 5m18s)  kubelet          Node multinode-894400 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m17s (x8 over 5m18s)  kubelet          Node multinode-894400 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m17s (x7 over 5m18s)  kubelet          Node multinode-894400 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m17s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m59s                  node-controller  Node multinode-894400 event: Registered Node multinode-894400 in Controller
	
	
	Name:               multinode-894400-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-894400-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=16bdcbec856cf730004e5bed78d1b7625f13388a
	                    minikube.k8s.io/name=multinode-894400
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_03_18T13_13_23_0700
	                    minikube.k8s.io/version=v1.32.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 18 Mar 2024 13:13:22 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-894400-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 18 Mar 2024 13:14:54 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 18 Mar 2024 13:14:11 +0000   Mon, 18 Mar 2024 13:13:22 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 18 Mar 2024 13:14:11 +0000   Mon, 18 Mar 2024 13:13:22 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 18 Mar 2024 13:14:11 +0000   Mon, 18 Mar 2024 13:13:22 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 18 Mar 2024 13:14:11 +0000   Mon, 18 Mar 2024 13:14:11 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.30.130.185
	  Hostname:    multinode-894400-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164268Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164268Ki
	  pods:               110
	System Info:
	  Machine ID:                 fe4b352c2918435c97dd6e74e14b8700
	  System UUID:                fa19d46a-a3a2-9249-8c21-1edbfcedff01
	  Boot ID:                    ca4ff0d6-6b4d-442e-8061-981434209b6b
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://25.0.4
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-9twqb    0 (0%)        0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 kindnet-k5lpg               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      24m
	  kube-system                 kube-proxy-8bdmn            0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 24m                kube-proxy       
	  Normal  Starting                 81s                kube-proxy       
	  Normal  NodeHasSufficientMemory  24m (x5 over 24m)  kubelet          Node multinode-894400-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    24m (x5 over 24m)  kubelet          Node multinode-894400-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     24m (x5 over 24m)  kubelet          Node multinode-894400-m02 status is now: NodeHasSufficientPID
	  Normal  NodeReady                24m                kubelet          Node multinode-894400-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  98s (x5 over 99s)  kubelet          Node multinode-894400-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    98s (x5 over 99s)  kubelet          Node multinode-894400-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     98s (x5 over 99s)  kubelet          Node multinode-894400-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           94s                node-controller  Node multinode-894400-m02 event: Registered Node multinode-894400-m02 in Controller
	  Normal  NodeReady                49s                kubelet          Node multinode-894400-m02 status is now: NodeReady
	
	
	Name:               multinode-894400-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-894400-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=16bdcbec856cf730004e5bed78d1b7625f13388a
	                    minikube.k8s.io/name=multinode-894400
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_03_18T13_05_26_0700
	                    minikube.k8s.io/version=v1.32.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 18 Mar 2024 13:05:25 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-894400-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 18 Mar 2024 13:06:27 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 18 Mar 2024 13:05:34 +0000   Mon, 18 Mar 2024 13:07:11 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 18 Mar 2024 13:05:34 +0000   Mon, 18 Mar 2024 13:07:11 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 18 Mar 2024 13:05:34 +0000   Mon, 18 Mar 2024 13:07:11 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 18 Mar 2024 13:05:34 +0000   Mon, 18 Mar 2024 13:07:11 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  172.30.137.140
	  Hostname:    multinode-894400-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164268Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164268Ki
	  pods:               110
	System Info:
	  Machine ID:                 f96e7421441b46c0a5836e2d53b26708
	  System UUID:                7dae14c5-92ae-d842-8ce6-c446c0352eb2
	  Boot ID:                    7ef4b157-1893-48d2-9b87-d5f210c11477
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://25.0.4
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-zv9tv       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      19m
	  kube-system                 kube-proxy-745w9    0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 19m                    kube-proxy       
	  Normal  Starting                 9m32s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  19m (x5 over 19m)      kubelet          Node multinode-894400-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    19m (x5 over 19m)      kubelet          Node multinode-894400-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     19m (x5 over 19m)      kubelet          Node multinode-894400-m03 status is now: NodeHasSufficientPID
	  Normal  NodeReady                19m                    kubelet          Node multinode-894400-m03 status is now: NodeReady
	  Normal  Starting                 9m35s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9m35s (x2 over 9m35s)  kubelet          Node multinode-894400-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m35s (x2 over 9m35s)  kubelet          Node multinode-894400-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m35s (x2 over 9m35s)  kubelet          Node multinode-894400-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m35s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           9m34s                  node-controller  Node multinode-894400-m03 event: Registered Node multinode-894400-m03 in Controller
	  Normal  NodeReady                9m26s                  kubelet          Node multinode-894400-m03 status is now: NodeReady
	  Normal  NodeNotReady             7m49s                  node-controller  Node multinode-894400-m03 status is now: NodeNotReady
	  Normal  RegisteredNode           4m59s                  node-controller  Node multinode-894400-m03 event: Registered Node multinode-894400-m03 in Controller
	
	
	==> dmesg <==
	[  +4.800453] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.267636] psmouse serio1: trackpoint: failed to get extended button data, assuming 3 buttons
	[  +1.056053] systemd-fstab-generator[113]: Ignoring "noauto" option for root device
	[  +6.778211] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Mar18 13:09] systemd-fstab-generator[643]: Ignoring "noauto" option for root device
	[  +0.160643] systemd-fstab-generator[654]: Ignoring "noauto" option for root device
	[ +25.236158] systemd-fstab-generator[979]: Ignoring "noauto" option for root device
	[  +0.093711] kauditd_printk_skb: 73 callbacks suppressed
	[  +0.488652] systemd-fstab-generator[1018]: Ignoring "noauto" option for root device
	[  +0.198307] systemd-fstab-generator[1030]: Ignoring "noauto" option for root device
	[  +0.213157] systemd-fstab-generator[1044]: Ignoring "noauto" option for root device
	[  +2.866452] systemd-fstab-generator[1231]: Ignoring "noauto" option for root device
	[  +0.191537] systemd-fstab-generator[1243]: Ignoring "noauto" option for root device
	[  +0.163904] systemd-fstab-generator[1255]: Ignoring "noauto" option for root device
	[  +0.280650] systemd-fstab-generator[1270]: Ignoring "noauto" option for root device
	[  +0.822319] systemd-fstab-generator[1393]: Ignoring "noauto" option for root device
	[  +0.094744] kauditd_printk_skb: 205 callbacks suppressed
	[  +3.177820] systemd-fstab-generator[1525]: Ignoring "noauto" option for root device
	[  +1.898187] kauditd_printk_skb: 64 callbacks suppressed
	[  +5.227041] kauditd_printk_skb: 10 callbacks suppressed
	[  +4.065141] systemd-fstab-generator[3089]: Ignoring "noauto" option for root device
	[Mar18 13:10] kauditd_printk_skb: 70 callbacks suppressed
	[Mar18 13:14] hrtimer: interrupt took 864503 ns
	
	
	==> etcd [5f0887d1e691] <==
	{"level":"info","ts":"2024-03-18T13:09:44.932128Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"2db881e830cc2153","local-member-id":"c2557cd98fa8d31a","added-peer-id":"c2557cd98fa8d31a","added-peer-peer-urls":["https://172.30.129.141:2380"]}
	{"level":"info","ts":"2024-03-18T13:09:44.933388Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"2db881e830cc2153","local-member-id":"c2557cd98fa8d31a","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-18T13:09:44.933717Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-18T13:09:44.946226Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-03-18T13:09:44.947818Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-03-18T13:09:44.948803Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-03-18T13:09:44.954567Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-03-18T13:09:44.954988Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"c2557cd98fa8d31a","initial-advertise-peer-urls":["https://172.30.130.156:2380"],"listen-peer-urls":["https://172.30.130.156:2380"],"advertise-client-urls":["https://172.30.130.156:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.30.130.156:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-03-18T13:09:44.955173Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-03-18T13:09:44.954599Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"172.30.130.156:2380"}
	{"level":"info","ts":"2024-03-18T13:09:44.956126Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"172.30.130.156:2380"}
	{"level":"info","ts":"2024-03-18T13:09:46.775466Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c2557cd98fa8d31a is starting a new election at term 2"}
	{"level":"info","ts":"2024-03-18T13:09:46.775581Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c2557cd98fa8d31a became pre-candidate at term 2"}
	{"level":"info","ts":"2024-03-18T13:09:46.775704Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c2557cd98fa8d31a received MsgPreVoteResp from c2557cd98fa8d31a at term 2"}
	{"level":"info","ts":"2024-03-18T13:09:46.775731Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c2557cd98fa8d31a became candidate at term 3"}
	{"level":"info","ts":"2024-03-18T13:09:46.77574Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c2557cd98fa8d31a received MsgVoteResp from c2557cd98fa8d31a at term 3"}
	{"level":"info","ts":"2024-03-18T13:09:46.775752Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c2557cd98fa8d31a became leader at term 3"}
	{"level":"info","ts":"2024-03-18T13:09:46.775764Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: c2557cd98fa8d31a elected leader c2557cd98fa8d31a at term 3"}
	{"level":"info","ts":"2024-03-18T13:09:46.782683Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"c2557cd98fa8d31a","local-member-attributes":"{Name:multinode-894400 ClientURLs:[https://172.30.130.156:2379]}","request-path":"/0/members/c2557cd98fa8d31a/attributes","cluster-id":"2db881e830cc2153","publish-timeout":"7s"}
	{"level":"info","ts":"2024-03-18T13:09:46.78269Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-18T13:09:46.782706Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-18T13:09:46.783976Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.30.130.156:2379"}
	{"level":"info","ts":"2024-03-18T13:09:46.783993Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-03-18T13:09:46.788664Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-03-18T13:09:46.788817Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 13:15:00 up 6 min,  0 users,  load average: 1.51, 0.70, 0.29
	Linux multinode-894400 5.10.207 #1 SMP Fri Mar 15 21:13:47 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [c4d7018ad23a] <==
	I0318 13:06:40.810075       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.3.0/24] 
	I0318 13:06:50.822556       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:06:50.822612       1 main.go:227] handling current node
	I0318 13:06:50.822667       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:06:50.822680       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:06:50.822925       1 main.go:223] Handling node with IPs: map[172.30.137.140:{}]
	I0318 13:06:50.823171       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.3.0/24] 
	I0318 13:07:00.837923       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:07:00.838008       1 main.go:227] handling current node
	I0318 13:07:00.838022       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:07:00.838030       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:07:00.838429       1 main.go:223] Handling node with IPs: map[172.30.137.140:{}]
	I0318 13:07:00.838666       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.3.0/24] 
	I0318 13:07:10.854207       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:07:10.854411       1 main.go:227] handling current node
	I0318 13:07:10.854444       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:07:10.854469       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:07:10.854879       1 main.go:223] Handling node with IPs: map[172.30.137.140:{}]
	I0318 13:07:10.855094       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.3.0/24] 
	I0318 13:07:20.861534       1 main.go:223] Handling node with IPs: map[172.30.129.141:{}]
	I0318 13:07:20.861671       1 main.go:227] handling current node
	I0318 13:07:20.861685       1 main.go:223] Handling node with IPs: map[172.30.140.66:{}]
	I0318 13:07:20.861692       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:07:20.861818       1 main.go:223] Handling node with IPs: map[172.30.137.140:{}]
	I0318 13:07:20.861845       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [c8e5ec25e910] <==
	I0318 13:14:11.613734       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.3.0/24] 
	I0318 13:14:21.621605       1 main.go:223] Handling node with IPs: map[172.30.130.156:{}]
	I0318 13:14:21.622228       1 main.go:227] handling current node
	I0318 13:14:21.622426       1 main.go:223] Handling node with IPs: map[172.30.130.185:{}]
	I0318 13:14:21.622516       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:14:21.622896       1 main.go:223] Handling node with IPs: map[172.30.137.140:{}]
	I0318 13:14:21.622977       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.3.0/24] 
	I0318 13:14:31.639009       1 main.go:223] Handling node with IPs: map[172.30.130.156:{}]
	I0318 13:14:31.640307       1 main.go:227] handling current node
	I0318 13:14:31.640458       1 main.go:223] Handling node with IPs: map[172.30.130.185:{}]
	I0318 13:14:31.640488       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:14:31.640889       1 main.go:223] Handling node with IPs: map[172.30.137.140:{}]
	I0318 13:14:31.640926       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.3.0/24] 
	I0318 13:14:41.648331       1 main.go:223] Handling node with IPs: map[172.30.130.156:{}]
	I0318 13:14:41.648507       1 main.go:227] handling current node
	I0318 13:14:41.648540       1 main.go:223] Handling node with IPs: map[172.30.130.185:{}]
	I0318 13:14:41.648549       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:14:41.649182       1 main.go:223] Handling node with IPs: map[172.30.137.140:{}]
	I0318 13:14:41.649284       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.3.0/24] 
	I0318 13:14:51.665062       1 main.go:223] Handling node with IPs: map[172.30.130.156:{}]
	I0318 13:14:51.665151       1 main.go:227] handling current node
	I0318 13:14:51.665166       1 main.go:223] Handling node with IPs: map[172.30.130.185:{}]
	I0318 13:14:51.665173       1 main.go:250] Node multinode-894400-m02 has CIDR [10.244.1.0/24] 
	I0318 13:14:51.665746       1 main.go:223] Handling node with IPs: map[172.30.137.140:{}]
	I0318 13:14:51.665831       1 main.go:250] Node multinode-894400-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [fc4430c7fa20] <==
	I0318 13:09:48.245178       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0318 13:09:48.245796       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0318 13:09:48.231958       1 handler_discovery.go:412] Starting ResourceDiscoveryManager
	I0318 13:09:48.403749       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0318 13:09:48.426183       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I0318 13:09:48.426213       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I0318 13:09:48.426382       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0318 13:09:48.432175       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0318 13:09:48.433073       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0318 13:09:48.433297       1 shared_informer.go:318] Caches are synced for configmaps
	I0318 13:09:48.444484       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0318 13:09:48.444708       1 aggregator.go:166] initial CRD sync complete...
	I0318 13:09:48.444961       1 autoregister_controller.go:141] Starting autoregister controller
	I0318 13:09:48.445263       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0318 13:09:48.446443       1 cache.go:39] Caches are synced for autoregister controller
	I0318 13:09:48.471536       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0318 13:09:49.257477       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0318 13:09:49.806994       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [172.30.130.156]
	I0318 13:09:49.809655       1 controller.go:624] quota admission added evaluator for: endpoints
	I0318 13:09:49.821460       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0318 13:09:51.622752       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0318 13:09:51.799195       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0318 13:09:51.812022       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0318 13:09:51.930541       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0318 13:09:51.942099       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	
	
	==> kube-controller-manager [4ad6784a187d] <==
	I0318 13:10:54.102713       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="266.302µs"
	I0318 13:10:54.115993       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="210.701µs"
	I0318 13:10:55.131550       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="19.807636ms"
	I0318 13:10:55.131763       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="44.301µs"
	I0318 13:13:06.845792       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5b5d89c9d6-9twqb"
	I0318 13:13:06.861889       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="31.524957ms"
	I0318 13:13:06.885695       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="23.751918ms"
	I0318 13:13:06.886171       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="405.902µs"
	I0318 13:13:16.648153       1 event.go:307] "Event occurred" object="multinode-894400-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RemovingNode" message="Node multinode-894400-m02 event: Removing Node multinode-894400-m02 from Controller"
	E0318 13:13:21.399862       1 gc_controller.go:153] "Failed to get node" err="node \"multinode-894400-m02\" not found" node="multinode-894400-m02"
	I0318 13:13:22.928528       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-894400-m02\" does not exist"
	I0318 13:13:22.929420       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6-8btgf" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-5b5d89c9d6-8btgf"
	I0318 13:13:22.938124       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-894400-m02" podCIDRs=["10.244.1.0/24"]
	I0318 13:13:23.365802       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="64.501µs"
	I0318 13:13:26.650294       1 event.go:307] "Event occurred" object="multinode-894400-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-894400-m02 event: Registered Node multinode-894400-m02 in Controller"
	I0318 13:14:11.613089       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-894400-m02"
	I0318 13:14:11.647233       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="56.601µs"
	I0318 13:14:11.678531       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6-8btgf" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-5b5d89c9d6-8btgf"
	I0318 13:14:20.469722       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="82.101µs"
	I0318 13:14:20.478904       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="72.9µs"
	I0318 13:14:20.502229       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="61.7µs"
	I0318 13:14:20.809395       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="151.101µs"
	I0318 13:14:20.821856       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="124.101µs"
	I0318 13:14:21.842193       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="10.843148ms"
	I0318 13:14:21.842263       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="35.4µs"
	
	
	==> kube-controller-manager [7aa5cf4ec378] <==
	I0318 12:51:21.064706       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="11.788417ms"
	I0318 12:51:21.065229       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="82.401µs"
	I0318 12:55:05.793350       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-894400-m03\" does not exist"
	I0318 12:55:05.797095       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-894400-m02"
	I0318 12:55:05.823205       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-zv9tv"
	I0318 12:55:05.835101       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-894400-m03" podCIDRs=["10.244.2.0/24"]
	I0318 12:55:05.835149       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-745w9"
	I0318 12:55:06.188986       1 event.go:307] "Event occurred" object="multinode-894400-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-894400-m03 event: Registered Node multinode-894400-m03 in Controller"
	I0318 12:55:06.188988       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-894400-m03"
	I0318 12:55:23.671742       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-894400-m02"
	I0318 13:02:46.325539       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-894400-m02"
	I0318 13:02:46.325935       1 event.go:307] "Event occurred" object="multinode-894400-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-894400-m03 status is now: NodeNotReady"
	I0318 13:02:46.344510       1 event.go:307] "Event occurred" object="kube-system/kindnet-zv9tv" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0318 13:02:46.368811       1 event.go:307] "Event occurred" object="kube-system/kube-proxy-745w9" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0318 13:05:19.649225       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-894400-m02"
	I0318 13:05:21.403124       1 event.go:307] "Event occurred" object="multinode-894400-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RemovingNode" message="Node multinode-894400-m03 event: Removing Node multinode-894400-m03 from Controller"
	I0318 13:05:25.832056       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-894400-m03\" does not exist"
	I0318 13:05:25.832348       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-894400-m02"
	I0318 13:05:25.841443       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-894400-m03" podCIDRs=["10.244.3.0/24"]
	I0318 13:05:26.404299       1 event.go:307] "Event occurred" object="multinode-894400-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-894400-m03 event: Registered Node multinode-894400-m03 in Controller"
	I0318 13:05:34.080951       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-894400-m02"
	I0318 13:07:11.961036       1 event.go:307] "Event occurred" object="multinode-894400-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-894400-m03 status is now: NodeNotReady"
	I0318 13:07:11.961077       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-894400-m02"
	I0318 13:07:12.051526       1 event.go:307] "Event occurred" object="kube-system/kindnet-zv9tv" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0318 13:07:12.098168       1 event.go:307] "Event occurred" object="kube-system/kube-proxy-745w9" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	
	
	==> kube-proxy [163ccabc3882] <==
	I0318 13:09:50.786718       1 server_others.go:69] "Using iptables proxy"
	I0318 13:09:50.833991       1 node.go:141] Successfully retrieved node IP: 172.30.130.156
	I0318 13:09:50.913665       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0318 13:09:50.913704       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0318 13:09:50.924640       1 server_others.go:152] "Using iptables Proxier"
	I0318 13:09:50.925588       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0318 13:09:50.926722       1 server.go:846] "Version info" version="v1.28.4"
	I0318 13:09:50.926981       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 13:09:50.938764       1 config.go:188] "Starting service config controller"
	I0318 13:09:50.949206       1 config.go:97] "Starting endpoint slice config controller"
	I0318 13:09:50.949220       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0318 13:09:50.953299       1 config.go:315] "Starting node config controller"
	I0318 13:09:50.979020       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0318 13:09:50.990249       1 shared_informer.go:318] Caches are synced for node config
	I0318 13:09:50.958488       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0318 13:09:50.996356       1 shared_informer.go:318] Caches are synced for service config
	I0318 13:09:51.051947       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [9335855aab63] <==
	I0318 12:47:42.888603       1 server_others.go:69] "Using iptables proxy"
	I0318 12:47:42.909658       1 node.go:141] Successfully retrieved node IP: 172.30.129.141
	I0318 12:47:42.965774       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0318 12:47:42.965824       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0318 12:47:42.983172       1 server_others.go:152] "Using iptables Proxier"
	I0318 12:47:42.983221       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0318 12:47:42.983471       1 server.go:846] "Version info" version="v1.28.4"
	I0318 12:47:42.983484       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 12:47:42.987719       1 config.go:188] "Starting service config controller"
	I0318 12:47:42.987733       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0318 12:47:42.987775       1 config.go:97] "Starting endpoint slice config controller"
	I0318 12:47:42.987781       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0318 12:47:42.988298       1 config.go:315] "Starting node config controller"
	I0318 12:47:42.988306       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0318 12:47:43.088485       1 shared_informer.go:318] Caches are synced for service config
	I0318 12:47:43.088594       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0318 12:47:43.088517       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [66ee8be9fada] <==
	I0318 13:09:45.699415       1 serving.go:348] Generated self-signed cert in-memory
	W0318 13:09:48.342100       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0318 13:09:48.342243       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0318 13:09:48.342324       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0318 13:09:48.342374       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0318 13:09:48.402495       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.4"
	I0318 13:09:48.402540       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 13:09:48.407228       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0318 13:09:48.409117       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0318 13:09:48.410197       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0318 13:09:48.410738       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0318 13:09:48.510577       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [e4d42739ce0e] <==
	E0318 12:47:25.400820       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0318 12:47:25.434442       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0318 12:47:25.434526       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0318 12:47:25.456878       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0318 12:47:25.457121       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0318 12:47:25.744652       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0318 12:47:25.744733       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0318 12:47:25.777073       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0318 12:47:25.777145       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0318 12:47:25.850949       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0318 12:47:25.850985       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0318 12:47:25.876908       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0318 12:47:25.877170       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0318 12:47:25.892072       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0318 12:47:25.892099       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0318 12:47:25.988864       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0318 12:47:25.988912       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0318 12:47:26.044749       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0318 12:47:26.044834       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0318 12:47:26.067659       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0318 12:47:26.068250       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0318 12:47:28.178584       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0318 13:07:24.107367       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0318 13:07:24.107975       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	E0318 13:07:24.108193       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Mar 18 13:10:43 multinode-894400 kubelet[1532]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 18 13:10:43 multinode-894400 kubelet[1532]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 18 13:10:43 multinode-894400 kubelet[1532]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 18 13:10:43 multinode-894400 kubelet[1532]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 18 13:10:43 multinode-894400 kubelet[1532]: I0318 13:10:43.056417    1532 scope.go:117] "RemoveContainer" containerID="c51f768a2f642fdffc6de67f101be5abd8bbaec83ef13011b47efab5aad27134"
	Mar 18 13:11:43 multinode-894400 kubelet[1532]: E0318 13:11:43.030545    1532 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 18 13:11:43 multinode-894400 kubelet[1532]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 18 13:11:43 multinode-894400 kubelet[1532]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 18 13:11:43 multinode-894400 kubelet[1532]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 18 13:11:43 multinode-894400 kubelet[1532]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 18 13:12:43 multinode-894400 kubelet[1532]: E0318 13:12:43.028101    1532 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 18 13:12:43 multinode-894400 kubelet[1532]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 18 13:12:43 multinode-894400 kubelet[1532]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 18 13:12:43 multinode-894400 kubelet[1532]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 18 13:12:43 multinode-894400 kubelet[1532]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 18 13:13:43 multinode-894400 kubelet[1532]: E0318 13:13:43.029159    1532 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 18 13:13:43 multinode-894400 kubelet[1532]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 18 13:13:43 multinode-894400 kubelet[1532]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 18 13:13:43 multinode-894400 kubelet[1532]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 18 13:13:43 multinode-894400 kubelet[1532]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 18 13:14:43 multinode-894400 kubelet[1532]: E0318 13:14:43.030604    1532 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 18 13:14:43 multinode-894400 kubelet[1532]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 18 13:14:43 multinode-894400 kubelet[1532]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 18 13:14:43 multinode-894400 kubelet[1532]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 18 13:14:43 multinode-894400 kubelet[1532]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

-- /stdout --
** stderr ** 
	W0318 13:14:50.215847    8324 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-894400 -n multinode-894400
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-894400 -n multinode-894400: (11.3812905s)
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-894400 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (550.64s)

TestKubernetesUpgrade (678.32s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-692300 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=hyperv
version_upgrade_test.go:222: (dbg) Done: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-692300 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=hyperv: (3m15.8433041s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-windows-amd64.exe stop -p kubernetes-upgrade-692300
version_upgrade_test.go:227: (dbg) Done: out/minikube-windows-amd64.exe stop -p kubernetes-upgrade-692300: (34.0349314s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-windows-amd64.exe -p kubernetes-upgrade-692300 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p kubernetes-upgrade-692300 status --format={{.Host}}: exit status 7 (2.400411s)

-- stdout --
	Stopped

-- /stdout --
** stderr ** 
	W0318 13:34:27.742484    9592 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-692300 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=hyperv
version_upgrade_test.go:243: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-692300 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=hyperv: exit status 90 (6m10.0845415s)

-- stdout --
	* [kubernetes-upgrade-692300] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4046 Build 19045.4046
	  - KUBECONFIG=C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube3\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=18429
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the hyperv driver based on existing profile
	* Starting "kubernetes-upgrade-692300" primary control-plane node in "kubernetes-upgrade-692300" cluster
	* Restarting existing hyperv VM for "kubernetes-upgrade-692300" ...
	
	

-- /stdout --
** stderr ** 
	W0318 13:34:30.149678    4548 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0318 13:34:30.228715    4548 out.go:291] Setting OutFile to fd 1560 ...
	I0318 13:34:30.229301    4548 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:34:30.229301    4548 out.go:304] Setting ErrFile to fd 1564...
	I0318 13:34:30.229301    4548 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:34:30.252607    4548 out.go:298] Setting JSON to false
	I0318 13:34:30.255586    4548 start.go:129] hostinfo: {"hostname":"minikube3","uptime":317447,"bootTime":1710451423,"procs":197,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4046 Build 19045.4046","kernelVersion":"10.0.19045.4046 Build 19045.4046","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"a0f355d5-8b6e-4346-9071-73232725d096"}
	W0318 13:34:30.255586    4548 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0318 13:34:30.309053    4548 out.go:177] * [kubernetes-upgrade-692300] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4046 Build 19045.4046
	I0318 13:34:30.450581    4548 notify.go:220] Checking for updates...
	I0318 13:34:30.470636    4548 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	I0318 13:34:30.712174    4548 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 13:34:31.067423    4548 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube3\minikube-integration\.minikube
	I0318 13:34:31.200650    4548 out.go:177]   - MINIKUBE_LOCATION=18429
	I0318 13:34:31.405686    4548 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 13:34:31.460254    4548 config.go:182] Loaded profile config "kubernetes-upgrade-692300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0318 13:34:31.461356    4548 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 13:34:36.807948    4548 out.go:177] * Using the hyperv driver based on existing profile
	I0318 13:34:36.817573    4548 start.go:297] selected driver: hyperv
	I0318 13:34:36.817922    4548 start.go:901] validating driver "hyperv" against &{Name:kubernetes-upgrade-692300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-692300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.30.143.52 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 13:34:36.818206    4548 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 13:34:36.871533    4548 cni.go:84] Creating CNI manager for ""
	I0318 13:34:36.871533    4548 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0318 13:34:36.871533    4548 start.go:340] cluster config:
	{Name:kubernetes-upgrade-692300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:kubernetes-upgrade-692300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.30.143.52 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 13:34:36.872165    4548 iso.go:125] acquiring lock: {Name:mkefa7a887b9de67c22e0418da52288a5b488924 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 13:34:36.876406    4548 out.go:177] * Starting "kubernetes-upgrade-692300" primary control-plane node in "kubernetes-upgrade-692300" cluster
	I0318 13:34:36.879174    4548 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I0318 13:34:36.879794    4548 preload.go:147] Found local preload: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4
	I0318 13:34:36.879847    4548 cache.go:56] Caching tarball of preloaded images
	I0318 13:34:36.880093    4548 preload.go:173] Found C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0318 13:34:36.880355    4548 cache.go:59] Finished verifying existence of preloaded tar for v1.29.0-rc.2 on docker
	I0318 13:34:36.880540    4548 profile.go:142] Saving config to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\kubernetes-upgrade-692300\config.json ...
	I0318 13:34:36.882793    4548 start.go:360] acquireMachinesLock for kubernetes-upgrade-692300: {Name:mk88ace50ad3bf72786f3a589a5328076247f3a1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 13:37:50.674192    4548 start.go:364] duration metric: took 3m13.7899213s to acquireMachinesLock for "kubernetes-upgrade-692300"
	I0318 13:37:50.674387    4548 start.go:96] Skipping create...Using existing machine configuration
	I0318 13:37:50.674387    4548 fix.go:54] fixHost starting: 
	I0318 13:37:50.675213    4548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-692300 ).state
	I0318 13:37:52.766730    4548 main.go:141] libmachine: [stdout =====>] : Off
	
	I0318 13:37:52.766809    4548 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:37:52.766809    4548 fix.go:112] recreateIfNeeded on kubernetes-upgrade-692300: state=Stopped err=<nil>
	W0318 13:37:52.766809    4548 fix.go:138] unexpected machine state, will restart: <nil>
	I0318 13:37:52.770966    4548 out.go:177] * Restarting existing hyperv VM for "kubernetes-upgrade-692300" ...
	I0318 13:37:52.773058    4548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM kubernetes-upgrade-692300
	I0318 13:37:56.027509    4548 main.go:141] libmachine: [stdout =====>] : 
	I0318 13:37:56.027509    4548 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:37:56.027509    4548 main.go:141] libmachine: Waiting for host to start...
	I0318 13:37:56.027509    4548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-692300 ).state
	I0318 13:37:58.312935    4548 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 13:37:58.312935    4548 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:37:58.312935    4548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-692300 ).networkadapters[0]).ipaddresses[0]
	I0318 13:38:00.854420    4548 main.go:141] libmachine: [stdout =====>] : 
	I0318 13:38:00.854420    4548 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:38:01.865638    4548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-692300 ).state
	I0318 13:38:04.219597    4548 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 13:38:04.219860    4548 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:38:04.219918    4548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-692300 ).networkadapters[0]).ipaddresses[0]
	I0318 13:38:06.731123    4548 main.go:141] libmachine: [stdout =====>] : 
	I0318 13:38:06.731123    4548 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:38:07.738623    4548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-692300 ).state
	I0318 13:38:09.886084    4548 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 13:38:09.886084    4548 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:38:09.886407    4548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-692300 ).networkadapters[0]).ipaddresses[0]
	I0318 13:38:12.361687    4548 main.go:141] libmachine: [stdout =====>] : 
	I0318 13:38:12.361687    4548 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:38:13.368194    4548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-692300 ).state
	I0318 13:38:15.544398    4548 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 13:38:15.544969    4548 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:38:15.545103    4548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-692300 ).networkadapters[0]).ipaddresses[0]
	I0318 13:38:18.028953    4548 main.go:141] libmachine: [stdout =====>] : 
	I0318 13:38:18.028953    4548 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:38:19.040578    4548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-692300 ).state
	I0318 13:38:21.363523    4548 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 13:38:21.363523    4548 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:38:21.364337    4548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-692300 ).networkadapters[0]).ipaddresses[0]
	I0318 13:38:23.834847    4548 main.go:141] libmachine: [stdout =====>] : 172.30.132.125
	
	I0318 13:38:23.834847    4548 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:38:23.838425    4548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-692300 ).state
	I0318 13:38:25.875840    4548 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 13:38:25.876011    4548 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:38:25.876144    4548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-692300 ).networkadapters[0]).ipaddresses[0]
	I0318 13:38:28.362155    4548 main.go:141] libmachine: [stdout =====>] : 172.30.132.125
	
	I0318 13:38:28.362205    4548 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:38:28.362205    4548 profile.go:142] Saving config to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\kubernetes-upgrade-692300\config.json ...
	I0318 13:38:28.364846    4548 machine.go:94] provisionDockerMachine start ...
	I0318 13:38:28.364846    4548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-692300 ).state
	I0318 13:38:30.420540    4548 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 13:38:30.421490    4548 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:38:30.421490    4548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-692300 ).networkadapters[0]).ipaddresses[0]
	I0318 13:38:32.924685    4548 main.go:141] libmachine: [stdout =====>] : 172.30.132.125
	
	I0318 13:38:32.924740    4548 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:38:32.929728    4548 main.go:141] libmachine: Using SSH client type: native
	I0318 13:38:32.930096    4548 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa89f80] 0xa8cb60 <nil>  [] 0s} 172.30.132.125 22 <nil> <nil>}
	I0318 13:38:32.930096    4548 main.go:141] libmachine: About to run SSH command:
	hostname
	I0318 13:38:33.061469    4548 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0318 13:38:33.061469    4548 buildroot.go:166] provisioning hostname "kubernetes-upgrade-692300"
	I0318 13:38:33.061469    4548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-692300 ).state
	I0318 13:38:35.179189    4548 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 13:38:35.179189    4548 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:38:35.179460    4548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-692300 ).networkadapters[0]).ipaddresses[0]
	I0318 13:38:37.613281    4548 main.go:141] libmachine: [stdout =====>] : 172.30.132.125
	
	I0318 13:38:37.614022    4548 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:38:37.618871    4548 main.go:141] libmachine: Using SSH client type: native
	I0318 13:38:37.619355    4548 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa89f80] 0xa8cb60 <nil>  [] 0s} 172.30.132.125 22 <nil> <nil>}
	I0318 13:38:37.619425    4548 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-692300 && echo "kubernetes-upgrade-692300" | sudo tee /etc/hostname
	I0318 13:38:37.769510    4548 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-692300
	
	I0318 13:38:37.769581    4548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-692300 ).state
	I0318 13:38:39.847070    4548 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 13:38:39.847141    4548 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:38:39.847245    4548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-692300 ).networkadapters[0]).ipaddresses[0]
	I0318 13:38:42.390729    4548 main.go:141] libmachine: [stdout =====>] : 172.30.132.125
	
	I0318 13:38:42.390729    4548 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:38:42.395957    4548 main.go:141] libmachine: Using SSH client type: native
	I0318 13:38:42.397109    4548 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa89f80] 0xa8cb60 <nil>  [] 0s} 172.30.132.125 22 <nil> <nil>}
	I0318 13:38:42.397196    4548 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-692300' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-692300/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-692300' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0318 13:38:42.533153    4548 main.go:141] libmachine: SSH cmd err, output: <nil>: 
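The /etc/hosts patch that just ran is a small idempotent script: it only rewrites the 127.0.1.1 entry when no line already maps the hostname. A standalone sketch of that logic, run against a scratch file rather than the real /etc/hosts (the hostname and file contents here are illustrative):

```shell
# Sketch of the hostname patch above, against a scratch copy of
# /etc/hosts (NAME and the seed contents are illustrative).
HOSTS=$(mktemp)
NAME=demo-host
printf '127.0.0.1 localhost\n127.0.1.1 old-name\n' > "$HOSTS"

# Only touch the file if no line already ends with the hostname.
if ! grep -q "[[:space:]]$NAME\$" "$HOSTS"; then
  if grep -q '^127\.0\.1\.1[[:space:]]' "$HOSTS"; then
    # Rewrite the existing 127.0.1.1 entry in place.
    sed -i "s/^127\.0\.1\.1[[:space:]].*/127.0.1.1 $NAME/" "$HOSTS"
  else
    # No 127.0.1.1 entry yet: append one.
    echo "127.0.1.1 $NAME" >> "$HOSTS"
  fi
fi
grep '^127\.0\.1\.1' "$HOSTS"
```

Running the script a second time is a no-op, which is why the provisioner can replay it safely on every start.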
	I0318 13:38:42.533153    4548 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube3\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube3\minikube-integration\.minikube}
	I0318 13:38:42.533153    4548 buildroot.go:174] setting up certificates
	I0318 13:38:42.533153    4548 provision.go:84] configureAuth start
	I0318 13:38:42.533153    4548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-692300 ).state
	I0318 13:38:44.786070    4548 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 13:38:44.786377    4548 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:38:44.786377    4548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-692300 ).networkadapters[0]).ipaddresses[0]
	I0318 13:38:47.430745    4548 main.go:141] libmachine: [stdout =====>] : 172.30.132.125
	
	I0318 13:38:47.430745    4548 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:38:47.430745    4548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-692300 ).state
	I0318 13:38:49.643340    4548 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 13:38:49.643340    4548 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:38:49.643912    4548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-692300 ).networkadapters[0]).ipaddresses[0]
	I0318 13:38:52.207510    4548 main.go:141] libmachine: [stdout =====>] : 172.30.132.125
	
	I0318 13:38:52.207510    4548 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:38:52.207510    4548 provision.go:143] copyHostCerts
	I0318 13:38:52.208852    4548 exec_runner.go:144] found C:\Users\jenkins.minikube3\minikube-integration\.minikube/ca.pem, removing ...
	I0318 13:38:52.208852    4548 exec_runner.go:203] rm: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.pem
	I0318 13:38:52.209306    4548 exec_runner.go:151] cp: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube3\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0318 13:38:52.210947    4548 exec_runner.go:144] found C:\Users\jenkins.minikube3\minikube-integration\.minikube/cert.pem, removing ...
	I0318 13:38:52.211018    4548 exec_runner.go:203] rm: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cert.pem
	I0318 13:38:52.211381    4548 exec_runner.go:151] cp: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube3\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0318 13:38:52.212698    4548 exec_runner.go:144] found C:\Users\jenkins.minikube3\minikube-integration\.minikube/key.pem, removing ...
	I0318 13:38:52.212738    4548 exec_runner.go:203] rm: C:\Users\jenkins.minikube3\minikube-integration\.minikube\key.pem
	I0318 13:38:52.213097    4548 exec_runner.go:151] cp: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube3\minikube-integration\.minikube/key.pem (1679 bytes)
	I0318 13:38:52.214293    4548 provision.go:117] generating server cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.kubernetes-upgrade-692300 san=[127.0.0.1 172.30.132.125 kubernetes-upgrade-692300 localhost minikube]
	I0318 13:38:52.427422    4548 provision.go:177] copyRemoteCerts
	I0318 13:38:52.440414    4548 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0318 13:38:52.441281    4548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-692300 ).state
	I0318 13:38:54.615152    4548 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 13:38:54.616155    4548 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:38:54.616218    4548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-692300 ).networkadapters[0]).ipaddresses[0]
	I0318 13:38:57.293842    4548 main.go:141] libmachine: [stdout =====>] : 172.30.132.125
	
	I0318 13:38:57.293842    4548 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:38:57.294743    4548 sshutil.go:53] new ssh client: &{IP:172.30.132.125 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\kubernetes-upgrade-692300\id_rsa Username:docker}
	I0318 13:38:57.396128    4548 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.9548104s)
	I0318 13:38:57.396638    4548 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0318 13:38:57.444851    4548 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1241 bytes)
	I0318 13:38:57.493297    4548 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0318 13:38:57.539102    4548 provision.go:87] duration metric: took 15.0058368s to configureAuth
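The copyHostCerts step logged above ("found ..., removing ...", then "cp: ... --> ...") is a remove-then-copy refresh of each cert. A minimal sketch of that pattern using scratch directories in place of the .minikube paths (contents are fake PEM placeholders, not real certificates):

```shell
# Sketch of copyHostCerts: replace any stale cert copy with a fresh
# one. SRC/DST stand in for .minikube/certs and .minikube.
SRC=$(mktemp -d)
DST=$(mktemp -d)
echo 'fresh PEM data' > "$SRC/ca.pem"
echo 'stale PEM data' > "$DST/ca.pem"

# found <dst>/ca.pem, removing ...
[ -f "$DST/ca.pem" ] && rm "$DST/ca.pem"
# cp: <src>/ca.pem --> <dst>/ca.pem
cp "$SRC/ca.pem" "$DST/ca.pem"
cat "$DST/ca.pem"
```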
	I0318 13:38:57.539102    4548 buildroot.go:189] setting minikube options for container-runtime
	I0318 13:38:57.540114    4548 config.go:182] Loaded profile config "kubernetes-upgrade-692300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.0-rc.2
	I0318 13:38:57.540114    4548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-692300 ).state
	I0318 13:38:59.619942    4548 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 13:38:59.620198    4548 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:38:59.620198    4548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-692300 ).networkadapters[0]).ipaddresses[0]
	I0318 13:39:02.009934    4548 main.go:141] libmachine: [stdout =====>] : 172.30.132.125
	
	I0318 13:39:02.009934    4548 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:39:02.016197    4548 main.go:141] libmachine: Using SSH client type: native
	I0318 13:39:02.016339    4548 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa89f80] 0xa8cb60 <nil>  [] 0s} 172.30.132.125 22 <nil> <nil>}
	I0318 13:39:02.016339    4548 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0318 13:39:02.146055    4548 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0318 13:39:02.146055    4548 buildroot.go:70] root file system type: tmpfs
	I0318 13:39:02.146322    4548 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0318 13:39:02.146322    4548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-692300 ).state
	I0318 13:39:04.167709    4548 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 13:39:04.167709    4548 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:39:04.167709    4548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-692300 ).networkadapters[0]).ipaddresses[0]
	I0318 13:39:06.579545    4548 main.go:141] libmachine: [stdout =====>] : 172.30.132.125
	
	I0318 13:39:06.579545    4548 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:39:06.587273    4548 main.go:141] libmachine: Using SSH client type: native
	I0318 13:39:06.587273    4548 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa89f80] 0xa8cb60 <nil>  [] 0s} 172.30.132.125 22 <nil> <nil>}
	I0318 13:39:06.587273    4548 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0318 13:39:06.741918    4548 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0318 13:39:06.741918    4548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-692300 ).state
	I0318 13:39:08.758413    4548 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 13:39:08.758413    4548 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:39:08.759103    4548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-692300 ).networkadapters[0]).ipaddresses[0]
	I0318 13:39:11.176372    4548 main.go:141] libmachine: [stdout =====>] : 172.30.132.125
	
	I0318 13:39:11.176645    4548 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:39:11.182961    4548 main.go:141] libmachine: Using SSH client type: native
	I0318 13:39:11.183477    4548 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa89f80] 0xa8cb60 <nil>  [] 0s} 172.30.132.125 22 <nil> <nil>}
	I0318 13:39:11.183477    4548 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0318 13:39:13.539880    4548 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0318 13:39:13.540007    4548 machine.go:97] duration metric: took 45.1747672s to provisionDockerMachine
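The docker.service install above uses an "install only if changed" idiom: `diff -u old new || { mv new old; reload; }`. When the files match, diff exits 0 and nothing happens; when they differ, or when the old file does not exist yet (diff's "can't stat" error in the log), the `||` branch installs the new unit. A sketch on scratch files, without the systemctl calls:

```shell
# Sketch of the "replace only if changed" pattern used for
# docker.service above, run on scratch files (no systemctl here).
CUR=$(mktemp)
NEW=$(mktemp)
echo 'ExecStart=/usr/bin/dockerd --old-flag' > "$CUR"
echo 'ExecStart=/usr/bin/dockerd --new-flag' > "$NEW"

# diff exits non-zero when the files differ (or the old file is
# missing), so the || branch installs the new version.
diff -u "$CUR" "$NEW" || { mv "$NEW" "$CUR"; echo "installed"; }
cat "$CUR"
```

This makes the provisioning step idempotent: rerunning it against an unchanged unit skips the daemon-reload and restart entirely.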
	I0318 13:39:13.540061    4548 start.go:293] postStartSetup for "kubernetes-upgrade-692300" (driver="hyperv")
	I0318 13:39:13.540113    4548 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0318 13:39:13.552143    4548 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0318 13:39:13.552143    4548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-692300 ).state
	I0318 13:39:15.575654    4548 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 13:39:15.575654    4548 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:39:15.575843    4548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-692300 ).networkadapters[0]).ipaddresses[0]
	I0318 13:39:17.974635    4548 main.go:141] libmachine: [stdout =====>] : 172.30.132.125
	
	I0318 13:39:17.975290    4548 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:39:17.975290    4548 sshutil.go:53] new ssh client: &{IP:172.30.132.125 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\kubernetes-upgrade-692300\id_rsa Username:docker}
	I0318 13:39:18.080683    4548 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.5284537s)
	I0318 13:39:18.092959    4548 ssh_runner.go:195] Run: cat /etc/os-release
	I0318 13:39:18.099300    4548 info.go:137] Remote host: Buildroot 2023.02.9
	I0318 13:39:18.099300    4548 filesync.go:126] Scanning C:\Users\jenkins.minikube3\minikube-integration\.minikube\addons for local assets ...
	I0318 13:39:18.099300    4548 filesync.go:126] Scanning C:\Users\jenkins.minikube3\minikube-integration\.minikube\files for local assets ...
	I0318 13:39:18.100889    4548 filesync.go:149] local asset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\134242.pem -> 134242.pem in /etc/ssl/certs
	I0318 13:39:18.113694    4548 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0318 13:39:18.130830    4548 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\134242.pem --> /etc/ssl/certs/134242.pem (1708 bytes)
	I0318 13:39:18.176166    4548 start.go:296] duration metric: took 4.6360709s for postStartSetup
	I0318 13:39:18.176234    4548 fix.go:56] duration metric: took 1m27.5011937s for fixHost
	I0318 13:39:18.176336    4548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-692300 ).state
	I0318 13:39:20.182618    4548 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 13:39:20.182618    4548 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:39:20.183129    4548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-692300 ).networkadapters[0]).ipaddresses[0]
	I0318 13:39:22.644688    4548 main.go:141] libmachine: [stdout =====>] : 172.30.132.125
	
	I0318 13:39:22.645153    4548 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:39:22.650809    4548 main.go:141] libmachine: Using SSH client type: native
	I0318 13:39:22.651517    4548 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa89f80] 0xa8cb60 <nil>  [] 0s} 172.30.132.125 22 <nil> <nil>}
	I0318 13:39:22.651517    4548 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0318 13:39:22.778921    4548 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710769162.780175809
	
	I0318 13:39:22.778996    4548 fix.go:216] guest clock: 1710769162.780175809
	I0318 13:39:22.778996    4548 fix.go:229] Guest: 2024-03-18 13:39:22.780175809 +0000 UTC Remote: 2024-03-18 13:39:18.1763079 +0000 UTC m=+288.123052301 (delta=4.603867909s)
	I0318 13:39:22.779122    4548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-692300 ).state
	I0318 13:39:24.806568    4548 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 13:39:24.807358    4548 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:39:24.807451    4548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-692300 ).networkadapters[0]).ipaddresses[0]
	I0318 13:39:27.236404    4548 main.go:141] libmachine: [stdout =====>] : 172.30.132.125
	
	I0318 13:39:27.236553    4548 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:39:27.242262    4548 main.go:141] libmachine: Using SSH client type: native
	I0318 13:39:27.242733    4548 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa89f80] 0xa8cb60 <nil>  [] 0s} 172.30.132.125 22 <nil> <nil>}
	I0318 13:39:27.242801    4548 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1710769162
	I0318 13:39:27.379614    4548 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Mar 18 13:39:22 UTC 2024
	
	I0318 13:39:27.379614    4548 fix.go:236] clock set: Mon Mar 18 13:39:22 UTC 2024
	 (err=<nil>)
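The clock fix above reads the guest time with `date +%s.%N`, compares it to the host clock, and resets the guest with `sudo date -s @<epoch>` when the drift is large enough. A sketch of the comparison, with the guest epoch taken from the log and a host-side value chosen for illustration:

```shell
# Sketch of the guest-clock drift check. GUEST is the epoch from the
# log above; HOST is an illustrative host-side reading.
GUEST=1710769162
HOST=1710769158
DELTA=$((GUEST - HOST))

# ${DELTA#-} strips a leading minus sign, giving the absolute drift.
if [ "${DELTA#-}" -gt 2 ]; then
  echo "drift ${DELTA}s: would run sudo date -s @$HOST on the guest"
fi
```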
	I0318 13:39:27.379614    4548 start.go:83] releasing machines lock for "kubernetes-upgrade-692300", held for 1m36.7045914s
	I0318 13:39:27.379614    4548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-692300 ).state
	I0318 13:39:29.578149    4548 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 13:39:29.578149    4548 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:39:29.578149    4548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-692300 ).networkadapters[0]).ipaddresses[0]
	I0318 13:39:32.101060    4548 main.go:141] libmachine: [stdout =====>] : 172.30.132.125
	
	I0318 13:39:32.101579    4548 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:39:32.105610    4548 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0318 13:39:32.105610    4548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-692300 ).state
	I0318 13:39:32.117679    4548 ssh_runner.go:195] Run: cat /version.json
	I0318 13:39:32.118365    4548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-692300 ).state
	I0318 13:39:34.311741    4548 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 13:39:34.311813    4548 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:39:34.311846    4548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-692300 ).networkadapters[0]).ipaddresses[0]
	I0318 13:39:34.316559    4548 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 13:39:34.316559    4548 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:39:34.316559    4548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-692300 ).networkadapters[0]).ipaddresses[0]
	I0318 13:39:36.985510    4548 main.go:141] libmachine: [stdout =====>] : 172.30.132.125
	
	I0318 13:39:36.985510    4548 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:39:36.985510    4548 sshutil.go:53] new ssh client: &{IP:172.30.132.125 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\kubernetes-upgrade-692300\id_rsa Username:docker}
	I0318 13:39:37.008480    4548 main.go:141] libmachine: [stdout =====>] : 172.30.132.125
	
	I0318 13:39:37.008480    4548 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:39:37.008480    4548 sshutil.go:53] new ssh client: &{IP:172.30.132.125 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\kubernetes-upgrade-692300\id_rsa Username:docker}
	I0318 13:39:37.075860    4548 ssh_runner.go:235] Completed: cat /version.json: (4.9576028s)
	I0318 13:39:37.088793    4548 ssh_runner.go:195] Run: systemctl --version
	I0318 13:39:37.156184    4548 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.0505367s)
	I0318 13:39:37.169549    4548 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0318 13:39:37.177514    4548 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0318 13:39:37.195084    4548 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0318 13:39:37.226157    4548 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0318 13:39:37.258751    4548 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0318 13:39:37.258751    4548 start.go:494] detecting cgroup driver to use...
	I0318 13:39:37.258751    4548 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0318 13:39:37.307507    4548 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0318 13:39:37.339300    4548 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0318 13:39:37.358069    4548 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0318 13:39:37.369039    4548 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0318 13:39:37.399197    4548 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0318 13:39:37.427226    4548 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0318 13:39:37.463757    4548 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0318 13:39:37.494184    4548 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0318 13:39:37.523088    4548 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0318 13:39:37.554097    4548 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0318 13:39:37.585009    4548 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0318 13:39:37.623683    4548 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 13:39:37.821543    4548 ssh_runner.go:195] Run: sudo systemctl restart containerd
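The sed edits in the run above flip containerd's config.toml to the cgroupfs driver and normalize the CNI conf directory. The same substitutions applied to a scratch file (the real commands target /etc/containerd/config.toml over SSH; the seed config here is a minimal illustrative fragment):

```shell
# Sketch of the config.toml edits above, on a scratch file.
CFG=$(mktemp)
cat > "$CFG" <<'EOF'
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
[plugins."io.containerd.grpc.v1.cri".cni]
  conf_dir = "/etc/cni/net.d.custom"
EOF

# Use cgroupfs instead of the systemd cgroup driver.
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$CFG"
# Point CNI at the standard conf directory.
sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' "$CFG"
grep -E 'SystemdCgroup|conf_dir' "$CFG"
```

The capture group `( *)` preserves the original indentation, so the edit works regardless of how the generated config.toml is nested.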
	I0318 13:39:37.856755    4548 start.go:494] detecting cgroup driver to use...
	I0318 13:39:37.874214    4548 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0318 13:39:37.911375    4548 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0318 13:39:37.941912    4548 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0318 13:39:37.987293    4548 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0318 13:39:38.032040    4548 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0318 13:39:38.073152    4548 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0318 13:39:38.138060    4548 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0318 13:39:38.160366    4548 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0318 13:39:38.209121    4548 ssh_runner.go:195] Run: which cri-dockerd
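The crictl.yaml write just above points crictl at the cri-dockerd socket (replacing the containerd endpoint written earlier). The same write, sent to a scratch path instead of /etc/crictl.yaml:

```shell
# Sketch of the crictl.yaml write above, to a scratch path.
CRICTL=$(mktemp)
printf '%s\n' 'runtime-endpoint: unix:///var/run/cri-dockerd.sock' | tee "$CRICTL"
```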
	I0318 13:39:38.226679    4548 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0318 13:39:38.244178    4548 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0318 13:39:38.286981    4548 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0318 13:39:38.471590    4548 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0318 13:39:38.659289    4548 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0318 13:39:38.659603    4548 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0318 13:39:38.705915    4548 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 13:39:38.890764    4548 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0318 13:40:40.027750    4548 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.1365272s)
	I0318 13:40:40.041604    4548 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0318 13:40:40.078742    4548 out.go:177] 
	W0318 13:40:40.081253    4548 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Mar 18 13:39:11 kubernetes-upgrade-692300 systemd[1]: Starting Docker Application Container Engine...
	Mar 18 13:39:11 kubernetes-upgrade-692300 dockerd[658]: time="2024-03-18T13:39:11.767746276Z" level=info msg="Starting up"
	Mar 18 13:39:11 kubernetes-upgrade-692300 dockerd[658]: time="2024-03-18T13:39:11.768882185Z" level=info msg="containerd not running, starting managed containerd"
	Mar 18 13:39:11 kubernetes-upgrade-692300 dockerd[658]: time="2024-03-18T13:39:11.769878193Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=665
	Mar 18 13:39:11 kubernetes-upgrade-692300 dockerd[665]: time="2024-03-18T13:39:11.800073731Z" level=info msg="starting containerd" revision=dcf2847247e18caba8dce86522029642f60fe96b version=v1.7.14
	Mar 18 13:39:11 kubernetes-upgrade-692300 dockerd[665]: time="2024-03-18T13:39:11.823507115Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Mar 18 13:39:11 kubernetes-upgrade-692300 dockerd[665]: time="2024-03-18T13:39:11.823617716Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Mar 18 13:39:11 kubernetes-upgrade-692300 dockerd[665]: time="2024-03-18T13:39:11.823788318Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Mar 18 13:39:11 kubernetes-upgrade-692300 dockerd[665]: time="2024-03-18T13:39:11.823807618Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Mar 18 13:39:11 kubernetes-upgrade-692300 dockerd[665]: time="2024-03-18T13:39:11.825279129Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Mar 18 13:39:11 kubernetes-upgrade-692300 dockerd[665]: time="2024-03-18T13:39:11.825366730Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Mar 18 13:39:11 kubernetes-upgrade-692300 dockerd[665]: time="2024-03-18T13:39:11.825596932Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Mar 18 13:39:11 kubernetes-upgrade-692300 dockerd[665]: time="2024-03-18T13:39:11.825749633Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Mar 18 13:39:11 kubernetes-upgrade-692300 dockerd[665]: time="2024-03-18T13:39:11.825770333Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Mar 18 13:39:11 kubernetes-upgrade-692300 dockerd[665]: time="2024-03-18T13:39:11.825781933Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Mar 18 13:39:11 kubernetes-upgrade-692300 dockerd[665]: time="2024-03-18T13:39:11.826456839Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Mar 18 13:39:11 kubernetes-upgrade-692300 dockerd[665]: time="2024-03-18T13:39:11.827494447Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Mar 18 13:39:11 kubernetes-upgrade-692300 dockerd[665]: time="2024-03-18T13:39:11.829983166Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Mar 18 13:39:11 kubernetes-upgrade-692300 dockerd[665]: time="2024-03-18T13:39:11.830074867Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Mar 18 13:39:11 kubernetes-upgrade-692300 dockerd[665]: time="2024-03-18T13:39:11.830220468Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Mar 18 13:39:11 kubernetes-upgrade-692300 dockerd[665]: time="2024-03-18T13:39:11.830305769Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Mar 18 13:39:11 kubernetes-upgrade-692300 dockerd[665]: time="2024-03-18T13:39:11.830841273Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Mar 18 13:39:11 kubernetes-upgrade-692300 dockerd[665]: time="2024-03-18T13:39:11.830881174Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Mar 18 13:39:11 kubernetes-upgrade-692300 dockerd[665]: time="2024-03-18T13:39:11.830895974Z" level=info msg="metadata content store policy set" policy=shared
	Mar 18 13:39:11 kubernetes-upgrade-692300 dockerd[665]: time="2024-03-18T13:39:11.834812105Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Mar 18 13:39:11 kubernetes-upgrade-692300 dockerd[665]: time="2024-03-18T13:39:11.834911605Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Mar 18 13:39:11 kubernetes-upgrade-692300 dockerd[665]: time="2024-03-18T13:39:11.834934005Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Mar 18 13:39:11 kubernetes-upgrade-692300 dockerd[665]: time="2024-03-18T13:39:11.834958006Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Mar 18 13:39:11 kubernetes-upgrade-692300 dockerd[665]: time="2024-03-18T13:39:11.834988706Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Mar 18 13:39:11 kubernetes-upgrade-692300 dockerd[665]: time="2024-03-18T13:39:11.835053006Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Mar 18 13:39:11 kubernetes-upgrade-692300 dockerd[665]: time="2024-03-18T13:39:11.835289808Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Mar 18 13:39:11 kubernetes-upgrade-692300 dockerd[665]: time="2024-03-18T13:39:11.835407509Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Mar 18 13:39:11 kubernetes-upgrade-692300 dockerd[665]: time="2024-03-18T13:39:11.835504710Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Mar 18 13:39:11 kubernetes-upgrade-692300 dockerd[665]: time="2024-03-18T13:39:11.835522310Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Mar 18 13:39:11 kubernetes-upgrade-692300 dockerd[665]: time="2024-03-18T13:39:11.835535210Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Mar 18 13:39:11 kubernetes-upgrade-692300 dockerd[665]: time="2024-03-18T13:39:11.835551510Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Mar 18 13:39:11 kubernetes-upgrade-692300 dockerd[665]: time="2024-03-18T13:39:11.835593111Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Mar 18 13:39:11 kubernetes-upgrade-692300 dockerd[665]: time="2024-03-18T13:39:11.835609311Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Mar 18 13:39:11 kubernetes-upgrade-692300 dockerd[665]: time="2024-03-18T13:39:11.835639111Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Mar 18 13:39:11 kubernetes-upgrade-692300 dockerd[665]: time="2024-03-18T13:39:11.835650611Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Mar 18 13:39:11 kubernetes-upgrade-692300 dockerd[665]: time="2024-03-18T13:39:11.835662211Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Mar 18 13:39:11 kubernetes-upgrade-692300 dockerd[665]: time="2024-03-18T13:39:11.835734512Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Mar 18 13:39:11 kubernetes-upgrade-692300 dockerd[665]: time="2024-03-18T13:39:11.835758212Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Mar 18 13:39:11 kubernetes-upgrade-692300 dockerd[665]: time="2024-03-18T13:39:11.835771812Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Mar 18 13:39:11 kubernetes-upgrade-692300 dockerd[665]: time="2024-03-18T13:39:11.835784412Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Mar 18 13:39:11 kubernetes-upgrade-692300 dockerd[665]: time="2024-03-18T13:39:11.835796412Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Mar 18 13:39:11 kubernetes-upgrade-692300 dockerd[665]: time="2024-03-18T13:39:11.835815212Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Mar 18 13:39:11 kubernetes-upgrade-692300 dockerd[665]: time="2024-03-18T13:39:11.835828613Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Mar 18 13:39:11 kubernetes-upgrade-692300 dockerd[665]: time="2024-03-18T13:39:11.835839013Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Mar 18 13:39:11 kubernetes-upgrade-692300 dockerd[665]: time="2024-03-18T13:39:11.835850413Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Mar 18 13:39:11 kubernetes-upgrade-692300 dockerd[665]: time="2024-03-18T13:39:11.835861513Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Mar 18 13:39:11 kubernetes-upgrade-692300 dockerd[665]: time="2024-03-18T13:39:11.835874113Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Mar 18 13:39:11 kubernetes-upgrade-692300 dockerd[665]: time="2024-03-18T13:39:11.835884513Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Mar 18 13:39:11 kubernetes-upgrade-692300 dockerd[665]: time="2024-03-18T13:39:11.835895013Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Mar 18 13:39:11 kubernetes-upgrade-692300 dockerd[665]: time="2024-03-18T13:39:11.835905813Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Mar 18 13:39:11 kubernetes-upgrade-692300 dockerd[665]: time="2024-03-18T13:39:11.835923013Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Mar 18 13:39:11 kubernetes-upgrade-692300 dockerd[665]: time="2024-03-18T13:39:11.835941813Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Mar 18 13:39:11 kubernetes-upgrade-692300 dockerd[665]: time="2024-03-18T13:39:11.835953213Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Mar 18 13:39:11 kubernetes-upgrade-692300 dockerd[665]: time="2024-03-18T13:39:11.835978814Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Mar 18 13:39:11 kubernetes-upgrade-692300 dockerd[665]: time="2024-03-18T13:39:11.836035614Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Mar 18 13:39:11 kubernetes-upgrade-692300 dockerd[665]: time="2024-03-18T13:39:11.836055914Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Mar 18 13:39:11 kubernetes-upgrade-692300 dockerd[665]: time="2024-03-18T13:39:11.836066014Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Mar 18 13:39:11 kubernetes-upgrade-692300 dockerd[665]: time="2024-03-18T13:39:11.836075514Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Mar 18 13:39:11 kubernetes-upgrade-692300 dockerd[665]: time="2024-03-18T13:39:11.836140415Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Mar 18 13:39:11 kubernetes-upgrade-692300 dockerd[665]: time="2024-03-18T13:39:11.836175815Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Mar 18 13:39:11 kubernetes-upgrade-692300 dockerd[665]: time="2024-03-18T13:39:11.836186915Z" level=info msg="NRI interface is disabled by configuration."
	Mar 18 13:39:11 kubernetes-upgrade-692300 dockerd[665]: time="2024-03-18T13:39:11.836379317Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Mar 18 13:39:11 kubernetes-upgrade-692300 dockerd[665]: time="2024-03-18T13:39:11.836448617Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Mar 18 13:39:11 kubernetes-upgrade-692300 dockerd[665]: time="2024-03-18T13:39:11.836489018Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Mar 18 13:39:11 kubernetes-upgrade-692300 dockerd[665]: time="2024-03-18T13:39:11.836505718Z" level=info msg="containerd successfully booted in 0.038846s"
	Mar 18 13:39:12 kubernetes-upgrade-692300 dockerd[658]: time="2024-03-18T13:39:12.826681417Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Mar 18 13:39:12 kubernetes-upgrade-692300 dockerd[658]: time="2024-03-18T13:39:12.954067258Z" level=info msg="Loading containers: start."
	Mar 18 13:39:13 kubernetes-upgrade-692300 dockerd[658]: time="2024-03-18T13:39:13.377783830Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Mar 18 13:39:13 kubernetes-upgrade-692300 dockerd[658]: time="2024-03-18T13:39:13.459106684Z" level=info msg="Loading containers: done."
	Mar 18 13:39:13 kubernetes-upgrade-692300 dockerd[658]: time="2024-03-18T13:39:13.485779797Z" level=info msg="Docker daemon" commit=061aa95 containerd-snapshotter=false storage-driver=overlay2 version=25.0.4
	Mar 18 13:39:13 kubernetes-upgrade-692300 dockerd[658]: time="2024-03-18T13:39:13.486541606Z" level=info msg="Daemon has completed initialization"
	Mar 18 13:39:13 kubernetes-upgrade-692300 dockerd[658]: time="2024-03-18T13:39:13.539327825Z" level=info msg="API listen on [::]:2376"
	Mar 18 13:39:13 kubernetes-upgrade-692300 dockerd[658]: time="2024-03-18T13:39:13.539447826Z" level=info msg="API listen on /var/run/docker.sock"
	Mar 18 13:39:13 kubernetes-upgrade-692300 systemd[1]: Started Docker Application Container Engine.
	Mar 18 13:39:38 kubernetes-upgrade-692300 dockerd[658]: time="2024-03-18T13:39:38.917044456Z" level=info msg="Processing signal 'terminated'"
	Mar 18 13:39:38 kubernetes-upgrade-692300 dockerd[658]: time="2024-03-18T13:39:38.918721764Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Mar 18 13:39:38 kubernetes-upgrade-692300 dockerd[658]: time="2024-03-18T13:39:38.919294166Z" level=info msg="Daemon shutdown complete"
	Mar 18 13:39:38 kubernetes-upgrade-692300 dockerd[658]: time="2024-03-18T13:39:38.919477667Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Mar 18 13:39:38 kubernetes-upgrade-692300 dockerd[658]: time="2024-03-18T13:39:38.919751469Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Mar 18 13:39:38 kubernetes-upgrade-692300 systemd[1]: Stopping Docker Application Container Engine...
	Mar 18 13:39:39 kubernetes-upgrade-692300 systemd[1]: docker.service: Deactivated successfully.
	Mar 18 13:39:39 kubernetes-upgrade-692300 systemd[1]: Stopped Docker Application Container Engine.
	Mar 18 13:39:39 kubernetes-upgrade-692300 systemd[1]: Starting Docker Application Container Engine...
	Mar 18 13:39:40 kubernetes-upgrade-692300 dockerd[1119]: time="2024-03-18T13:39:40.003235870Z" level=info msg="Starting up"
	Mar 18 13:40:40 kubernetes-upgrade-692300 dockerd[1119]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Mar 18 13:40:40 kubernetes-upgrade-692300 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Mar 18 13:40:40 kubernetes-upgrade-692300 systemd[1]: docker.service: Failed with result 'exit-code'.
	Mar 18 13:40:40 kubernetes-upgrade-692300 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0318 13:40:40.081253    4548 out.go:239] * 
	* 
	W0318 13:40:40.082869    4548 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0318 13:40:40.085955    4548 out.go:177] 

** /stderr **
version_upgrade_test.go:245: failed to upgrade with newest k8s version. args: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-692300 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=hyperv : exit status 90
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-692300 version --output=json
version_upgrade_test.go:248: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-692300 version --output=json: exit status 1 (156.2084ms)

** stderr ** 
	error: context "kubernetes-upgrade-692300" does not exist

** /stderr **
version_upgrade_test.go:250: error running kubectl: exit status 1
panic.go:626: *** TestKubernetesUpgrade FAILED at 2024-03-18 13:40:40.4313131 +0000 UTC m=+9421.751837701
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p kubernetes-upgrade-692300 -n kubernetes-upgrade-692300
E0318 13:40:42.118730   13424 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-611000\client.crt: The system cannot find the path specified.
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p kubernetes-upgrade-692300 -n kubernetes-upgrade-692300: exit status 6 (11.7711958s)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	W0318 13:40:40.545409    7804 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E0318 13:40:52.137683    7804 status.go:417] kubeconfig endpoint: get endpoint: "kubernetes-upgrade-692300" does not appear in C:\Users\jenkins.minikube3\minikube-integration\kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "kubernetes-upgrade-692300" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-692300" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p kubernetes-upgrade-692300
E0318 13:41:13.084302   13424 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-209500\client.crt: The system cannot find the path specified.
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p kubernetes-upgrade-692300: (1m3.8694253s)
--- FAIL: TestKubernetesUpgrade (678.32s)
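A quirk visible in the post-mortem above: `minikube status --format={{.Host}}` returned a multi-line string ("Running" plus WARNING lines about the stale kubectl context), so the harness compared against the whole string and skipped log retrieval even though the host state itself was "Running". A minimal POSIX-shell sketch of isolating just the first line (the `status` value is copied from the output above; the variable names are illustrative, not part of the harness):

```shell
# Status text as captured in the log above: the host state followed by
# warning lines that minikube appended to the same stream.
status='Running
WARNING: Your kubectl is pointing to stale minikube-vm.
To fix the kubectl context, run `minikube update-context`'

# Keep only the first line -- the actual host state.
host_state=$(printf '%s\n' "$status" | head -n 1)
printf '%s\n' "$host_state"   # prints "Running"
```

Read this way, the VM was up but its kubeconfig entry was gone, which matches the `kubeconfig endpoint` error in the stderr block above.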

TestNoKubernetes/serial/StartWithK8s (302.22s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-692300 --driver=hyperv
no_kubernetes_test.go:95: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p NoKubernetes-692300 --driver=hyperv: exit status 1 (4m59.658624s)

-- stdout --
	* [NoKubernetes-692300] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4046 Build 19045.4046
	  - KUBECONFIG=C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube3\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=18429
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the hyperv driver based on user configuration
	* Starting "NoKubernetes-692300" primary control-plane node in "NoKubernetes-692300" cluster
	* Creating hyperv VM (CPUs=2, Memory=6000MB, Disk=20000MB) ...

-- /stdout --
** stderr ** 
	W0318 13:30:38.263931    7360 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
no_kubernetes_test.go:97: failed to start minikube with args: "out/minikube-windows-amd64.exe start -p NoKubernetes-692300 --driver=hyperv" : exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p NoKubernetes-692300 -n NoKubernetes-692300
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p NoKubernetes-692300 -n NoKubernetes-692300: exit status 7 (2.5553034s)

-- stdout --
	Stopped

-- /stdout --
** stderr ** 
	W0318 13:35:37.897582    3992 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-692300" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithK8s (302.22s)

TestPause/serial/PauseAgain (10800.598s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe pause -p pause-276100 --alsologtostderr -v=5
panic: test timed out after 3h0m0s
running tests:
	TestNetworkPlugins (21m43s)
	TestNetworkPlugins/group/auto (8m27s)
	TestNetworkPlugins/group/calico (3m9s)
	TestNetworkPlugins/group/calico/Start (3m9s)
	TestNetworkPlugins/group/kindnet (3m39s)
	TestNetworkPlugins/group/kindnet/Start (3m39s)
	TestPause (14m9s)
	TestPause/serial (14m9s)
	TestPause/serial/PauseAgain (3s)
	TestStartStop (17m39s)

goroutine 2471 [running]:
testing.(*M).startAlarm.func1()
	/usr/local/go/src/testing/testing.go:2366 +0x385
created by time.goFunc
	/usr/local/go/src/time/sleep.go:177 +0x2d

goroutine 1 [chan receive, 4 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1650 +0x4ab
testing.tRunner(0xc0004ae9c0, 0xc000659bb0)
	/usr/local/go/src/testing/testing.go:1695 +0x134
testing.runTests(0xc0008582a0, {0x44944c0, 0x2a, 0x2a}, {0x21f612e?, 0x1081af?, 0x44b6ca0?})
	/usr/local/go/src/testing/testing.go:2159 +0x445
testing.(*M).Run(0xc00062f860)
	/usr/local/go/src/testing/testing.go:2027 +0x68b
k8s.io/minikube/test/integration.TestMain(0xc00062f860)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/main_test.go:62 +0x8b
main.main()
	_testmain.go:131 +0x195

goroutine 7 [select]:
go.opencensus.io/stats/view.(*worker).start(0xc000516680)
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:292 +0x9f
created by go.opencensus.io/stats/view.init.0 in goroutine 1
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:34 +0x8d

goroutine 951 [chan receive, 149 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc002e4ac40, 0xc000148ae0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 878
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/transport/cache.go:122 +0x585

goroutine 2396 [syscall, 4 minutes, locked to thread]:
syscall.SyscallN(0x7ffda8404de0?, {0xc000677bd0?, 0x3?, 0x0?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall(0x3?, 0x3?, 0x1?, 0x2?, 0x0?)
	/usr/local/go/src/runtime/syscall_windows.go:482 +0x35
syscall.WaitForSingleObject(0x374, 0xffffffff)
	/usr/local/go/src/syscall/zsyscall_windows.go:1142 +0x5d
os.(*Process).wait(0xc000aed380)
	/usr/local/go/src/os/exec_windows.go:18 +0x50
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc000928b00)
	/usr/local/go/src/os/exec/exec.go:897 +0x45
os/exec.(*Cmd).Run(0xc000928b00)
	/usr/local/go/src/os/exec/exec.go:607 +0x2d
k8s.io/minikube/test/integration.Run(0xc0024d1ba0, 0xc000928b00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1.1(0xc0024d1ba0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:112 +0x52
testing.tRunner(0xc0024d1ba0, 0xc002dcac00)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2238
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 16 [select]:
k8s.io/klog/v2.(*flushDaemon).run.func1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/klog/v2@v2.120.1/klog.go:1174 +0x117
created by k8s.io/klog/v2.(*flushDaemon).run in goroutine 25
	/var/lib/jenkins/go/pkg/mod/k8s.io/klog/v2@v2.120.1/klog.go:1170 +0x171

goroutine 64 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0xc000974990, 0x3c)
	/usr/local/go/src/runtime/sema.go:569 +0x15d
sync.(*Cond).Wait(0x1cb5a80?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc000cca660)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0009749c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000067c00, {0x314b460, 0xc00097d740}, 0x1, 0xc000148ae0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000067c00, 0x3b9aca00, 0x0, 0x1, 0xc000148ae0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 144
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/transport/cert_rotation.go:140 +0x1ef

goroutine 178 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 65
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/poll.go:280 +0xbb

goroutine 2238 [chan receive, 4 minutes]:
testing.(*T).Run(0xc00238d6c0, {0x219b874?, 0x31440a0?}, 0xc002dcac00)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc00238d6c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:111 +0x5de
testing.tRunner(0xc00238d6c0, 0xc000146a00)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2229
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2251 [chan receive, 18 minutes]:
testing.(*testContext).waitParallel(0xc0006b97c0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc00238dd40)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc00238dd40)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc00238dd40)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:94 +0x45
testing.tRunner(0xc00238dd40, 0xc0006ac240)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2249
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 65 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x316dce0, 0xc000148ae0}, 0xc002223f50, 0xc002223f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x316dce0, 0xc000148ae0}, 0xa0?, 0xc002223f50, 0xc002223f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x316dce0?, 0xc000148ae0?}, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc002223fd0?, 0x1de6e4?, 0xc000848cf0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 144
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/transport/cert_rotation.go:142 +0x29a

goroutine 2393 [syscall, 2 minutes, locked to thread]:
syscall.SyscallN(0x80?, {0xc002d73b20?, 0x67f45?, 0x4544100?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0x4465180?, 0xc002d73b80?, 0x5fe76?, 0x4544100?, 0xc002d73c08?, 0x52a45?, 0x1af1a690598?, 0x4d?)
	/usr/local/go/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x7d4, {0xc000d08a2e?, 0x5d2, 0x1042bf?}, 0x0?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:442
syscall.Read(0xc000d79688?, {0xc000d08a2e?, 0x8c25e?, 0x800?})
	/usr/local/go/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc000d79688, {0xc000d08a2e, 0x5d2, 0x5d2})
	/usr/local/go/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc000b060a0, {0xc000d08a2e?, 0xc0006a4c40?, 0x22d?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc002dcab40, {0x314a020, 0xc000682498})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x314a160, 0xc002dcab40}, {0x314a020, 0xc000682498}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc002d73e78?, {0x314a160, 0xc002dcab40})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0x4448a20?, {0x314a160?, 0xc002dcab40?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x314a160, 0xc002dcab40}, {0x314a0e0, 0xc000b060a0}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc002dc6540?)
	/usr/local/go/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2392
	/usr/local/go/src/os/exec/exec.go:723 +0xa2b

goroutine 2049 [chan receive, 22 minutes]:
testing.(*T).Run(0xc0004ae4e0, {0x219b86f?, 0xbf56d?}, 0xc002914090)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestNetworkPlugins(0xc0004ae4e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:52 +0xd4
testing.tRunner(0xc0004ae4e0, 0x2bfecf0)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 143 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc000cca780)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 152
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/util/workqueue/delaying_queue.go:113 +0x205

goroutine 144 [chan receive, 173 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0009749c0, 0xc000148ae0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 152
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/transport/cache.go:122 +0x585

goroutine 1333 [chan send, 142 minutes]:
os/exec.(*Cmd).watchCtx(0xc002bdac60, 0xc002bcad20)
	/usr/local/go/src/os/exec/exec.go:789 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 833
	/usr/local/go/src/os/exec/exec.go:750 +0x9f3

goroutine 2255 [chan receive, 18 minutes]:
testing.(*testContext).waitParallel(0xc0006b97c0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0024d1040)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0024d1040)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc0024d1040)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:94 +0x45
testing.tRunner(0xc0024d1040, 0xc0006ac380)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2249
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2252 [chan receive, 18 minutes]:
testing.(*testContext).waitParallel(0xc0006b97c0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc00012da00)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc00012da00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc00012da00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:94 +0x45
testing.tRunner(0xc00012da00, 0xc0006ac280)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2249
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2233 [chan receive, 22 minutes]:
testing.(*testContext).waitParallel(0xc0006b97c0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0000edd40)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0000edd40)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0000edd40)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc0000edd40, 0xc000146700)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2229
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2237 [chan receive, 22 minutes]:
testing.(*testContext).waitParallel(0xc0006b97c0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc00238d520)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc00238d520)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc00238d520)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc00238d520, 0xc000146980)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2229
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2249 [chan receive, 18 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1650 +0x4ab
testing.tRunner(0xc00238d860, 0x2bfef10)
	/usr/local/go/src/testing/testing.go:1695 +0x134
created by testing.(*T).Run in goroutine 2129
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 733 [IO wait, 159 minutes]:
internal/poll.runtime_pollWait(0x1af5fb6d100, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc0000a9c08?, 0x0?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.execIO(0xc00015cf20, 0xc002551bb0)
	/usr/local/go/src/internal/poll/fd_windows.go:175 +0xe6
internal/poll.(*FD).acceptOne(0xc00015cf08, 0x38c, {0xc00022d0e0?, 0x0?, 0x0?}, 0xc0000a9808?)
	/usr/local/go/src/internal/poll/fd_windows.go:944 +0x67
internal/poll.(*FD).Accept(0xc00015cf08, 0xc002551d90)
	/usr/local/go/src/internal/poll/fd_windows.go:978 +0x1bc
net.(*netFD).accept(0xc00015cf08)
	/usr/local/go/src/net/fd_windows.go:178 +0x54
net.(*TCPListener).accept(0xc0000aaac0)
	/usr/local/go/src/net/tcpsock_posix.go:159 +0x1e
net.(*TCPListener).Accept(0xc0000aaac0)
	/usr/local/go/src/net/tcpsock.go:327 +0x30
net/http.(*Server).Serve(0xc0004bc0f0, {0x3161830, 0xc0000aaac0})
	/usr/local/go/src/net/http/server.go:3255 +0x33e
net/http.(*Server).ListenAndServe(0xc0004bc0f0)
	/usr/local/go/src/net/http/server.go:3184 +0x71
k8s.io/minikube/test/integration.startHTTPProxy.func1(0xd?, 0xc0026c84e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2209 +0x18
created by k8s.io/minikube/test/integration.startHTTPProxy in goroutine 730
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2208 +0x129

goroutine 2254 [chan receive, 18 minutes]:
testing.(*testContext).waitParallel(0xc0006b97c0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc00012dd40)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc00012dd40)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc00012dd40)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:94 +0x45
testing.tRunner(0xc00012dd40, 0xc0006ac300)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2249
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2129 [chan receive, 18 minutes]:
testing.(*T).Run(0xc0004af1e0, {0x219b86f?, 0x197613?}, 0x2bfef10)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop(0xc0004af1e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:46 +0x35
testing.tRunner(0xc0004af1e0, 0x2bfed38)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2405 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2404
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/poll.go:280 +0xbb

goroutine 2314 [chan receive, 2 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0006ac780, 0xc000148ae0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2420
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/transport/cache.go:122 +0x585

goroutine 2253 [chan receive, 18 minutes]:
testing.(*testContext).waitParallel(0xc0006b97c0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc00012dba0)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc00012dba0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc00012dba0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:94 +0x45
testing.tRunner(0xc00012dba0, 0xc0006ac2c0)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2249
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2099 [chan receive, 14 minutes]:
testing.(*T).Run(0xc0004aeb60, {0x219cd73?, 0xd18c2e2800?}, 0xc002dca030)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestPause(0xc0004aeb60)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/pause_test.go:41 +0x159
testing.tRunner(0xc0004aeb60, 0x2bfed08)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2232 [chan receive, 22 minutes]:
testing.(*testContext).waitParallel(0xc0006b97c0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0000edba0)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0000edba0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0000edba0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc0000edba0, 0xc000146600)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2229
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2236 [chan receive, 22 minutes]:
testing.(*testContext).waitParallel(0xc0006b97c0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc00238d380)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc00238d380)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc00238d380)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc00238d380, 0xc000146880)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2229
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2231 [chan receive, 22 minutes]:
testing.(*testContext).waitParallel(0xc0006b97c0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0000ed380)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0000ed380)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0000ed380)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc0000ed380, 0xc000146580)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2229
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 950 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc00246ce40)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 878
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/util/workqueue/delaying_queue.go:113 +0x205

goroutine 2250 [chan receive, 18 minutes]:
testing.(*testContext).waitParallel(0xc0006b97c0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc00238dba0)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc00238dba0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc00238dba0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:94 +0x45
testing.tRunner(0xc00238dba0, 0xc0006ac200)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2249
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2235 [chan receive, 4 minutes]:
testing.(*T).Run(0xc00238d1e0, {0x219b874?, 0x31440a0?}, 0xc002dca7b0)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc00238d1e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:111 +0x5de
testing.tRunner(0xc00238d1e0, 0xc000146800)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2229
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2234 [chan receive, 22 minutes]:
testing.(*testContext).waitParallel(0xc0006b97c0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc00238c680)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc00238c680)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc00238c680)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc00238c680, 0xc000146780)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2229
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2229 [chan receive, 22 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1650 +0x4ab
testing.tRunner(0xc0000ec340, 0xc002914090)
	/usr/local/go/src/testing/testing.go:1695 +0x134
created by testing.(*T).Run in goroutine 2049
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 941 [sync.Cond.Wait, 4 minutes]:
sync.runtime_notifyListWait(0xc002e4ac10, 0x35)
	/usr/local/go/src/runtime/sema.go:569 +0x15d
sync.(*Cond).Wait(0x1cb5a80?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc00246cd20)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc002e4ac40)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0022174b0, {0x314b460, 0xc0032a7d10}, 0x1, 0xc000148ae0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0022174b0, 0x3b9aca00, 0x0, 0x1, 0xc000148ae0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 951
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/transport/cert_rotation.go:140 +0x1ef

goroutine 942 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x316dce0, 0xc000148ae0}, 0xc00258bf50, 0xc00258bf98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x316dce0, 0xc000148ae0}, 0x0?, 0xc00258bf50, 0xc00258bf98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x316dce0?, 0xc000148ae0?}, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x0?, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 951
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/transport/cert_rotation.go:142 +0x29a

goroutine 943 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 942
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/poll.go:280 +0xbb

goroutine 2486 [syscall, locked to thread]:
syscall.SyscallN(0xc0023ad800?, {0xc000655b20?, 0x67f45?, 0x4544100?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0xc0006f6280?, 0xc000655b80?, 0x5fe76?, 0x4544100?, 0xc000655c08?, 0x52a45?, 0x1af1a690a28?, 0xc0028e1f41?)
	/usr/local/go/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x794, {0xc00003413a?, 0x2c6, 0xc000034000?}, 0x0?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:442
syscall.Read(0xc00230e508?, {0xc00003413a?, 0x0?, 0x400?})
	/usr/local/go/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc00230e508, {0xc00003413a, 0x2c6, 0x2c6})
	/usr/local/go/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc000b06060, {0xc00003413a?, 0x1af5fe41cc8?, 0x13a?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc00258c000, {0x314a020, 0xc000b06008})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x314a160, 0xc00258c000}, {0x314a020, 0xc000b06008}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc000655e78?, {0x314a160, 0xc00258c000})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0x4448a20?, {0x314a160?, 0xc00258c000?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x314a160, 0xc00258c000}, {0x314a0e0, 0xc000b06060}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc0025b2d80?)
	/usr/local/go/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2230
	/usr/local/go/src/os/exec/exec.go:723 +0xa2b

goroutine 2319 [IO wait]:
internal/poll.runtime_pollWait(0x1af5fb6d1f8, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0x80789139f752c6db?, 0x2783792b4fda82a5?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.execIO(0xc00079c2a0, 0x2bff8e8)
	/usr/local/go/src/internal/poll/fd_windows.go:175 +0xe6
internal/poll.(*FD).Read(0xc00079c288, {0xc000b80000, 0x2000, 0x2000})
	/usr/local/go/src/internal/poll/fd_windows.go:436 +0x2b1
net.(*netFD).Read(0xc00079c288, {0xc000b80000?, 0xc00015e3c0?, 0x2?})
	/usr/local/go/src/net/fd_posix.go:55 +0x25
net.(*conn).Read(0xc0000a6838, {0xc000b80000?, 0xc000b80005?, 0x22?})
	/usr/local/go/src/net/net.go:179 +0x45
crypto/tls.(*atLeastReader).Read(0xc0008587c8, {0xc000b80000?, 0x0?, 0xc0008587c8?})
	/usr/local/go/src/crypto/tls/conn.go:806 +0x3b
bytes.(*Buffer).ReadFrom(0xc002b3c2b0, {0x314bbc0, 0xc0008587c8})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
crypto/tls.(*Conn).readFromUntil(0xc002b3c008, {0x1af5fdd38a0, 0xc000009c20}, 0xc000bc7960?)
	/usr/local/go/src/crypto/tls/conn.go:828 +0xde
crypto/tls.(*Conn).readRecordOrCCS(0xc002b3c008, 0x0)
	/usr/local/go/src/crypto/tls/conn.go:626 +0x3cf
crypto/tls.(*Conn).readRecord(...)
	/usr/local/go/src/crypto/tls/conn.go:588
crypto/tls.(*Conn).Read(0xc002b3c008, {0xc000630000, 0x1000, 0x50da5?})
	/usr/local/go/src/crypto/tls/conn.go:1370 +0x156
bufio.(*Reader).Read(0xc000be2300, {0xc0024b0900, 0x9, 0xc000bc7cf8?})
	/usr/local/go/src/bufio/bufio.go:241 +0x197
io.ReadAtLeast({0x314a200, 0xc000be2300}, {0xc0024b0900, 0x9, 0x9}, 0x9)
	/usr/local/go/src/io/io.go:335 +0x90
io.ReadFull(...)
	/usr/local/go/src/io/io.go:354
golang.org/x/net/http2.readFrameHeader({0xc0024b0900, 0x9, 0x5ed7a5?}, {0x314a200?, 0xc000be2300?})
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.22.0/http2/frame.go:237 +0x65
golang.org/x/net/http2.(*Framer).ReadFrame(0xc0024b08c0)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.22.0/http2/frame.go:498 +0x85
golang.org/x/net/http2.(*clientConnReadLoop).run(0xc000bc7fa8)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.22.0/http2/transport.go:2275 +0x12c
golang.org/x/net/http2.(*ClientConn).readLoop(0xc000254480)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.22.0/http2/transport.go:2170 +0x65
created by golang.org/x/net/http2.(*Transport).newClientConn in goroutine 2318
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.22.0/http2/transport.go:821 +0xca6

goroutine 2488 [syscall, locked to thread]:
syscall.SyscallN(0x7?, {0xc00231fb20?, 0x67f45?, 0x4544100?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0x6d?, 0xc00231fb80?, 0x5fe76?, 0x4544100?, 0xc00231fc08?, 0x52a45?, 0x1af1a690108?, 0xc00257d735?)
	/usr/local/go/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x620, {0xc000691400?, 0x200, 0xc000691400?}, 0x0?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:442
syscall.Read(0xc00230ec88?, {0xc000691400?, 0x8c25e?, 0x200?})
	/usr/local/go/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc00230ec88, {0xc000691400, 0x200, 0x200})
	/usr/local/go/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc000682290, {0xc000691400?, 0x1af5fe832a8?, 0x0?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc00258c150, {0x314a020, 0xc0007fc318})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x314a160, 0xc00258c150}, {0x314a020, 0xc0007fc318}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc00231fe78?, {0x314a160, 0xc00258c150})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0x4448a20?, {0x314a160?, 0xc00258c150?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x314a160, 0xc00258c150}, {0x314a0e0, 0xc000682290}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc0006b3500?)
	/usr/local/go/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2487
	/usr/local/go/src/os/exec/exec.go:723 +0xa2b

goroutine 2230 [syscall, locked to thread]:
syscall.SyscallN(0x7ffda8404de0?, {0xc000bed108?, 0x3?, 0x0?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall(0x3?, 0x3?, 0x0?, 0x1?, 0x0?)
	/usr/local/go/src/runtime/syscall_windows.go:482 +0x35
syscall.WaitForSingleObject(0x630, 0xffffffff)
	/usr/local/go/src/syscall/zsyscall_windows.go:1142 +0x5d
os.(*Process).wait(0xc000d10690)
	/usr/local/go/src/os/exec_windows.go:18 +0x50
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc000198000)
	/usr/local/go/src/os/exec/exec.go:897 +0x45
os/exec.(*Cmd).Run(0xc000198000)
	/usr/local/go/src/os/exec/exec.go:607 +0x2d
os/exec.(*Cmd).CombinedOutput(0xc000198000)
	/usr/local/go/src/os/exec/exec.go:1012 +0x85
k8s.io/minikube/test/integration.debugLogs(0xc0000ec4e0, {0xc0007827b0, 0xb})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:374 +0x2765
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0000ec4e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:211 +0xbcc
testing.tRunner(0xc0000ec4e0, 0xc000146200)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2229
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 1057 [chan send, 147 minutes]:
os/exec.(*Cmd).watchCtx(0xc0028cd340, 0xc0006b2000)
	/usr/local/go/src/os/exec/exec.go:789 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 1056
	/usr/local/go/src/os/exec/exec.go:750 +0x9f3

goroutine 2490 [select]:
os/exec.(*Cmd).watchCtx(0xc000198160, 0xc000b1e1e0)
	/usr/local/go/src/os/exec/exec.go:764 +0xb5
created by os/exec.(*Cmd).Start in goroutine 2487
	/usr/local/go/src/os/exec/exec.go:750 +0x9f3

goroutine 2404 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x316dce0, 0xc000148ae0}, 0xc002587f50, 0xc002587f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x316dce0, 0xc000148ae0}, 0x80?, 0xc002587f50, 0xc002587f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x316dce0?, 0xc000148ae0?}, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x1de685?, 0xc000b1c160?, 0xc0025b2180?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2314
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/transport/cert_rotation.go:142 +0x29a

goroutine 2395 [select, 4 minutes]:
os/exec.(*Cmd).watchCtx(0xc0009289a0, 0xc000d0c2a0)
	/usr/local/go/src/os/exec/exec.go:764 +0xb5
created by os/exec.(*Cmd).Start in goroutine 2392
	/usr/local/go/src/os/exec/exec.go:750 +0x9f3

goroutine 2394 [syscall, locked to thread]:
syscall.SyscallN(0xc002221b10?, {0xc002221b20?, 0x67f45?, 0x44c3b60?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0x100000000052db9?, 0xc002221b80?, 0x5fe76?, 0x4544100?, 0xc002221c08?, 0x52a45?, 0x0?, 0x8000?)
	/usr/local/go/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x65c, {0xc000bf4987?, 0x1679, 0x1042bf?}, 0x0?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:442
syscall.Read(0xc000d6b408?, {0xc000bf4987?, 0x2020205d3737313a?, 0x8000?})
	/usr/local/go/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc000d6b408, {0xc000bf4987, 0x1679, 0x1679})
	/usr/local/go/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc000b060c0, {0xc000bf4987?, 0xc002221d98?, 0x3e40?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc002dcabd0, {0x314a020, 0xc0001442d0})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x314a160, 0xc002dcabd0}, {0x314a020, 0xc0001442d0}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x0?, {0x314a160, 0xc002dcabd0})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0x4448a20?, {0x314a160?, 0xc002dcabd0?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x314a160, 0xc002dcabd0}, {0x314a0e0, 0xc000b060c0}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc0025b2240?)
	/usr/local/go/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2392
	/usr/local/go/src/os/exec/exec.go:723 +0xa2b

goroutine 2392 [syscall, 4 minutes, locked to thread]:
syscall.SyscallN(0x7ffda8404de0?, {0xc002389bd0?, 0x3?, 0x0?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall(0x3?, 0x3?, 0x1?, 0x2?, 0x0?)
	/usr/local/go/src/runtime/syscall_windows.go:482 +0x35
syscall.WaitForSingleObject(0x7c8, 0xffffffff)
	/usr/local/go/src/syscall/zsyscall_windows.go:1142 +0x5d
os.(*Process).wait(0xc000aecd50)
	/usr/local/go/src/os/exec_windows.go:18 +0x50
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc0009289a0)
	/usr/local/go/src/os/exec/exec.go:897 +0x45
os/exec.(*Cmd).Run(0xc0009289a0)
	/usr/local/go/src/os/exec/exec.go:607 +0x2d
k8s.io/minikube/test/integration.Run(0xc0024d16c0, 0xc0009289a0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1.1(0xc0024d16c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:112 +0x52
testing.tRunner(0xc0024d16c0, 0xc002dca7b0)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2235
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2348 [chan receive]:
testing.(*T).Run(0xc0024d1520, {0x21a62d7?, 0x24?}, 0xc002e4a340)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestPause.func1(0xc0024d1520)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/pause_test.go:65 +0x1ee
testing.tRunner(0xc0024d1520, 0xc002dca030)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2099
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2487 [syscall, locked to thread]:
syscall.SyscallN(0x7ffda8404de0?, {0xc0023b5ab0?, 0x3?, 0x0?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall(0x3?, 0x3?, 0x1?, 0x2?, 0x0?)
	/usr/local/go/src/runtime/syscall_windows.go:482 +0x35
syscall.WaitForSingleObject(0x738, 0xffffffff)
	/usr/local/go/src/syscall/zsyscall_windows.go:1142 +0x5d
os.(*Process).wait(0xc000d10c60)
	/usr/local/go/src/os/exec_windows.go:18 +0x50
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc000198160)
	/usr/local/go/src/os/exec/exec.go:897 +0x45
os/exec.(*Cmd).Run(0xc000198160)
	/usr/local/go/src/os/exec/exec.go:607 +0x2d
k8s.io/minikube/test/integration.Run(0xc0026c8680, 0xc000198160)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.validatePause({0x316db20?, 0xc00026a000?}, 0xc0026c8680, {0xc002192090?, 0xc00cf811e4?})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/pause_test.go:110 +0x16f
k8s.io/minikube/test/integration.TestPause.func1.1(0xc0026c8680)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/pause_test.go:66 +0x43
testing.tRunner(0xc0026c8680, 0xc002e4a340)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2348
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2399 [select, 4 minutes]:
os/exec.(*Cmd).watchCtx(0xc000928b00, 0xc000d0c420)
	/usr/local/go/src/os/exec/exec.go:764 +0xb5
created by os/exec.(*Cmd).Start in goroutine 2396
	/usr/local/go/src/os/exec/exec.go:750 +0x9f3

goroutine 2489 [runnable]:
internal/poll.(*FD).Read(0xc00230f188, {0xc00097b4c7, 0x339, 0x339})
	/usr/local/go/src/internal/poll/fd_windows.go:446 +0x318
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc0006822c8, {0xc00097b4c7?, 0xc0025bbd98?, 0x22d?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc00258c180, {0x314a020, 0xc000b06028})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x314a160, 0xc00258c180}, {0x314a020, 0xc000b06028}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x0?, {0x314a160, 0xc00258c180})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0x4448a20?, {0x314a160?, 0xc00258c180?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x314a160, 0xc00258c180}, {0x314a0e0, 0xc0006822c8}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc000d15800?)
	/usr/local/go/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2487
	/usr/local/go/src/os/exec/exec.go:723 +0xa2b

goroutine 2398 [syscall, 4 minutes, locked to thread]:
syscall.SyscallN(0x0?, {0xc0029c3b20?, 0x67f45?, 0x4544100?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0xc00025444d?, 0xc0029c3b80?, 0x5fe76?, 0x4544100?, 0xc0029c3c08?, 0x52a45?, 0x1af1a690108?, 0x67?)
	/usr/local/go/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x680, {0xc000bdfc96?, 0x36a, 0x1042bf?}, 0x0?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:442
syscall.Read(0xc000d61408?, {0xc000bdfc96?, 0x8c25e?, 0x2000?})
	/usr/local/go/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc000d61408, {0xc000bdfc96, 0x36a, 0x36a})
	/usr/local/go/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc000b06178, {0xc000bdfc96?, 0xc002712e00?, 0x1000?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc002dcaf00, {0x314a020, 0xc0006824f8})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x314a160, 0xc002dcaf00}, {0x314a020, 0xc0006824f8}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc0029c3e78?, {0x314a160, 0xc002dcaf00})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0x4448a20?, {0x314a160?, 0xc002dcaf00?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x314a160, 0xc002dcaf00}, {0x314a0e0, 0xc000b06178}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc0025b21e0?)
	/usr/local/go/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2396
	/usr/local/go/src/os/exec/exec.go:723 +0xa2b

goroutine 2397 [syscall, 4 minutes, locked to thread]:
syscall.SyscallN(0x4541c20?, {0xc002967b20?, 0xc000144070?, 0x528db?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0x2000?, 0x0?, 0xe2d15c?, 0x3146c00?, 0xc002967c08?, 0x528db?, 0x48c66?, 0xc0028e1f80?)
	/usr/local/go/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x6f0, {0xc000035dec?, 0x214, 0x1042bf?}, 0x0?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:442
syscall.Read(0xc00015d408?, {0xc000035dec?, 0x8c211?, 0x400?})
	/usr/local/go/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc00015d408, {0xc000035dec, 0x214, 0x214})
	/usr/local/go/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc000b06148, {0xc000035dec?, 0xc002967d98?, 0x69?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc002dcade0, {0x314a020, 0xc0001442f0})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x314a160, 0xc002dcade0}, {0x314a020, 0xc0001442f0}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x0?, {0x314a160, 0xc002dcade0})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0x4448a20?, {0x314a160?, 0xc002dcade0?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x314a160, 0xc002dcade0}, {0x314a0e0, 0xc000b06148}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc000d0c120?)
	/usr/local/go/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2396
	/usr/local/go/src/os/exec/exec.go:723 +0xa2b

goroutine 2403 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0xc0006ac750, 0x0)
	/usr/local/go/src/runtime/sema.go:569 +0x15d
sync.(*Cond).Wait(0x1cb5a80?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc0028e0f00)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0006ac780)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000ce6000, {0x314b460, 0xc000256d20}, 0x1, 0xc000148ae0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000ce6000, 0x3b9aca00, 0x0, 0x1, 0xc000148ae0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2314
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/transport/cert_rotation.go:140 +0x1ef

goroutine 2313 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc0028e1080)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 2420
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/util/workqueue/delaying_queue.go:113 +0x205

Test pass (164/210)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 17.9
4 TestDownloadOnly/v1.20.0/preload-exists 0.07
7 TestDownloadOnly/v1.20.0/kubectl 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.3
9 TestDownloadOnly/v1.20.0/DeleteAll 1.24
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 1.37
12 TestDownloadOnly/v1.28.4/json-events 13.88
13 TestDownloadOnly/v1.28.4/preload-exists 0
16 TestDownloadOnly/v1.28.4/kubectl 0
17 TestDownloadOnly/v1.28.4/LogsDuration 0.46
18 TestDownloadOnly/v1.28.4/DeleteAll 1.34
19 TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds 1.16
21 TestDownloadOnly/v1.29.0-rc.2/json-events 13.04
22 TestDownloadOnly/v1.29.0-rc.2/preload-exists 0
25 TestDownloadOnly/v1.29.0-rc.2/kubectl 0
26 TestDownloadOnly/v1.29.0-rc.2/LogsDuration 0.29
27 TestDownloadOnly/v1.29.0-rc.2/DeleteAll 1.35
28 TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds 1.32
30 TestBinaryMirror 7.1
31 TestOffline 418.87
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.31
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.31
36 TestAddons/Setup 389.05
39 TestAddons/parallel/Ingress 85.04
40 TestAddons/parallel/InspektorGadget 27.85
41 TestAddons/parallel/MetricsServer 22.17
42 TestAddons/parallel/HelmTiller 30.05
44 TestAddons/parallel/CSI 106.03
45 TestAddons/parallel/Headlamp 51.84
46 TestAddons/parallel/CloudSpanner 22.4
47 TestAddons/parallel/LocalPath 85.15
48 TestAddons/parallel/NvidiaDevicePlugin 21.54
49 TestAddons/parallel/Yakd 5.03
52 TestAddons/serial/GCPAuth/Namespaces 0.33
53 TestAddons/StoppedEnableDisable 52.61
54 TestCertOptions 549.53
55 TestCertExpiration 870.28
56 TestDockerFlags 427.19
57 TestForceSystemdFlag 502.8
58 TestForceSystemdEnv 438.04
65 TestErrorSpam/start 16.79
66 TestErrorSpam/status 35.38
67 TestErrorSpam/pause 26.29
68 TestErrorSpam/unpause 162.71
69 TestErrorSpam/stop 88.87
72 TestFunctional/serial/CopySyncFile 0.03
73 TestFunctional/serial/StartWithProxy 203.31
74 TestFunctional/serial/AuditLog 0
75 TestFunctional/serial/SoftStart 123.2
76 TestFunctional/serial/KubeContext 0.13
77 TestFunctional/serial/KubectlGetPods 0.23
80 TestFunctional/serial/CacheCmd/cache/add_remote 24.76
81 TestFunctional/serial/CacheCmd/cache/add_local 10.08
82 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.25
83 TestFunctional/serial/CacheCmd/cache/list 0.26
84 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 8.67
85 TestFunctional/serial/CacheCmd/cache/cache_reload 33.81
86 TestFunctional/serial/CacheCmd/cache/delete 0.52
87 TestFunctional/serial/MinikubeKubectlCmd 0.43
89 TestFunctional/serial/ExtraConfig 118.09
90 TestFunctional/serial/ComponentHealth 0.18
91 TestFunctional/serial/LogsCmd 7.85
92 TestFunctional/serial/LogsFileCmd 9.8
93 TestFunctional/serial/InvalidService 19.49
99 TestFunctional/parallel/StatusCmd 41.29
103 TestFunctional/parallel/ServiceCmdConnect 44.26
104 TestFunctional/parallel/AddonsCmd 0.83
105 TestFunctional/parallel/PersistentVolumeClaim 42.08
107 TestFunctional/parallel/SSHCmd 19.99
108 TestFunctional/parallel/CpCmd 58.41
109 TestFunctional/parallel/MySQL 68.05
110 TestFunctional/parallel/FileSync 10.86
111 TestFunctional/parallel/CertSync 65.27
115 TestFunctional/parallel/NodeLabels 0.22
117 TestFunctional/parallel/NonActiveRuntimeDisabled 12.49
119 TestFunctional/parallel/License 3.77
120 TestFunctional/parallel/Version/short 0.3
121 TestFunctional/parallel/Version/components 9.55
122 TestFunctional/parallel/ImageCommands/ImageListShort 7.73
123 TestFunctional/parallel/ImageCommands/ImageListTable 7.41
124 TestFunctional/parallel/ImageCommands/ImageListJson 7.51
125 TestFunctional/parallel/ImageCommands/ImageListYaml 7.45
126 TestFunctional/parallel/ImageCommands/ImageBuild 26.42
127 TestFunctional/parallel/ImageCommands/Setup 4.63
128 TestFunctional/parallel/DockerEnv/powershell 46.62
129 TestFunctional/parallel/UpdateContextCmd/no_changes 2.62
130 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 2.69
131 TestFunctional/parallel/UpdateContextCmd/no_clusters 2.6
132 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 25.73
133 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 20.93
134 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 26.97
135 TestFunctional/parallel/ImageCommands/ImageSaveToFile 9.89
136 TestFunctional/parallel/ImageCommands/ImageRemove 16.91
138 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 9.1
139 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
141 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 26.69
142 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 16.83
143 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 8.63
149 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.12
150 TestFunctional/parallel/ServiceCmd/DeployApp 9.41
151 TestFunctional/parallel/ProfileCmd/profile_not_create 10.73
152 TestFunctional/parallel/ServiceCmd/List 13.86
153 TestFunctional/parallel/ProfileCmd/profile_list 11.06
154 TestFunctional/parallel/ServiceCmd/JSONOutput 14.63
155 TestFunctional/parallel/ProfileCmd/profile_json_output 12.19
159 TestFunctional/delete_addon-resizer_images 0.48
160 TestFunctional/delete_my-image_image 0.18
161 TestFunctional/delete_minikube_cached_images 0.18
165 TestMultiControlPlane/serial/StartCluster 680.34
166 TestMultiControlPlane/serial/DeployApp 13.04
168 TestMultiControlPlane/serial/AddWorkerNode 239.11
169 TestMultiControlPlane/serial/NodeLabels 0.2
170 TestMultiControlPlane/serial/HAppyAfterClusterStart 27.16
171 TestMultiControlPlane/serial/CopyFile 627.68
172 TestMultiControlPlane/serial/StopSecondaryNode 70.55
173 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 20.15
177 TestImageBuild/serial/Setup 191.61
178 TestImageBuild/serial/NormalBuild 9.18
179 TestImageBuild/serial/BuildWithBuildArg 8.9
180 TestImageBuild/serial/BuildWithDockerIgnore 7.82
181 TestImageBuild/serial/BuildWithSpecifiedDockerfile 7.55
185 TestJSONOutput/start/Command 234.21
186 TestJSONOutput/start/Audit 0
188 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
189 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
191 TestJSONOutput/pause/Command 7.61
192 TestJSONOutput/pause/Audit 0
194 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
195 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
197 TestJSONOutput/unpause/Command 7.42
198 TestJSONOutput/unpause/Audit 0
200 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
203 TestJSONOutput/stop/Command 33.77
204 TestJSONOutput/stop/Audit 0
206 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
207 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
208 TestErrorJSONOutput 1.45
213 TestMainNoArgs 0.23
214 TestMinikubeProfile 509.09
217 TestMountStart/serial/StartWithMountFirst 141.21
218 TestMountStart/serial/VerifyMountFirst 8.82
219 TestMountStart/serial/StartWithMountSecond 142.92
220 TestMountStart/serial/VerifyMountSecond 9.18
221 TestMountStart/serial/DeleteFirst 26.3
222 TestMountStart/serial/VerifyMountPostDelete 8.89
223 TestMountStart/serial/Stop 25.12
224 TestMountStart/serial/RestartStopped 111.66
225 TestMountStart/serial/VerifyMountPostStop 9.19
228 TestMultiNode/serial/FreshStart2Nodes 406.26
229 TestMultiNode/serial/DeployApp2Nodes 9.09
231 TestMultiNode/serial/AddNode 217.97
232 TestMultiNode/serial/MultiNodeLabels 0.18
233 TestMultiNode/serial/ProfileList 11.8
234 TestMultiNode/serial/CopyFile 346.78
235 TestMultiNode/serial/StopNode 73.68
236 TestMultiNode/serial/StartAfterStop 177.83
241 TestPreload 450.35
242 TestScheduledStopWindows 322.66
247 TestRunningBinaryUpgrade 1046.52
252 TestNoKubernetes/serial/StartNoK8sWithVersion 0.38
254 TestStoppedBinaryUpgrade/Setup 0.79
255 TestStoppedBinaryUpgrade/Upgrade 837.53
276 TestStoppedBinaryUpgrade/MinikubeLogs 8.83
TestDownloadOnly/v1.20.0/json-events (17.9s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-366800 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=hyperv
aaa_download_only_test.go:81: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-366800 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=hyperv: (17.8970993s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (17.90s)

TestDownloadOnly/v1.20.0/preload-exists (0.07s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.07s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
--- PASS: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.3s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-366800
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-366800: exit status 85 (294.9992ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |       User        | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-366800 | minikube3\jenkins | v1.32.0 | 18 Mar 24 11:03 UTC |          |
	|         | -p download-only-366800        |                      |                   |         |                     |          |
	|         | --force --alsologtostderr      |                      |                   |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |                   |         |                     |          |
	|         | --container-runtime=docker     |                      |                   |         |                     |          |
	|         | --driver=hyperv                |                      |                   |         |                     |          |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/18 11:03:38
	Running on machine: minikube3
	Binary: Built with gc go1.22.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0318 11:03:38.889236   13840 out.go:291] Setting OutFile to fd 636 ...
	I0318 11:03:38.890277   13840 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 11:03:38.890277   13840 out.go:304] Setting ErrFile to fd 640...
	I0318 11:03:38.890277   13840 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0318 11:03:38.903060   13840 root.go:314] Error reading config file at C:\Users\jenkins.minikube3\minikube-integration\.minikube\config\config.json: open C:\Users\jenkins.minikube3\minikube-integration\.minikube\config\config.json: The system cannot find the path specified.
	I0318 11:03:38.914366   13840 out.go:298] Setting JSON to true
	I0318 11:03:38.917008   13840 start.go:129] hostinfo: {"hostname":"minikube3","uptime":308395,"bootTime":1710451422,"procs":190,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4046 Build 19045.4046","kernelVersion":"10.0.19045.4046 Build 19045.4046","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"a0f355d5-8b6e-4346-9071-73232725d096"}
	W0318 11:03:38.917008   13840 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0318 11:03:38.924253   13840 out.go:97] [download-only-366800] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4046 Build 19045.4046
	I0318 11:03:38.924253   13840 notify.go:220] Checking for updates...
	W0318 11:03:38.924253   13840 preload.go:294] Failed to list preload files: open C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball: The system cannot find the file specified.
	I0318 11:03:38.926499   13840 out.go:169] KUBECONFIG=C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	I0318 11:03:38.930045   13840 out.go:169] MINIKUBE_HOME=C:\Users\jenkins.minikube3\minikube-integration\.minikube
	I0318 11:03:38.933527   13840 out.go:169] MINIKUBE_LOCATION=18429
	I0318 11:03:38.936127   13840 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W0318 11:03:38.941884   13840 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0318 11:03:38.942999   13840 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 11:03:44.415711   13840 out.go:97] Using the hyperv driver based on user configuration
	I0318 11:03:44.415841   13840 start.go:297] selected driver: hyperv
	I0318 11:03:44.415841   13840 start.go:901] validating driver "hyperv" against <nil>
	I0318 11:03:44.416327   13840 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0318 11:03:44.483793   13840 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=65534MB, container=0MB
	I0318 11:03:44.485268   13840 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0318 11:03:44.485268   13840 cni.go:84] Creating CNI manager for ""
	I0318 11:03:44.485268   13840 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0318 11:03:44.485268   13840 start.go:340] cluster config:
	{Name:download-only-366800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:6000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-366800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 11:03:44.486398   13840 iso.go:125] acquiring lock: {Name:mkefa7a887b9de67c22e0418da52288a5b488924 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 11:03:44.491018   13840 out.go:97] Downloading VM boot image ...
	I0318 11:03:44.491018   13840 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso.sha256 -> C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\iso\amd64\minikube-v1.32.1-1710520390-17991-amd64.iso
	I0318 11:03:48.991725   13840 out.go:97] Starting "download-only-366800" primary control-plane node in "download-only-366800" cluster
	I0318 11:03:48.995429   13840 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0318 11:03:49.041988   13840 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0318 11:03:49.042080   13840 cache.go:56] Caching tarball of preloaded images
	I0318 11:03:49.042581   13840 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0318 11:03:49.045537   13840 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0318 11:03:49.045676   13840 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0318 11:03:49.111034   13840 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4?checksum=md5:9a82241e9b8b4ad2b5cca73108f2c7a3 -> C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0318 11:03:53.562591   13840 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0318 11:03:53.563287   13840 preload.go:255] verifying checksum of C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	
	
	* The control-plane node download-only-366800 host does not exist
	  To start a cluster, run: "minikube start -p download-only-366800"

-- /stdout --
** stderr ** 
	W0318 11:03:56.788772   12952 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.30s)

TestDownloadOnly/v1.20.0/DeleteAll (1.24s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-windows-amd64.exe delete --all
aaa_download_only_test.go:197: (dbg) Done: out/minikube-windows-amd64.exe delete --all: (1.2354393s)
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (1.24s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (1.37s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-only-366800
aaa_download_only_test.go:208: (dbg) Done: out/minikube-windows-amd64.exe delete -p download-only-366800: (1.3658636s)
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (1.37s)

TestDownloadOnly/v1.28.4/json-events (13.88s)

=== RUN   TestDownloadOnly/v1.28.4/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-878600 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=docker --driver=hyperv
aaa_download_only_test.go:81: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-878600 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=docker --driver=hyperv: (13.8742331s)
--- PASS: TestDownloadOnly/v1.28.4/json-events (13.88s)

TestDownloadOnly/v1.28.4/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.4/preload-exists
--- PASS: TestDownloadOnly/v1.28.4/preload-exists (0.00s)

TestDownloadOnly/v1.28.4/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.4/kubectl
--- PASS: TestDownloadOnly/v1.28.4/kubectl (0.00s)

TestDownloadOnly/v1.28.4/LogsDuration (0.46s)

=== RUN   TestDownloadOnly/v1.28.4/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-878600
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-878600: exit status 85 (459.3878ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |       User        | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-366800 | minikube3\jenkins | v1.32.0 | 18 Mar 24 11:03 UTC |                     |
	|         | -p download-only-366800        |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr      |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |                   |         |                     |                     |
	|         | --container-runtime=docker     |                      |                   |         |                     |                     |
	|         | --driver=hyperv                |                      |                   |         |                     |                     |
	| delete  | --all                          | minikube             | minikube3\jenkins | v1.32.0 | 18 Mar 24 11:03 UTC | 18 Mar 24 11:03 UTC |
	| delete  | -p download-only-366800        | download-only-366800 | minikube3\jenkins | v1.32.0 | 18 Mar 24 11:03 UTC | 18 Mar 24 11:03 UTC |
	| start   | -o=json --download-only        | download-only-878600 | minikube3\jenkins | v1.32.0 | 18 Mar 24 11:03 UTC |                     |
	|         | -p download-only-878600        |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr      |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.28.4   |                      |                   |         |                     |                     |
	|         | --container-runtime=docker     |                      |                   |         |                     |                     |
	|         | --driver=hyperv                |                      |                   |         |                     |                     |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/18 11:03:59
	Running on machine: minikube3
	Binary: Built with gc go1.22.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0318 11:03:59.763269    3584 out.go:291] Setting OutFile to fd 608 ...
	I0318 11:03:59.764259    3584 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 11:03:59.764259    3584 out.go:304] Setting ErrFile to fd 696...
	I0318 11:03:59.764259    3584 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 11:03:59.790110    3584 out.go:298] Setting JSON to true
	I0318 11:03:59.792898    3584 start.go:129] hostinfo: {"hostname":"minikube3","uptime":308416,"bootTime":1710451422,"procs":190,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4046 Build 19045.4046","kernelVersion":"10.0.19045.4046 Build 19045.4046","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"a0f355d5-8b6e-4346-9071-73232725d096"}
	W0318 11:03:59.793431    3584 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0318 11:03:59.798607    3584 out.go:97] [download-only-878600] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4046 Build 19045.4046
	I0318 11:03:59.799281    3584 notify.go:220] Checking for updates...
	I0318 11:03:59.801566    3584 out.go:169] KUBECONFIG=C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	I0318 11:03:59.804228    3584 out.go:169] MINIKUBE_HOME=C:\Users\jenkins.minikube3\minikube-integration\.minikube
	I0318 11:03:59.806536    3584 out.go:169] MINIKUBE_LOCATION=18429
	I0318 11:03:59.809534    3584 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W0318 11:03:59.814535    3584 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0318 11:03:59.815549    3584 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 11:04:05.269380    3584 out.go:97] Using the hyperv driver based on user configuration
	I0318 11:04:05.269380    3584 start.go:297] selected driver: hyperv
	I0318 11:04:05.269380    3584 start.go:901] validating driver "hyperv" against <nil>
	I0318 11:04:05.269380    3584 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0318 11:04:05.324671    3584 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=65534MB, container=0MB
	I0318 11:04:05.326889    3584 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0318 11:04:05.326889    3584 cni.go:84] Creating CNI manager for ""
	I0318 11:04:05.326889    3584 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0318 11:04:05.326889    3584 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0318 11:04:05.326889    3584 start.go:340] cluster config:
	{Name:download-only-878600 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:6000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:download-only-878600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 11:04:05.327758    3584 iso.go:125] acquiring lock: {Name:mkefa7a887b9de67c22e0418da52288a5b488924 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 11:04:05.331347    3584 out.go:97] Starting "download-only-878600" primary control-plane node in "download-only-878600" cluster
	I0318 11:04:05.331347    3584 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0318 11:04:05.376256    3584 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I0318 11:04:05.376759    3584 cache.go:56] Caching tarball of preloaded images
	I0318 11:04:05.376848    3584 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0318 11:04:05.380284    3584 out.go:97] Downloading Kubernetes v1.28.4 preload ...
	I0318 11:04:05.380284    3584 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 ...
	I0318 11:04:05.447389    3584 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4?checksum=md5:7ebdea7754e21f51b865dbfc36b53b7d -> C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	
	
	* The control-plane node download-only-878600 host does not exist
	  To start a cluster, run: "minikube start -p download-only-878600"

                                                
                                                
-- /stdout --
** stderr ** 
	W0318 11:04:13.558305    4704 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.4/LogsDuration (0.46s)
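Note: the preload fetch logged at download.go:107 carries a `?checksum=md5:<hex>` query parameter, and the downloader verifies the tarball's digest against it after the transfer. A minimal sketch of that check (names here are illustrative, not minikube's actual identifiers):

```python
# Sketch of the md5 verification implied by the '?checksum=md5:<hex>'
# query in the preload URL above. Illustrative only; the real logic
# lives in minikube's download package.
import hashlib

def verify_md5(data: bytes, expected_hex: str) -> bool:
    """Return True when the payload's md5 digest matches the checksum
    carried in the URL's '?checksum=md5:<hex>' query parameter."""
    return hashlib.md5(data).hexdigest() == expected_hex

# md5 of the empty payload is a well-known constant:
assert verify_md5(b"", "d41d8cd98f00b204e9800998ecf8427e")
```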

                                                
                                    
TestDownloadOnly/v1.28.4/DeleteAll (1.34s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-windows-amd64.exe delete --all
aaa_download_only_test.go:197: (dbg) Done: out/minikube-windows-amd64.exe delete --all: (1.3382619s)
--- PASS: TestDownloadOnly/v1.28.4/DeleteAll (1.34s)

                                                
                                    
TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds (1.16s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-only-878600
aaa_download_only_test.go:208: (dbg) Done: out/minikube-windows-amd64.exe delete -p download-only-878600: (1.1561467s)
--- PASS: TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds (1.16s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/json-events (13.04s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-330700 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=docker --driver=hyperv
aaa_download_only_test.go:81: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-330700 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=docker --driver=hyperv: (13.0353707s)
--- PASS: TestDownloadOnly/v1.29.0-rc.2/json-events (13.04s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/preload-exists
--- PASS: TestDownloadOnly/v1.29.0-rc.2/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/kubectl
--- PASS: TestDownloadOnly/v1.29.0-rc.2/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.29s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-330700
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-330700: exit status 85 (284.9415ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| Command |               Args                |       Profile        |       User        | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| start   | -o=json --download-only           | download-only-366800 | minikube3\jenkins | v1.32.0 | 18 Mar 24 11:03 UTC |                     |
	|         | -p download-only-366800           |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr         |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.20.0      |                      |                   |         |                     |                     |
	|         | --container-runtime=docker        |                      |                   |         |                     |                     |
	|         | --driver=hyperv                   |                      |                   |         |                     |                     |
	| delete  | --all                             | minikube             | minikube3\jenkins | v1.32.0 | 18 Mar 24 11:03 UTC | 18 Mar 24 11:03 UTC |
	| delete  | -p download-only-366800           | download-only-366800 | minikube3\jenkins | v1.32.0 | 18 Mar 24 11:03 UTC | 18 Mar 24 11:03 UTC |
	| start   | -o=json --download-only           | download-only-878600 | minikube3\jenkins | v1.32.0 | 18 Mar 24 11:03 UTC |                     |
	|         | -p download-only-878600           |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr         |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.28.4      |                      |                   |         |                     |                     |
	|         | --container-runtime=docker        |                      |                   |         |                     |                     |
	|         | --driver=hyperv                   |                      |                   |         |                     |                     |
	| delete  | --all                             | minikube             | minikube3\jenkins | v1.32.0 | 18 Mar 24 11:04 UTC | 18 Mar 24 11:04 UTC |
	| delete  | -p download-only-878600           | download-only-878600 | minikube3\jenkins | v1.32.0 | 18 Mar 24 11:04 UTC | 18 Mar 24 11:04 UTC |
	| start   | -o=json --download-only           | download-only-330700 | minikube3\jenkins | v1.32.0 | 18 Mar 24 11:04 UTC |                     |
	|         | -p download-only-330700           |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr         |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2 |                      |                   |         |                     |                     |
	|         | --container-runtime=docker        |                      |                   |         |                     |                     |
	|         | --driver=hyperv                   |                      |                   |         |                     |                     |
	|---------|-----------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/18 11:04:16
	Running on machine: minikube3
	Binary: Built with gc go1.22.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0318 11:04:16.589857    7556 out.go:291] Setting OutFile to fd 788 ...
	I0318 11:04:16.590883    7556 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 11:04:16.590883    7556 out.go:304] Setting ErrFile to fd 784...
	I0318 11:04:16.590883    7556 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 11:04:16.621906    7556 out.go:298] Setting JSON to true
	I0318 11:04:16.626075    7556 start.go:129] hostinfo: {"hostname":"minikube3","uptime":308433,"bootTime":1710451422,"procs":192,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4046 Build 19045.4046","kernelVersion":"10.0.19045.4046 Build 19045.4046","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"a0f355d5-8b6e-4346-9071-73232725d096"}
	W0318 11:04:16.626075    7556 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0318 11:04:16.796825    7556 out.go:97] [download-only-330700] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4046 Build 19045.4046
	I0318 11:04:16.797742    7556 notify.go:220] Checking for updates...
	I0318 11:04:16.800674    7556 out.go:169] KUBECONFIG=C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	I0318 11:04:16.803180    7556 out.go:169] MINIKUBE_HOME=C:\Users\jenkins.minikube3\minikube-integration\.minikube
	I0318 11:04:16.805501    7556 out.go:169] MINIKUBE_LOCATION=18429
	I0318 11:04:16.808598    7556 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W0318 11:04:16.813371    7556 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0318 11:04:16.813371    7556 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 11:04:22.429349    7556 out.go:97] Using the hyperv driver based on user configuration
	I0318 11:04:22.429871    7556 start.go:297] selected driver: hyperv
	I0318 11:04:22.429871    7556 start.go:901] validating driver "hyperv" against <nil>
	I0318 11:04:22.430168    7556 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0318 11:04:22.482928    7556 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=65534MB, container=0MB
	I0318 11:04:22.483648    7556 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0318 11:04:22.483648    7556 cni.go:84] Creating CNI manager for ""
	I0318 11:04:22.483648    7556 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0318 11:04:22.484194    7556 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0318 11:04:22.484360    7556 start.go:340] cluster config:
	{Name:download-only-330700 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:6000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:download-only-330700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 11:04:22.484474    7556 iso.go:125] acquiring lock: {Name:mkefa7a887b9de67c22e0418da52288a5b488924 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 11:04:22.487646    7556 out.go:97] Starting "download-only-330700" primary control-plane node in "download-only-330700" cluster
	I0318 11:04:22.487646    7556 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I0318 11:04:22.540452    7556 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4
	I0318 11:04:22.540452    7556 cache.go:56] Caching tarball of preloaded images
	I0318 11:04:22.541032    7556 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I0318 11:04:22.544377    7556 out.go:97] Downloading Kubernetes v1.29.0-rc.2 preload ...
	I0318 11:04:22.544504    7556 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4 ...
	I0318 11:04:22.628712    7556 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4?checksum=md5:47acda482c3add5b56147c92b8d7f468 -> C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4
	
	
	* The control-plane node download-only-330700 host does not exist
	  To start a cluster, run: "minikube start -p download-only-330700"

                                                
                                                
-- /stdout --
** stderr ** 
	W0318 11:04:29.552498   10708 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.29s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/DeleteAll (1.35s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-windows-amd64.exe delete --all
aaa_download_only_test.go:197: (dbg) Done: out/minikube-windows-amd64.exe delete --all: (1.3471687s)
--- PASS: TestDownloadOnly/v1.29.0-rc.2/DeleteAll (1.35s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds (1.32s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-only-330700
aaa_download_only_test.go:208: (dbg) Done: out/minikube-windows-amd64.exe delete -p download-only-330700: (1.3226229s)
--- PASS: TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds (1.32s)

                                                
                                    
TestBinaryMirror (7.1s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-windows-amd64.exe start --download-only -p binary-mirror-991800 --alsologtostderr --binary-mirror http://127.0.0.1:50175 --driver=hyperv
aaa_download_only_test.go:314: (dbg) Done: out/minikube-windows-amd64.exe start --download-only -p binary-mirror-991800 --alsologtostderr --binary-mirror http://127.0.0.1:50175 --driver=hyperv: (6.2211892s)
helpers_test.go:175: Cleaning up "binary-mirror-991800" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p binary-mirror-991800
--- PASS: TestBinaryMirror (7.10s)

                                                
                                    
TestOffline (418.87s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe start -p offline-docker-692300 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=hyperv
aab_offline_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe start -p offline-docker-692300 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=hyperv: (6m16.1459723s)
helpers_test.go:175: Cleaning up "offline-docker-692300" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p offline-docker-692300
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p offline-docker-692300: (42.7233276s)
--- PASS: TestOffline (418.87s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.31s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:928: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p addons-209500
addons_test.go:928: (dbg) Non-zero exit: out/minikube-windows-amd64.exe addons enable dashboard -p addons-209500: exit status 85 (308.3611ms)

                                                
                                                
-- stdout --
	* Profile "addons-209500" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-209500"

                                                
                                                
-- /stdout --
** stderr ** 
	W0318 11:04:43.764873    6456 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.31s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.31s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-windows-amd64.exe addons disable dashboard -p addons-209500
addons_test.go:939: (dbg) Non-zero exit: out/minikube-windows-amd64.exe addons disable dashboard -p addons-209500: exit status 85 (306.3297ms)

                                                
                                                
-- stdout --
	* Profile "addons-209500" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-209500"

                                                
                                                
-- /stdout --
** stderr ** 
	W0318 11:04:43.762761    6884 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.31s)

                                                
                                    
TestAddons/Setup (389.05s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-windows-amd64.exe start -p addons-209500 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=hyperv --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:109: (dbg) Done: out/minikube-windows-amd64.exe start -p addons-209500 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=hyperv --addons=ingress --addons=ingress-dns --addons=helm-tiller: (6m29.0529802s)
--- PASS: TestAddons/Setup (389.05s)

                                                
                                    
TestAddons/parallel/Ingress (85.04s)

                                                
                                                
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-209500 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-209500 replace --force -f testdata\nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-209500 replace --force -f testdata\nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [4bf65263-21c6-4fa5-b751-669bf768a1b5] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [4bf65263-21c6-4fa5-b751-669bf768a1b5] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 27.0202242s
addons_test.go:262: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-209500 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Done: out/minikube-windows-amd64.exe -p addons-209500 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": (10.7861039s)
addons_test.go:269: debug: unexpected stderr for out/minikube-windows-amd64.exe -p addons-209500 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'":
W0318 11:12:09.761923   13332 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
addons_test.go:286: (dbg) Run:  kubectl --context addons-209500 replace --force -f testdata\ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-209500 ip
addons_test.go:291: (dbg) Done: out/minikube-windows-amd64.exe -p addons-209500 ip: (2.7619617s)
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 172.30.141.150
addons_test.go:297: (dbg) Done: nslookup hello-john.test 172.30.141.150: (1.3640408s)
addons_test.go:306: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-209500 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:306: (dbg) Done: out/minikube-windows-amd64.exe -p addons-209500 addons disable ingress-dns --alsologtostderr -v=1: (16.7206432s)
addons_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-209500 addons disable ingress --alsologtostderr -v=1
addons_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe -p addons-209500 addons disable ingress --alsologtostderr -v=1: (23.6383947s)
--- PASS: TestAddons/parallel/Ingress (85.04s)
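Note: the recurring "waiting Xm0s for pods matching <selector> ... healthy within Ns" lines (helpers_test.go:344) come from a poll-until-ready loop with a deadline. A simplified sketch of that pattern, with a stub standing in for the real pod-status lookup:

```python
# Sketch of the harness's wait-for-healthy polling; the status source is
# a stub here, whereas the real tests query the Kubernetes API.
import itertools
import time

def wait_healthy(get_phase, timeout_s=10.0, poll_s=0.01):
    """Poll get_phase() until it reports 'Running' or the deadline passes.
    Returns elapsed seconds on success, raises TimeoutError otherwise."""
    start = time.monotonic()
    while time.monotonic() - start < timeout_s:
        if get_phase() == "Running":
            return time.monotonic() - start
        time.sleep(poll_s)
    raise TimeoutError("pod never became healthy")

# Stub mimicking the Pending -> Running transition seen in the log above.
phases = itertools.chain(["Pending", "Pending"], itertools.repeat("Running"))
elapsed = wait_healthy(lambda: next(phases))
```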

                                                
                                    
TestAddons/parallel/InspektorGadget (27.85s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-2h86p" [4b346270-6ab9-491b-9c70-fe7c824d70ee] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.0137504s
addons_test.go:841: (dbg) Run:  out/minikube-windows-amd64.exe addons disable inspektor-gadget -p addons-209500
addons_test.go:841: (dbg) Done: out/minikube-windows-amd64.exe addons disable inspektor-gadget -p addons-209500: (21.828024s)
--- PASS: TestAddons/parallel/InspektorGadget (27.85s)

                                                
                                    
TestAddons/parallel/MetricsServer (22.17s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:407: metrics-server stabilized in 7.0155ms
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-69cf46c98-6sxxw" [35fa537b-dc96-4fe5-9baf-a6f62bdbf57e] Running
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.0217607s
addons_test.go:415: (dbg) Run:  kubectl --context addons-209500 top pods -n kube-system
addons_test.go:432: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-209500 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:432: (dbg) Done: out/minikube-windows-amd64.exe -p addons-209500 addons disable metrics-server --alsologtostderr -v=1: (16.8857491s)
--- PASS: TestAddons/parallel/MetricsServer (22.17s)

                                                
                                    
TestAddons/parallel/HelmTiller (30.05s)

                                                
                                                
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:456: tiller-deploy stabilized in 28.6894ms
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-7b677967b9-6b9sr" [60cda2b1-a5f7-4ce8-b9ef-0ed5074d1788] Running
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.0131344s
addons_test.go:473: (dbg) Run:  kubectl --context addons-209500 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:473: (dbg) Done: kubectl --context addons-209500 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (8.0658471s)
addons_test.go:490: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-209500 addons disable helm-tiller --alsologtostderr -v=1
addons_test.go:490: (dbg) Done: out/minikube-windows-amd64.exe -p addons-209500 addons disable helm-tiller --alsologtostderr -v=1: (16.9165301s)
--- PASS: TestAddons/parallel/HelmTiller (30.05s)

                                                
                                    
TestAddons/parallel/CSI (106.03s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:561: csi-hostpath-driver pods stabilized in 22.5686ms
addons_test.go:564: (dbg) Run:  kubectl --context addons-209500 create -f testdata\csi-hostpath-driver\pvc.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-209500 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-209500 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-209500 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-209500 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-209500 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-209500 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-209500 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-209500 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-209500 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-209500 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:574: (dbg) Run:  kubectl --context addons-209500 create -f testdata\csi-hostpath-driver\pv-pod.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [d7b2728f-eeb8-489c-81fa-8c369278d84f] Pending
helpers_test.go:344: "task-pv-pod" [d7b2728f-eeb8-489c-81fa-8c369278d84f] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [d7b2728f-eeb8-489c-81fa-8c369278d84f] Running
addons_test.go:579: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 31.0287485s
addons_test.go:584: (dbg) Run:  kubectl --context addons-209500 create -f testdata\csi-hostpath-driver\snapshot.yaml
addons_test.go:589: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-209500 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-209500 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:594: (dbg) Run:  kubectl --context addons-209500 delete pod task-pv-pod
addons_test.go:594: (dbg) Done: kubectl --context addons-209500 delete pod task-pv-pod: (1.5464526s)
addons_test.go:600: (dbg) Run:  kubectl --context addons-209500 delete pvc hpvc
addons_test.go:606: (dbg) Run:  kubectl --context addons-209500 create -f testdata\csi-hostpath-driver\pvc-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-209500 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-209500 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-209500 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-209500 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-209500 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-209500 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-209500 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-209500 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-209500 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-209500 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-209500 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-209500 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:616: (dbg) Run:  kubectl --context addons-209500 create -f testdata\csi-hostpath-driver\pv-pod-restore.yaml
addons_test.go:621: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [f0021e51-6b44-46bc-9c83-ee1dafae63b0] Pending
helpers_test.go:344: "task-pv-pod-restore" [f0021e51-6b44-46bc-9c83-ee1dafae63b0] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [f0021e51-6b44-46bc-9c83-ee1dafae63b0] Running
addons_test.go:621: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 10.0192669s
addons_test.go:626: (dbg) Run:  kubectl --context addons-209500 delete pod task-pv-pod-restore
addons_test.go:630: (dbg) Run:  kubectl --context addons-209500 delete pvc hpvc-restore
addons_test.go:634: (dbg) Run:  kubectl --context addons-209500 delete volumesnapshot new-snapshot-demo
addons_test.go:638: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-209500 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:638: (dbg) Done: out/minikube-windows-amd64.exe -p addons-209500 addons disable csi-hostpath-driver --alsologtostderr -v=1: (23.1965465s)
addons_test.go:642: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-209500 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:642: (dbg) Done: out/minikube-windows-amd64.exe -p addons-209500 addons disable volumesnapshots --alsologtostderr -v=1: (14.5727115s)
--- PASS: TestAddons/parallel/CSI (106.03s)

TestAddons/parallel/Headlamp (51.84s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:824: (dbg) Run:  out/minikube-windows-amd64.exe addons enable headlamp -p addons-209500 --alsologtostderr -v=1
addons_test.go:824: (dbg) Done: out/minikube-windows-amd64.exe addons enable headlamp -p addons-209500 --alsologtostderr -v=1: (17.8193395s)
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-5485c556b-h2gvx" [26af3c85-5a14-477a-9695-ef30eb9b6d5e] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-5485c556b-h2gvx" [26af3c85-5a14-477a-9695-ef30eb9b6d5e] Running
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 34.0107697s
--- PASS: TestAddons/parallel/Headlamp (51.84s)

TestAddons/parallel/CloudSpanner (22.4s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-6548d5df46-tm9bk" [8994c678-80c0-416b-879b-773abcce3154] Running
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.0213908s
addons_test.go:860: (dbg) Run:  out/minikube-windows-amd64.exe addons disable cloud-spanner -p addons-209500
addons_test.go:860: (dbg) Done: out/minikube-windows-amd64.exe addons disable cloud-spanner -p addons-209500: (16.3514991s)
--- PASS: TestAddons/parallel/CloudSpanner (22.40s)

TestAddons/parallel/LocalPath (85.15s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:873: (dbg) Run:  kubectl --context addons-209500 apply -f testdata\storage-provisioner-rancher\pvc.yaml
addons_test.go:879: (dbg) Run:  kubectl --context addons-209500 apply -f testdata\storage-provisioner-rancher\pod.yaml
addons_test.go:883: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-209500 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-209500 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-209500 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-209500 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-209500 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-209500 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-209500 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [a1028aee-d6d0-4266-8681-615ea6e10b55] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [a1028aee-d6d0-4266-8681-615ea6e10b55] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [a1028aee-d6d0-4266-8681-615ea6e10b55] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 6.0085911s
addons_test.go:891: (dbg) Run:  kubectl --context addons-209500 get pvc test-pvc -o=json
addons_test.go:900: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-209500 ssh "cat /opt/local-path-provisioner/pvc-35fb0d8b-c759-4573-bd47-2d1a449be1a0_default_test-pvc/file1"
addons_test.go:900: (dbg) Done: out/minikube-windows-amd64.exe -p addons-209500 ssh "cat /opt/local-path-provisioner/pvc-35fb0d8b-c759-4573-bd47-2d1a449be1a0_default_test-pvc/file1": (10.2767001s)
addons_test.go:912: (dbg) Run:  kubectl --context addons-209500 delete pod test-local-path
addons_test.go:916: (dbg) Run:  kubectl --context addons-209500 delete pvc test-pvc
addons_test.go:920: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-209500 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:920: (dbg) Done: out/minikube-windows-amd64.exe -p addons-209500 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (1m0.9453253s)
--- PASS: TestAddons/parallel/LocalPath (85.15s)

TestAddons/parallel/NvidiaDevicePlugin (21.54s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-kb5lj" [7112072c-03be-4ff9-831f-683cd30e9414] Running
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.0157936s
addons_test.go:955: (dbg) Run:  out/minikube-windows-amd64.exe addons disable nvidia-device-plugin -p addons-209500
addons_test.go:955: (dbg) Done: out/minikube-windows-amd64.exe addons disable nvidia-device-plugin -p addons-209500: (15.5229435s)
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (21.54s)

TestAddons/parallel/Yakd (5.03s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-9947fc6bf-stbs5" [446bc3f3-84e0-4efd-95c3-5c10a5a09df6] Running
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.0210044s
--- PASS: TestAddons/parallel/Yakd (5.03s)

TestAddons/serial/GCPAuth/Namespaces (0.33s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:650: (dbg) Run:  kubectl --context addons-209500 create ns new-namespace
addons_test.go:664: (dbg) Run:  kubectl --context addons-209500 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.33s)

TestAddons/StoppedEnableDisable (52.61s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-windows-amd64.exe stop -p addons-209500
addons_test.go:172: (dbg) Done: out/minikube-windows-amd64.exe stop -p addons-209500: (39.6833929s)
addons_test.go:176: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p addons-209500
addons_test.go:176: (dbg) Done: out/minikube-windows-amd64.exe addons enable dashboard -p addons-209500: (5.517309s)
addons_test.go:180: (dbg) Run:  out/minikube-windows-amd64.exe addons disable dashboard -p addons-209500
addons_test.go:180: (dbg) Done: out/minikube-windows-amd64.exe addons disable dashboard -p addons-209500: (4.6930603s)
addons_test.go:185: (dbg) Run:  out/minikube-windows-amd64.exe addons disable gvisor -p addons-209500
addons_test.go:185: (dbg) Done: out/minikube-windows-amd64.exe addons disable gvisor -p addons-209500: (2.6883823s)
--- PASS: TestAddons/StoppedEnableDisable (52.61s)

TestCertOptions (549.53s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-options-176800 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=hyperv
E0318 13:51:13.091070   13424 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-209500\client.crt: The system cannot find the path specified.
cert_options_test.go:49: (dbg) Done: out/minikube-windows-amd64.exe start -p cert-options-176800 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=hyperv: (8m0.0833231s)
cert_options_test.go:60: (dbg) Run:  out/minikube-windows-amd64.exe -p cert-options-176800 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Done: out/minikube-windows-amd64.exe -p cert-options-176800 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": (10.4114592s)
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-176800 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p cert-options-176800 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Done: out/minikube-windows-amd64.exe ssh -p cert-options-176800 -- "sudo cat /etc/kubernetes/admin.conf": (9.9720115s)
helpers_test.go:175: Cleaning up "cert-options-176800" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p cert-options-176800
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p cert-options-176800: (48.8811266s)
--- PASS: TestCertOptions (549.53s)

TestCertExpiration (870.28s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-expiration-180400 --memory=2048 --cert-expiration=3m --driver=hyperv
E0318 13:46:13.086810   13424 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-209500\client.crt: The system cannot find the path specified.
cert_options_test.go:123: (dbg) Done: out/minikube-windows-amd64.exe start -p cert-expiration-180400 --memory=2048 --cert-expiration=3m --driver=hyperv: (6m7.2209323s)
cert_options_test.go:131: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-expiration-180400 --memory=2048 --cert-expiration=8760h --driver=hyperv
cert_options_test.go:131: (dbg) Done: out/minikube-windows-amd64.exe start -p cert-expiration-180400 --memory=2048 --cert-expiration=8760h --driver=hyperv: (4m39.5406046s)
helpers_test.go:175: Cleaning up "cert-expiration-180400" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p cert-expiration-180400
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p cert-expiration-180400: (43.5000569s)
--- PASS: TestCertExpiration (870.28s)

TestDockerFlags (427.19s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags
=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe start -p docker-flags-688500 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=hyperv
docker_test.go:51: (dbg) Done: out/minikube-windows-amd64.exe start -p docker-flags-688500 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=hyperv: (6m6.1463976s)
docker_test.go:56: (dbg) Run:  out/minikube-windows-amd64.exe -p docker-flags-688500 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Done: out/minikube-windows-amd64.exe -p docker-flags-688500 ssh "sudo systemctl show docker --property=Environment --no-pager": (9.533191s)
docker_test.go:67: (dbg) Run:  out/minikube-windows-amd64.exe -p docker-flags-688500 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Done: out/minikube-windows-amd64.exe -p docker-flags-688500 ssh "sudo systemctl show docker --property=ExecStart --no-pager": (9.6799216s)
helpers_test.go:175: Cleaning up "docker-flags-688500" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p docker-flags-688500
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p docker-flags-688500: (41.8176845s)
--- PASS: TestDockerFlags (427.19s)

TestForceSystemdFlag (502.8s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-windows-amd64.exe start -p force-systemd-flag-415500 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=hyperv
docker_test.go:91: (dbg) Done: out/minikube-windows-amd64.exe start -p force-systemd-flag-415500 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=hyperv: (7m12.0682859s)
docker_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe -p force-systemd-flag-415500 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe -p force-systemd-flag-415500 ssh "docker info --format {{.CgroupDriver}}": (10.3511603s)
helpers_test.go:175: Cleaning up "force-systemd-flag-415500" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p force-systemd-flag-415500
E0318 13:45:42.120400   13424 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-611000\client.crt: The system cannot find the path specified.
E0318 13:45:56.322738   13424 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-209500\client.crt: The system cannot find the path specified.
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p force-systemd-flag-415500: (1m0.3762417s)
--- PASS: TestForceSystemdFlag (502.80s)

TestForceSystemdEnv (438.04s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-windows-amd64.exe start -p force-systemd-env-838900 --memory=2048 --alsologtostderr -v=5 --driver=hyperv
docker_test.go:155: (dbg) Done: out/minikube-windows-amd64.exe start -p force-systemd-env-838900 --memory=2048 --alsologtostderr -v=5 --driver=hyperv: (6m11.3247228s)
docker_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe -p force-systemd-env-838900 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe -p force-systemd-env-838900 ssh "docker info --format {{.CgroupDriver}}": (9.425972s)
helpers_test.go:175: Cleaning up "force-systemd-env-838900" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p force-systemd-env-838900
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p force-systemd-env-838900: (57.2832313s)
--- PASS: TestForceSystemdEnv (438.04s)

TestErrorSpam/start (16.79s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-852500 --log_dir C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-852500 start --dry-run
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-852500 --log_dir C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-852500 start --dry-run: (5.5495082s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-852500 --log_dir C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-852500 start --dry-run
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-852500 --log_dir C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-852500 start --dry-run: (5.5830623s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-852500 --log_dir C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-852500 start --dry-run
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-852500 --log_dir C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-852500 start --dry-run: (5.629955s)
--- PASS: TestErrorSpam/start (16.79s)

TestErrorSpam/status (35.38s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-852500 --log_dir C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-852500 status
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p nospam-852500 --log_dir C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-852500 status: exit status 6 (12.1456677s)

-- stdout --
	nospam-852500
	type: Control Plane
	host: Running
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Misconfigured
	
	
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	W0318 11:19:56.263611    6752 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E0318 11:20:08.227415    6752 status.go:417] kubeconfig endpoint: get endpoint: "nospam-852500" does not appear in C:\Users\jenkins.minikube3\minikube-integration\kubeconfig

** /stderr **
error_spam_test.go:161: "out/minikube-windows-amd64.exe -p nospam-852500 --log_dir C:\\Users\\jenkins.minikube3\\AppData\\Local\\Temp\\nospam-852500 status" failed: exit status 6
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-852500 --log_dir C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-852500 status
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p nospam-852500 --log_dir C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-852500 status: exit status 6 (11.5269301s)

-- stdout --
	nospam-852500
	type: Control Plane
	host: Running
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Misconfigured
	
	
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	W0318 11:20:08.388507   10112 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E0318 11:20:19.757136   10112 status.go:417] kubeconfig endpoint: get endpoint: "nospam-852500" does not appear in C:\Users\jenkins.minikube3\minikube-integration\kubeconfig

** /stderr **
error_spam_test.go:161: "out/minikube-windows-amd64.exe -p nospam-852500 --log_dir C:\\Users\\jenkins.minikube3\\AppData\\Local\\Temp\\nospam-852500 status" failed: exit status 6
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-852500 --log_dir C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-852500 status
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p nospam-852500 --log_dir C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-852500 status: exit status 6 (11.6806589s)

-- stdout --
	nospam-852500
	type: Control Plane
	host: Running
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Misconfigured
	
	
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	W0318 11:20:19.928023   13728 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E0318 11:20:31.449721   13728 status.go:417] kubeconfig endpoint: get endpoint: "nospam-852500" does not appear in C:\Users\jenkins.minikube3\minikube-integration\kubeconfig

** /stderr **
error_spam_test.go:184: "out/minikube-windows-amd64.exe -p nospam-852500 --log_dir C:\\Users\\jenkins.minikube3\\AppData\\Local\\Temp\\nospam-852500 status" failed: exit status 6
--- PASS: TestErrorSpam/status (35.38s)

TestErrorSpam/pause (26.29s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-852500 --log_dir C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-852500 pause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p nospam-852500 --log_dir C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-852500 pause: exit status 80 (8.7331147s)

                                                
                                                
-- stdout --
	* Pausing node nospam-852500 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0318 11:20:31.624526   10352 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	X Exiting due to GUEST_PAUSE: Pause: kubelet disable --now: sudo systemctl disable --now kubelet: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file kubelet.service does not exist.
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - C:\Users\jenkins.minikube3\AppData\Local\Temp\minikube_delete_d33583e4620696992d54ac91e3e1b797d333b62f_20.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:161: "out/minikube-windows-amd64.exe -p nospam-852500 --log_dir C:\\Users\\jenkins.minikube3\\AppData\\Local\\Temp\\nospam-852500 pause" failed: exit status 80
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-852500 --log_dir C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-852500 pause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p nospam-852500 --log_dir C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-852500 pause: exit status 80 (8.8935847s)

                                                
                                                
-- stdout --
	* Pausing node nospam-852500 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0318 11:20:40.362213   13120 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	X Exiting due to GUEST_PAUSE: Pause: kubelet disable --now: sudo systemctl disable --now kubelet: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file kubelet.service does not exist.
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - C:\Users\jenkins.minikube3\AppData\Local\Temp\minikube_delete_d33583e4620696992d54ac91e3e1b797d333b62f_20.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:161: "out/minikube-windows-amd64.exe -p nospam-852500 --log_dir C:\\Users\\jenkins.minikube3\\AppData\\Local\\Temp\\nospam-852500 pause" failed: exit status 80
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-852500 --log_dir C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-852500 pause
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p nospam-852500 --log_dir C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-852500 pause: exit status 80 (8.644458s)

                                                
                                                
-- stdout --
	* Pausing node nospam-852500 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0318 11:20:49.255444   14224 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	X Exiting due to GUEST_PAUSE: Pause: kubelet disable --now: sudo systemctl disable --now kubelet: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file kubelet.service does not exist.
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - C:\Users\jenkins.minikube3\AppData\Local\Temp\minikube_delete_d33583e4620696992d54ac91e3e1b797d333b62f_20.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:184: "out/minikube-windows-amd64.exe -p nospam-852500 --log_dir C:\\Users\\jenkins.minikube3\\AppData\\Local\\Temp\\nospam-852500 pause" failed: exit status 80
--- PASS: TestErrorSpam/pause (26.29s)

                                                
                                    
TestErrorSpam/unpause (162.71s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-852500 --log_dir C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-852500 unpause
E0318 11:21:13.029484   13424 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-209500\client.crt: The system cannot find the path specified.
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p nospam-852500 --log_dir C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-852500 unpause: exit status 80 (42.1776554s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-852500 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0318 11:20:57.908318    8624 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: docker: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system|kubernetes-dashboard|storage-gluster|istio-operator)_ --format=<no value>: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - C:\Users\jenkins.minikube3\AppData\Local\Temp\minikube_delete_d33583e4620696992d54ac91e3e1b797d333b62f_20.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:161: "out/minikube-windows-amd64.exe -p nospam-852500 --log_dir C:\\Users\\jenkins.minikube3\\AppData\\Local\\Temp\\nospam-852500 unpause" failed: exit status 80
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-852500 --log_dir C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-852500 unpause
E0318 11:21:40.827472   13424 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-209500\client.crt: The system cannot find the path specified.
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p nospam-852500 --log_dir C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-852500 unpause: exit status 80 (1m0.2796186s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-852500 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0318 11:21:40.103639    4232 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: docker: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system|kubernetes-dashboard|storage-gluster|istio-operator)_ --format=<no value>: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - C:\Users\jenkins.minikube3\AppData\Local\Temp\minikube_delete_d33583e4620696992d54ac91e3e1b797d333b62f_20.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:161: "out/minikube-windows-amd64.exe -p nospam-852500 --log_dir C:\\Users\\jenkins.minikube3\\AppData\\Local\\Temp\\nospam-852500 unpause" failed: exit status 80
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-852500 --log_dir C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-852500 unpause
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p nospam-852500 --log_dir C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-852500 unpause: exit status 80 (1m0.2288604s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-852500 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0318 11:22:40.382491    3784 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: docker: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system|kubernetes-dashboard|storage-gluster|istio-operator)_ --format=<no value>: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - C:\Users\jenkins.minikube3\AppData\Local\Temp\minikube_delete_d33583e4620696992d54ac91e3e1b797d333b62f_20.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:184: "out/minikube-windows-amd64.exe -p nospam-852500 --log_dir C:\\Users\\jenkins.minikube3\\AppData\\Local\\Temp\\nospam-852500 unpause" failed: exit status 80
--- PASS: TestErrorSpam/unpause (162.71s)

                                                
                                    
TestErrorSpam/stop (88.87s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-852500 --log_dir C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-852500 stop
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-852500 --log_dir C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-852500 stop: (1m8.1200578s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-852500 --log_dir C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-852500 stop
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-852500 --log_dir C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-852500 stop: (10.3617908s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-852500 --log_dir C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-852500 stop
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-852500 --log_dir C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-852500 stop: (10.3733616s)
--- PASS: TestErrorSpam/stop (88.87s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0.03s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\test\nested\copy\13424\hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.03s)

                                                
                                    
TestFunctional/serial/StartWithProxy (203.31s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-611000 --memory=4000 --apiserver-port=8441 --wait=all --driver=hyperv
E0318 11:26:13.025386   13424 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-209500\client.crt: The system cannot find the path specified.
functional_test.go:2230: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-611000 --memory=4000 --apiserver-port=8441 --wait=all --driver=hyperv: (3m23.3023977s)
--- PASS: TestFunctional/serial/StartWithProxy (203.31s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (123.2s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-611000 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-611000 --alsologtostderr -v=8: (2m3.1934965s)
functional_test.go:659: soft start took 2m3.1951246s for "functional-611000" cluster.
--- PASS: TestFunctional/serial/SoftStart (123.20s)

                                                
                                    
TestFunctional/serial/KubeContext (0.13s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.13s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.23s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-611000 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.23s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (24.76s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-611000 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-windows-amd64.exe -p functional-611000 cache add registry.k8s.io/pause:3.1: (8.4999359s)
functional_test.go:1045: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-611000 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-windows-amd64.exe -p functional-611000 cache add registry.k8s.io/pause:3.3: (8.0817832s)
functional_test.go:1045: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-611000 cache add registry.k8s.io/pause:latest
E0318 11:31:13.040636   13424 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-209500\client.crt: The system cannot find the path specified.
functional_test.go:1045: (dbg) Done: out/minikube-windows-amd64.exe -p functional-611000 cache add registry.k8s.io/pause:latest: (8.1730774s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (24.76s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (10.08s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-611000 C:\Users\jenkins.minikube3\AppData\Local\Temp\TestFunctionalserialCacheCmdcacheadd_local4279396127\001
functional_test.go:1073: (dbg) Done: docker build -t minikube-local-cache-test:functional-611000 C:\Users\jenkins.minikube3\AppData\Local\Temp\TestFunctionalserialCacheCmdcacheadd_local4279396127\001: (2.1512849s)
functional_test.go:1085: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-611000 cache add minikube-local-cache-test:functional-611000
functional_test.go:1085: (dbg) Done: out/minikube-windows-amd64.exe -p functional-611000 cache add minikube-local-cache-test:functional-611000: (7.4289489s)
functional_test.go:1090: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-611000 cache delete minikube-local-cache-test:functional-611000
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-611000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (10.08s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.25s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.25s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.26s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-windows-amd64.exe cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.26s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (8.67s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-611000 ssh sudo crictl images
functional_test.go:1120: (dbg) Done: out/minikube-windows-amd64.exe -p functional-611000 ssh sudo crictl images: (8.6582002s)
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (8.67s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (33.81s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-611000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1143: (dbg) Done: out/minikube-windows-amd64.exe -p functional-611000 ssh sudo docker rmi registry.k8s.io/pause:latest: (8.5316773s)
functional_test.go:1149: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-611000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-611000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (8.5041436s)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	W0318 11:31:45.121801   10916 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-611000 cache reload
functional_test.go:1154: (dbg) Done: out/minikube-windows-amd64.exe -p functional-611000 cache reload: (7.8457764s)
functional_test.go:1159: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-611000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1159: (dbg) Done: out/minikube-windows-amd64.exe -p functional-611000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: (8.9023161s)
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (33.81s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.52s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.52s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.43s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-611000 kubectl -- --context functional-611000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.43s)

                                                
                                    
TestFunctional/serial/ExtraConfig (118.09s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-611000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-611000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (1m58.0819221s)
functional_test.go:757: restart took 1m58.0938648s for "functional-611000" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (118.09s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.18s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-611000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.18s)

                                                
                                    
TestFunctional/serial/LogsCmd (7.85s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-611000 logs
functional_test.go:1232: (dbg) Done: out/minikube-windows-amd64.exe -p functional-611000 logs: (7.8392709s)
--- PASS: TestFunctional/serial/LogsCmd (7.85s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (9.8s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-611000 logs --file C:\Users\jenkins.minikube3\AppData\Local\Temp\TestFunctionalserialLogsFileCmd505035258\001\logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-windows-amd64.exe -p functional-611000 logs --file C:\Users\jenkins.minikube3\AppData\Local\Temp\TestFunctionalserialLogsFileCmd505035258\001\logs.txt: (9.7897947s)
--- PASS: TestFunctional/serial/LogsFileCmd (9.80s)

                                                
                                    
TestFunctional/serial/InvalidService (19.49s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-611000 apply -f testdata\invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-windows-amd64.exe service invalid-svc -p functional-611000
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-windows-amd64.exe service invalid-svc -p functional-611000: exit status 115 (15.3501198s)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|-----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |             URL             |
	|-----------|-------------|-------------|-----------------------------|
	| default   | invalid-svc |          80 | http://172.30.129.196:30381 |
	|-----------|-------------|-------------|-----------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0318 11:35:02.192332    1272 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - C:\Users\jenkins.minikube3\AppData\Local\Temp\minikube_service_5a553248039ac2ab6beea740c8d8ce1b809666c7_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-611000 delete -f testdata\invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (19.49s)
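The SVC_UNREACHABLE exit above fires because the service's selector matches no running pod. A sketch of that endpoint condition (the pod data is illustrative, not from this run):

```python
# A service is unreachable in the sense reported above when no Running
# pod carries all of the service's selector labels.
def has_running_backend(selector, pods):
    return any(pod["labels"].items() >= selector.items()
               and pod["phase"] == "Running"
               for pod in pods)

# invalid-svc's only candidate pod never leaves Pending, so the lookup fails.
pods = [{"labels": {"app": "invalid-svc"}, "phase": "Pending"}]
reachable = has_running_backend({"app": "invalid-svc"}, pods)
print(reachable)  # -> False
```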

                                                
                                    
TestFunctional/parallel/StatusCmd (41.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-611000 status
functional_test.go:850: (dbg) Done: out/minikube-windows-amd64.exe -p functional-611000 status: (13.1769492s)
functional_test.go:856: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-611000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:856: (dbg) Done: out/minikube-windows-amd64.exe -p functional-611000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}: (13.8680446s)
functional_test.go:868: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-611000 status -o json
functional_test.go:868: (dbg) Done: out/minikube-windows-amd64.exe -p functional-611000 status -o json: (14.2410033s)
--- PASS: TestFunctional/parallel/StatusCmd (41.29s)
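The `-f` flag above renders a Go template over the status struct ("kublet" is spelled that way in the test's own format string). The substitution it performs is roughly the following; the field values here are assumptions for illustration:

```python
# Rough Python equivalent of rendering the Go template
# "host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}".
status = {"Host": "Running", "Kubelet": "Running",
          "APIServer": "Running", "Kubeconfig": "Configured"}

template = "host:{Host},kublet:{Kubelet},apiserver:{APIServer},kubeconfig:{Kubeconfig}"
rendered = template.format(**status)
print(rendered)  # -> host:Running,kublet:Running,apiserver:Running,kubeconfig:Configured
```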

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (44.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1625: (dbg) Run:  kubectl --context functional-611000 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-611000 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-55497b8b78-clvrh" [2f57151d-2dcf-4741-859e-5968575d8a31] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-55497b8b78-clvrh" [2f57151d-2dcf-4741-859e-5968575d8a31] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 25.0225633s
functional_test.go:1645: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-611000 service hello-node-connect --url
functional_test.go:1645: (dbg) Done: out/minikube-windows-amd64.exe -p functional-611000 service hello-node-connect --url: (18.7581732s)
functional_test.go:1651: found endpoint for hello-node-connect: http://172.30.129.196:30441
functional_test.go:1671: http://172.30.129.196:30441: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-55497b8b78-clvrh

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://172.30.129.196:8080/

Request Headers:
	accept-encoding=gzip
	host=172.30.129.196:30441
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (44.26s)
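The connectivity check above boils down to: obtain the service URL, issue a GET, and verify the echoed body. The same round trip against a throwaway local echo server, standing in for the echoserver pod and the NodePort URL that `minikube service --url` prints:

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Stand-in for the echoserver pod: echo the request path back so the
# client can verify end-to-end connectivity the way the test does.
class Echo(BaseHTTPRequestHandler):
    def do_GET(self):
        payload = f"real path={self.path}\n".encode()
        self.send_response(200)
        self.send_header("Content-Length", str(len(payload)))
        self.end_headers()
        self.wfile.write(payload)

    def log_message(self, fmt, *args):  # silence per-request logging
        pass

server = HTTPServer(("127.0.0.1", 0), Echo)
threading.Thread(target=server.serve_forever, daemon=True).start()

url = f"http://127.0.0.1:{server.server_port}/"
body = urllib.request.urlopen(url, timeout=5).read().decode()
server.shutdown()
print(body)  # -> real path=/
```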

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.83s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-611000 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-611000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.83s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (42.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [ea9b5188-5461-42f3-af2a-1c0652841cdd] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.0311074s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-611000 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-611000 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-611000 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-611000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [f60958e7-6825-45a5-abd4-b29685c62632] Pending
helpers_test.go:344: "sp-pod" [f60958e7-6825-45a5-abd4-b29685c62632] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [f60958e7-6825-45a5-abd4-b29685c62632] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 22.0275707s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-611000 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-611000 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-611000 delete -f testdata/storage-provisioner/pod.yaml: (1.6829097s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-611000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [06309292-e8c7-4afc-a920-487041a93794] Pending
helpers_test.go:344: "sp-pod" [06309292-e8c7-4afc-a920-487041a93794] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [06309292-e8c7-4afc-a920-487041a93794] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.0205346s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-611000 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (42.08s)
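The PVC test above writes `/tmp/mount/foo` from one pod, deletes the pod, starts a fresh pod on the same claim, and lists the mount to confirm the file survived. The persistence check it performs, modeled with a local directory standing in for the volume (the pod lifecycle is only simulated):

```python
import pathlib
import tempfile

# A temp directory plays the role of the persistent volume backing the claim.
volume = pathlib.Path(tempfile.mkdtemp())

(volume / "foo").touch()                          # pod 1: touch /tmp/mount/foo
# ... pod 1 deleted; the claim, and its data, outlives it ...
names = sorted(p.name for p in volume.iterdir())  # pod 2: ls /tmp/mount
print(names)  # -> ['foo']
```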

                                                
                                    
TestFunctional/parallel/SSHCmd (19.99s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-611000 ssh "echo hello"
functional_test.go:1721: (dbg) Done: out/minikube-windows-amd64.exe -p functional-611000 ssh "echo hello": (10.097395s)
functional_test.go:1738: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-611000 ssh "cat /etc/hostname"
functional_test.go:1738: (dbg) Done: out/minikube-windows-amd64.exe -p functional-611000 ssh "cat /etc/hostname": (9.8826102s)
--- PASS: TestFunctional/parallel/SSHCmd (19.99s)

                                                
                                    
TestFunctional/parallel/CpCmd (58.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-611000 cp testdata\cp-test.txt /home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p functional-611000 cp testdata\cp-test.txt /home/docker/cp-test.txt: (9.0437266s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-611000 ssh -n functional-611000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p functional-611000 ssh -n functional-611000 "sudo cat /home/docker/cp-test.txt": (10.1426019s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-611000 cp functional-611000:/home/docker/cp-test.txt C:\Users\jenkins.minikube3\AppData\Local\Temp\TestFunctionalparallelCpCmd3744354271\001\cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p functional-611000 cp functional-611000:/home/docker/cp-test.txt C:\Users\jenkins.minikube3\AppData\Local\Temp\TestFunctionalparallelCpCmd3744354271\001\cp-test.txt: (10.0953328s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-611000 ssh -n functional-611000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p functional-611000 ssh -n functional-611000 "sudo cat /home/docker/cp-test.txt": (9.9645643s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-611000 cp testdata\cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p functional-611000 cp testdata\cp-test.txt /tmp/does/not/exist/cp-test.txt: (8.5277413s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-611000 ssh -n functional-611000 "sudo cat /tmp/does/not/exist/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p functional-611000 ssh -n functional-611000 "sudo cat /tmp/does/not/exist/cp-test.txt": (10.6264672s)
--- PASS: TestFunctional/parallel/CpCmd (58.41s)
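The CpCmd test copies a file into the guest, then reads it back over `ssh ... sudo cat` to confirm the contents match, including into a destination directory that does not yet exist. The same round-trip check, modeled with local directories (paths here are illustrative):

```python
import pathlib
import shutil
import tempfile

src_dir = pathlib.Path(tempfile.mkdtemp())
dst_dir = pathlib.Path(tempfile.mkdtemp()) / "does" / "not" / "exist"

src = src_dir / "cp-test.txt"
src.write_text("Test file for minikube cp\n")

dst = dst_dir / "cp-test.txt"
dst.parent.mkdir(parents=True)   # like the /tmp/does/not/exist case above
shutil.copyfile(src, dst)

same = dst.read_text() == src.read_text()
print(same)  # -> True
```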

                                                
                                    
TestFunctional/parallel/MySQL (68.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-611000 replace --force -f testdata\mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-859648c796-2h9gs" [0f59995e-4e0a-4464-a706-08150aee5cb5] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-859648c796-2h9gs" [0f59995e-4e0a-4464-a706-08150aee5cb5] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 51.016648s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-611000 exec mysql-859648c796-2h9gs -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-611000 exec mysql-859648c796-2h9gs -- mysql -ppassword -e "show databases;": exit status 1 (380.0879ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-611000 exec mysql-859648c796-2h9gs -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-611000 exec mysql-859648c796-2h9gs -- mysql -ppassword -e "show databases;": exit status 1 (429.9289ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-611000 exec mysql-859648c796-2h9gs -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-611000 exec mysql-859648c796-2h9gs -- mysql -ppassword -e "show databases;": exit status 1 (394.2675ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-611000 exec mysql-859648c796-2h9gs -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-611000 exec mysql-859648c796-2h9gs -- mysql -ppassword -e "show databases;": exit status 1 (405.1401ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-611000 exec mysql-859648c796-2h9gs -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-611000 exec mysql-859648c796-2h9gs -- mysql -ppassword -e "show databases;": exit status 1 (324.7068ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-611000 exec mysql-859648c796-2h9gs -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (68.05s)
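The log above shows the probe retrying `mysql -e "show databases;"` through two startup-phase failures: ERROR 2002 (server socket not yet listening) and ERROR 1045 (root password not yet applied). A bounded retry loop absorbs both; here `attempts` is a fake standing in for the real `kubectl exec` results:

```python
import time

# (exit_code, output) pairs mimicking the sequence seen in the log:
# two transient startup errors, then a successful query.
attempts = iter([
    (1, "ERROR 2002 (HY000): Can't connect to local MySQL server"),
    (1, "ERROR 1045 (28000): Access denied for user 'root'@'localhost'"),
    (0, "Database\ninformation_schema\nmysql"),
])

def query_with_retry(run, tries=10, delay=0.0):
    for _ in range(tries):
        code, out = run()
        if code == 0:
            return out
        time.sleep(delay)  # a real probe would back off between tries
    raise RuntimeError("mysql never became ready")

out = query_with_retry(lambda: next(attempts))
print(out.splitlines()[1])  # -> information_schema
```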

                                                
                                    
TestFunctional/parallel/FileSync (10.86s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/13424/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-611000 ssh "sudo cat /etc/test/nested/copy/13424/hosts"
functional_test.go:1927: (dbg) Done: out/minikube-windows-amd64.exe -p functional-611000 ssh "sudo cat /etc/test/nested/copy/13424/hosts": (10.8520076s)
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (10.86s)

                                                
                                    
TestFunctional/parallel/CertSync (65.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/13424.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-611000 ssh "sudo cat /etc/ssl/certs/13424.pem"
functional_test.go:1969: (dbg) Done: out/minikube-windows-amd64.exe -p functional-611000 ssh "sudo cat /etc/ssl/certs/13424.pem": (12.1507377s)
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/13424.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-611000 ssh "sudo cat /usr/share/ca-certificates/13424.pem"
functional_test.go:1969: (dbg) Done: out/minikube-windows-amd64.exe -p functional-611000 ssh "sudo cat /usr/share/ca-certificates/13424.pem": (10.539741s)
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-611000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1969: (dbg) Done: out/minikube-windows-amd64.exe -p functional-611000 ssh "sudo cat /etc/ssl/certs/51391683.0": (11.3840398s)
functional_test.go:1995: Checking for existence of /etc/ssl/certs/134242.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-611000 ssh "sudo cat /etc/ssl/certs/134242.pem"
functional_test.go:1996: (dbg) Done: out/minikube-windows-amd64.exe -p functional-611000 ssh "sudo cat /etc/ssl/certs/134242.pem": (9.9233082s)
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/134242.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-611000 ssh "sudo cat /usr/share/ca-certificates/134242.pem"
functional_test.go:1996: (dbg) Done: out/minikube-windows-amd64.exe -p functional-611000 ssh "sudo cat /usr/share/ca-certificates/134242.pem": (11.0430347s)
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-611000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
functional_test.go:1996: (dbg) Done: out/minikube-windows-amd64.exe -p functional-611000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0": (10.2205723s)
--- PASS: TestFunctional/parallel/CertSync (65.27s)

                                                
                                    
TestFunctional/parallel/NodeLabels (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-611000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.22s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (12.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-611000 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-611000 ssh "sudo systemctl is-active crio": exit status 1 (12.4821475s)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	W0318 11:35:18.329213    9352 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (12.49s)
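`systemctl is-active` prints the unit state and exits non-zero for a unit that is not active; the log above shows "inactive" on stdout with the ssh command exiting with status 3. The test's expectation for a runtime that should be disabled is then simply:

```python
# crio should be disabled when docker is the active runtime, so the
# expected result is a non-zero exit with "inactive" on stdout.
def runtime_disabled(exit_code, stdout):
    return exit_code != 0 and stdout.strip() == "inactive"

print(runtime_disabled(3, "inactive\n"))  # -> True
print(runtime_disabled(0, "active\n"))    # -> False
```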

                                                
                                    
TestFunctional/parallel/License (3.77s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-windows-amd64.exe license
functional_test.go:2284: (dbg) Done: out/minikube-windows-amd64.exe license: (3.7517192s)
--- PASS: TestFunctional/parallel/License (3.77s)

                                                
                                    
TestFunctional/parallel/Version/short (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-611000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.30s)

                                                
                                    
TestFunctional/parallel/Version/components (9.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-611000 version -o=json --components
functional_test.go:2266: (dbg) Done: out/minikube-windows-amd64.exe -p functional-611000 version -o=json --components: (9.5400478s)
--- PASS: TestFunctional/parallel/Version/components (9.55s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (7.73s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-611000 image ls --format short --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-windows-amd64.exe -p functional-611000 image ls --format short --alsologtostderr: (7.7186775s)
functional_test.go:265: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-611000 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.4
registry.k8s.io/kube-proxy:v1.28.4
registry.k8s.io/kube-controller-manager:v1.28.4
registry.k8s.io/kube-apiserver:v1.28.4
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/google-containers/addon-resizer:functional-611000
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-611000
functional_test.go:268: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-611000 image ls --format short --alsologtostderr:
W0318 11:38:15.669881    5060 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I0318 11:38:15.763341    5060 out.go:291] Setting OutFile to fd 840 ...
I0318 11:38:15.764064    5060 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0318 11:38:15.764064    5060 out.go:304] Setting ErrFile to fd 912...
I0318 11:38:15.764064    5060 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0318 11:38:15.784398    5060 config.go:182] Loaded profile config "functional-611000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0318 11:38:15.785124    5060 config.go:182] Loaded profile config "functional-611000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0318 11:38:15.785968    5060 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-611000 ).state
I0318 11:38:18.171927    5060 main.go:141] libmachine: [stdout =====>] : Running

I0318 11:38:18.171927    5060 main.go:141] libmachine: [stderr =====>] : 
I0318 11:38:18.186664    5060 ssh_runner.go:195] Run: systemctl --version
I0318 11:38:18.186664    5060 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-611000 ).state
I0318 11:38:20.383567    5060 main.go:141] libmachine: [stdout =====>] : Running

I0318 11:38:20.393421    5060 main.go:141] libmachine: [stderr =====>] : 
I0318 11:38:20.393600    5060 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-611000 ).networkadapters[0]).ipaddresses[0]
I0318 11:38:23.078308    5060 main.go:141] libmachine: [stdout =====>] : 172.30.129.196

I0318 11:38:23.078308    5060 main.go:141] libmachine: [stderr =====>] : 
I0318 11:38:23.078605    5060 sshutil.go:53] new ssh client: &{IP:172.30.129.196 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\functional-611000\id_rsa Username:docker}
I0318 11:38:23.188934    5060 ssh_runner.go:235] Completed: systemctl --version: (5.0022323s)
I0318 11:38:23.198675    5060 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (7.73s)
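The short-format listing above is one `registry/repo:tag` reference per line. As an illustrative sketch (not part of minikube), the references can be grouped by registry with a few lines of Python; the sample lines below are copied from the test's stdout:

```python
from collections import defaultdict

# Sample lines from the `image ls --format short` stdout above.
stdout = """\
registry.k8s.io/pause:latest
registry.k8s.io/etcd:3.5.9-0
gcr.io/k8s-minikube/storage-provisioner:v5
docker.io/library/nginx:alpine
"""

by_registry = defaultdict(list)
for ref in stdout.splitlines():
    registry, _, rest = ref.partition("/")   # first path segment is the registry
    repo, _, tag = rest.rpartition(":")      # last colon separates the tag
    by_registry[registry].append((repo, tag))

print(sorted(by_registry))  # → ['docker.io', 'gcr.io', 'registry.k8s.io']
```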

TestFunctional/parallel/ImageCommands/ImageListTable (7.41s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-611000 image ls --format table --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-windows-amd64.exe -p functional-611000 image ls --format table --alsologtostderr: (7.4128774s)
functional_test.go:265: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-611000 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| docker.io/library/nginx                     | alpine            | e289a478ace02 | 42.6MB |
| docker.io/library/mysql                     | 5.7               | 5107333e08a87 | 501MB  |
| registry.k8s.io/pause                       | 3.9               | e6f1816883972 | 744kB  |
| registry.k8s.io/pause                       | latest            | 350b164e7ae1d | 240kB  |
| registry.k8s.io/kube-controller-manager     | v1.28.4           | d058aa5ab969c | 122MB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| docker.io/library/minikube-local-cache-test | functional-611000 | fab20e3fe6f18 | 30B    |
| registry.k8s.io/etcd                        | 3.5.9-0           | 73deb9a3f7025 | 294MB  |
| gcr.io/google-containers/addon-resizer      | functional-611000 | ffd4cfbbe753e | 32.9MB |
| registry.k8s.io/pause                       | 3.3               | 0184c1613d929 | 683kB  |
| docker.io/library/nginx                     | latest            | 92b11f67642b6 | 187MB  |
| registry.k8s.io/kube-apiserver              | v1.28.4           | 7fe0e6f37db33 | 126MB  |
| registry.k8s.io/kube-scheduler              | v1.28.4           | e3db313c6dbc0 | 60.1MB |
| registry.k8s.io/kube-proxy                  | v1.28.4           | 83f6cc407eed8 | 73.2MB |
| registry.k8s.io/coredns/coredns             | v1.10.1           | ead0a4a53df89 | 53.6MB |
| registry.k8s.io/pause                       | 3.1               | da86e6ba6ca19 | 742kB  |
| registry.k8s.io/echoserver                  | 1.8               | 82e4c8a736a4f | 95.4MB |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-611000 image ls --format table --alsologtostderr:
W0318 11:38:32.914316    5172 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I0318 11:38:33.004927    5172 out.go:291] Setting OutFile to fd 504 ...
I0318 11:38:33.005649    5172 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0318 11:38:33.005649    5172 out.go:304] Setting ErrFile to fd 992...
I0318 11:38:33.005649    5172 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0318 11:38:33.024201    5172 config.go:182] Loaded profile config "functional-611000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0318 11:38:33.027940    5172 config.go:182] Loaded profile config "functional-611000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0318 11:38:33.028718    5172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-611000 ).state
I0318 11:38:35.275972    5172 main.go:141] libmachine: [stdout =====>] : Running

I0318 11:38:35.276140    5172 main.go:141] libmachine: [stderr =====>] : 
I0318 11:38:35.296330    5172 ssh_runner.go:195] Run: systemctl --version
I0318 11:38:35.296330    5172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-611000 ).state
I0318 11:38:37.483613    5172 main.go:141] libmachine: [stdout =====>] : Running

I0318 11:38:37.483613    5172 main.go:141] libmachine: [stderr =====>] : 
I0318 11:38:37.483613    5172 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-611000 ).networkadapters[0]).ipaddresses[0]
I0318 11:38:40.022989    5172 main.go:141] libmachine: [stdout =====>] : 172.30.129.196

I0318 11:38:40.022989    5172 main.go:141] libmachine: [stderr =====>] : 
I0318 11:38:40.022989    5172 sshutil.go:53] new ssh client: &{IP:172.30.129.196 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\functional-611000\id_rsa Username:docker}
I0318 11:38:40.135709    5172 ssh_runner.go:235] Completed: systemctl --version: (4.8393428s)
I0318 11:38:40.145607    5172 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (7.41s)

TestFunctional/parallel/ImageCommands/ImageListJson (7.51s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-611000 image ls --format json --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-windows-amd64.exe -p functional-611000 image ls --format json --alsologtostderr: (7.4984158s)
functional_test.go:265: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-611000 image ls --format json --alsologtostderr:
[{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.4"],"size":"126000000"},{"id":"83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.28.4"],"size":"73200000"},{"id":"d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.4"],"size":"122000000"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"744000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"95400000"},{"id":"92b11f67642b62bbb98e7e49169c346b30e20cd3c1c034d31087e46924b9312e","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"187000000"},{"id":"e289a478ace02cd72f0a71a5b2ec0594495e1fae85faa10aae3b0da530812608","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"42600000"},{"id":"e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.4"],"size":"60100000"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-611000"],"size":"32900000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},{"id":"fab20e3fe6f189a0b2c4261e9495e4c960f858c9ed71f4b610317e215895179c","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-611000"],"size":"30"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":[],"repoTags":["docker.io/library/mysql:5.7"],"size":"501000000"},{"id":"73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"294000000"},{"id":"ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"53600000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"}]
functional_test.go:268: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-611000 image ls --format json --alsologtostderr:
W0318 11:38:25.417484    2284 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I0318 11:38:25.507662    2284 out.go:291] Setting OutFile to fd 876 ...
I0318 11:38:25.508665    2284 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0318 11:38:25.508753    2284 out.go:304] Setting ErrFile to fd 840...
I0318 11:38:25.508877    2284 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0318 11:38:25.536278    2284 config.go:182] Loaded profile config "functional-611000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0318 11:38:25.536278    2284 config.go:182] Loaded profile config "functional-611000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0318 11:38:25.537368    2284 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-611000 ).state
I0318 11:38:27.724152    2284 main.go:141] libmachine: [stdout =====>] : Running

I0318 11:38:27.724152    2284 main.go:141] libmachine: [stderr =====>] : 
I0318 11:38:27.740916    2284 ssh_runner.go:195] Run: systemctl --version
I0318 11:38:27.740916    2284 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-611000 ).state
I0318 11:38:29.900923    2284 main.go:141] libmachine: [stdout =====>] : Running

I0318 11:38:29.911360    2284 main.go:141] libmachine: [stderr =====>] : 
I0318 11:38:29.911420    2284 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-611000 ).networkadapters[0]).ipaddresses[0]
I0318 11:38:32.595948    2284 main.go:141] libmachine: [stdout =====>] : 172.30.129.196

I0318 11:38:32.605329    2284 main.go:141] libmachine: [stderr =====>] : 
I0318 11:38:32.605526    2284 sshutil.go:53] new ssh client: &{IP:172.30.129.196 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\functional-611000\id_rsa Username:docker}
I0318 11:38:32.715394    2284 ssh_runner.go:235] Completed: systemctl --version: (4.9743437s)
I0318 11:38:32.725619    2284 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (7.51s)
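The JSON stdout above is a plain array, so it can be consumed with any JSON parser. A minimal sketch, using two entries copied from the output (ids shortened here for brevity); note that `size` is reported as a string, not a number:

```python
import json

# Excerpt of the `image ls --format json` stdout shown above.
excerpt = '''[
  {"id": "350b164e7ae1", "repoDigests": [], "repoTags": ["registry.k8s.io/pause:latest"], "size": "240000"},
  {"id": "73deb9a3f702", "repoDigests": [], "repoTags": ["registry.k8s.io/etcd:3.5.9-0"], "size": "294000000"}
]'''

images = json.loads(excerpt)
total_bytes = sum(int(img["size"]) for img in images)  # "size" is a JSON string
tags = [tag for img in images for tag in img["repoTags"]]
print(total_bytes, tags)
```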

TestFunctional/parallel/ImageCommands/ImageListYaml (7.45s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-611000 image ls --format yaml --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-windows-amd64.exe -p functional-611000 image ls --format yaml --alsologtostderr: (7.453555s)
functional_test.go:265: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-611000 image ls --format yaml --alsologtostderr:
- id: 92b11f67642b62bbb98e7e49169c346b30e20cd3c1c034d31087e46924b9312e
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "187000000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: fab20e3fe6f189a0b2c4261e9495e4c960f858c9ed71f4b610317e215895179c
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-611000
size: "30"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests: []
repoTags:
- docker.io/library/mysql:5.7
size: "501000000"
- id: d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.4
size: "122000000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-611000
size: "32900000"
- id: 7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.4
size: "126000000"
- id: e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.4
size: "60100000"
- id: ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "53600000"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "744000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- registry.k8s.io/echoserver:1.8
size: "95400000"
- id: e289a478ace02cd72f0a71a5b2ec0594495e1fae85faa10aae3b0da530812608
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "42600000"
- id: 83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.28.4
size: "73200000"
- id: 73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "294000000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"

functional_test.go:268: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-611000 image ls --format yaml --alsologtostderr:
W0318 11:38:17.964096   11160 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I0318 11:38:18.062929   11160 out.go:291] Setting OutFile to fd 868 ...
I0318 11:38:18.077117   11160 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0318 11:38:18.077117   11160 out.go:304] Setting ErrFile to fd 708...
I0318 11:38:18.077117   11160 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0318 11:38:18.093283   11160 config.go:182] Loaded profile config "functional-611000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0318 11:38:18.093875   11160 config.go:182] Loaded profile config "functional-611000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0318 11:38:18.094635   11160 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-611000 ).state
I0318 11:38:20.277423   11160 main.go:141] libmachine: [stdout =====>] : Running

I0318 11:38:20.277423   11160 main.go:141] libmachine: [stderr =====>] : 
I0318 11:38:20.289987   11160 ssh_runner.go:195] Run: systemctl --version
I0318 11:38:20.289987   11160 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-611000 ).state
I0318 11:38:22.480202   11160 main.go:141] libmachine: [stdout =====>] : Running

I0318 11:38:22.480438   11160 main.go:141] libmachine: [stderr =====>] : 
I0318 11:38:22.480525   11160 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-611000 ).networkadapters[0]).ipaddresses[0]
I0318 11:38:25.097079   11160 main.go:141] libmachine: [stdout =====>] : 172.30.129.196

I0318 11:38:25.097079   11160 main.go:141] libmachine: [stderr =====>] : 
I0318 11:38:25.109819   11160 sshutil.go:53] new ssh client: &{IP:172.30.129.196 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\functional-611000\id_rsa Username:docker}
I0318 11:38:25.213574   11160 ssh_runner.go:235] Completed: systemctl --version: (4.9235504s)
I0318 11:38:25.224220   11160 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (7.45s)

TestFunctional/parallel/ImageCommands/ImageBuild (26.42s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-611000 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-611000 ssh pgrep buildkitd: exit status 1 (9.7738676s)

** stderr ** 
	W0318 11:38:23.387644    5716 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	ssh: Process exited with status 1

** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-611000 image build -t localhost/my-image:functional-611000 testdata\build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-windows-amd64.exe -p functional-611000 image build -t localhost/my-image:functional-611000 testdata\build --alsologtostderr: (9.4380958s)
functional_test.go:319: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-611000 image build -t localhost/my-image:functional-611000 testdata\build --alsologtostderr:
Sending build context to Docker daemon  3.072kB

Step 1/3 : FROM gcr.io/k8s-minikube/busybox
latest: Pulling from k8s-minikube/busybox
5cc84ad355aa: Pulling fs layer
5cc84ad355aa: Verifying Checksum
5cc84ad355aa: Download complete
5cc84ad355aa: Pull complete
Digest: sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:latest
---> beae173ccac6
Step 2/3 : RUN true
---> Running in 70afaab2636b
---> Removed intermediate container 70afaab2636b
---> 38d055d2c04e
Step 3/3 : ADD content.txt /
---> 3adb9487b454
Successfully built 3adb9487b454
Successfully tagged localhost/my-image:functional-611000
functional_test.go:322: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-611000 image build -t localhost/my-image:functional-611000 testdata\build --alsologtostderr:
W0318 11:38:33.158752     200 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I0318 11:38:33.260154     200 out.go:291] Setting OutFile to fd 1008 ...
I0318 11:38:33.278906     200 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0318 11:38:33.278961     200 out.go:304] Setting ErrFile to fd 732...
I0318 11:38:33.278961     200 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0318 11:38:33.297680     200 config.go:182] Loaded profile config "functional-611000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0318 11:38:33.311032     200 config.go:182] Loaded profile config "functional-611000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0318 11:38:33.314152     200 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-611000 ).state
I0318 11:38:35.477869     200 main.go:141] libmachine: [stdout =====>] : Running

I0318 11:38:35.477869     200 main.go:141] libmachine: [stderr =====>] : 
I0318 11:38:35.490513     200 ssh_runner.go:195] Run: systemctl --version
I0318 11:38:35.490710     200 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-611000 ).state
I0318 11:38:37.630304     200 main.go:141] libmachine: [stdout =====>] : Running

I0318 11:38:37.630304     200 main.go:141] libmachine: [stderr =====>] : 
I0318 11:38:37.630427     200 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-611000 ).networkadapters[0]).ipaddresses[0]
I0318 11:38:40.257770     200 main.go:141] libmachine: [stdout =====>] : 172.30.129.196

I0318 11:38:40.257770     200 main.go:141] libmachine: [stderr =====>] : 
I0318 11:38:40.257770     200 sshutil.go:53] new ssh client: &{IP:172.30.129.196 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\functional-611000\id_rsa Username:docker}
I0318 11:38:40.355099     200 ssh_runner.go:235] Completed: systemctl --version: (4.8643526s)
I0318 11:38:40.355099     200 build_images.go:161] Building image from path: C:\Users\jenkins.minikube3\AppData\Local\Temp\build.4177196276.tar
I0318 11:38:40.367983     200 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0318 11:38:40.396704     200 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.4177196276.tar
I0318 11:38:40.403825     200 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.4177196276.tar: stat -c "%s %y" /var/lib/minikube/build/build.4177196276.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.4177196276.tar': No such file or directory
I0318 11:38:40.403994     200 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\AppData\Local\Temp\build.4177196276.tar --> /var/lib/minikube/build/build.4177196276.tar (3072 bytes)
I0318 11:38:40.460888     200 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.4177196276
I0318 11:38:40.491796     200 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.4177196276 -xf /var/lib/minikube/build/build.4177196276.tar
I0318 11:38:40.511063     200 docker.go:360] Building image: /var/lib/minikube/build/build.4177196276
I0318 11:38:40.521396     200 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-611000 /var/lib/minikube/build/build.4177196276
DEPRECATED: The legacy builder is deprecated and will be removed in a future release.
Install the buildx component to build images with BuildKit:
https://docs.docker.com/go/buildx/

I0318 11:38:42.402229     200 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-611000 /var/lib/minikube/build/build.4177196276: (1.8807194s)
I0318 11:38:42.413712     200 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.4177196276
I0318 11:38:42.444144     200 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.4177196276.tar
I0318 11:38:42.461482     200 build_images.go:217] Built localhost/my-image:functional-611000 from C:\Users\jenkins.minikube3\AppData\Local\Temp\build.4177196276.tar
I0318 11:38:42.461542     200 build_images.go:133] succeeded building to: functional-611000
I0318 11:38:42.461625     200 build_images.go:134] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-611000 image ls
functional_test.go:447: (dbg) Done: out/minikube-windows-amd64.exe -p functional-611000 image ls: (7.2012833s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (26.42s)
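Per the log, `image build` packages the local `testdata\build` directory into a tarball (`build.*.tar`), copies it to `/var/lib/minikube/build` over SSH, extracts it, and runs `docker build` inside the VM. A sketch of building an equivalent in-memory context tar with Python's `tarfile`; the Dockerfile body is inferred from the three build steps in the stdout above, and `content.txt`'s contents are a placeholder:

```python
import io
import tarfile

# Dockerfile reconstructed from "Step 1/3 .. Step 3/3" in the build output.
dockerfile = b"FROM gcr.io/k8s-minikube/busybox\nRUN true\nADD content.txt /\n"
content = b"placeholder\n"  # actual testdata contents not shown in the log

buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w") as tar:
    for name, data in (("Dockerfile", dockerfile), ("content.txt", content)):
        info = tarfile.TarInfo(name)
        info.size = len(data)
        tar.addfile(info, io.BytesIO(data))

# Re-open the archive to confirm the build context layout.
buf.seek(0)
with tarfile.open(fileobj=buf) as tar:
    names = tar.getnames()
print(names)
```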

TestFunctional/parallel/ImageCommands/Setup (4.63s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (4.3277041s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-611000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (4.63s)

TestFunctional/parallel/DockerEnv/powershell (46.62s)

=== RUN   TestFunctional/parallel/DockerEnv/powershell
functional_test.go:495: (dbg) Run:  powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-611000 docker-env | Invoke-Expression ; out/minikube-windows-amd64.exe status -p functional-611000"
functional_test.go:495: (dbg) Done: powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-611000 docker-env | Invoke-Expression ; out/minikube-windows-amd64.exe status -p functional-611000": (31.2629814s)
functional_test.go:518: (dbg) Run:  powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-611000 docker-env | Invoke-Expression ; docker images"
functional_test.go:518: (dbg) Done: powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-611000 docker-env | Invoke-Expression ; docker images": (15.338081s)
--- PASS: TestFunctional/parallel/DockerEnv/powershell (46.62s)

TestFunctional/parallel/UpdateContextCmd/no_changes (2.62s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-611000 update-context --alsologtostderr -v=2
functional_test.go:2115: (dbg) Done: out/minikube-windows-amd64.exe -p functional-611000 update-context --alsologtostderr -v=2: (2.6151811s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (2.62s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (2.69s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-611000 update-context --alsologtostderr -v=2
functional_test.go:2115: (dbg) Done: out/minikube-windows-amd64.exe -p functional-611000 update-context --alsologtostderr -v=2: (2.6818587s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (2.69s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (2.6s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-611000 update-context --alsologtostderr -v=2
functional_test.go:2115: (dbg) Done: out/minikube-windows-amd64.exe -p functional-611000 update-context --alsologtostderr -v=2: (2.5947158s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (2.60s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (25.73s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-611000 image load --daemon gcr.io/google-containers/addon-resizer:functional-611000 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-windows-amd64.exe -p functional-611000 image load --daemon gcr.io/google-containers/addon-resizer:functional-611000 --alsologtostderr: (17.4363193s)
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-611000 image ls
functional_test.go:447: (dbg) Done: out/minikube-windows-amd64.exe -p functional-611000 image ls: (8.2888528s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (25.73s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (20.93s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-611000 image load --daemon gcr.io/google-containers/addon-resizer:functional-611000 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-windows-amd64.exe -p functional-611000 image load --daemon gcr.io/google-containers/addon-resizer:functional-611000 --alsologtostderr: (13.0702672s)
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-611000 image ls
functional_test.go:447: (dbg) Done: out/minikube-windows-amd64.exe -p functional-611000 image ls: (7.8478048s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (20.93s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (26.97s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
E0318 11:36:13.026579   13424 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-209500\client.crt: The system cannot find the path specified.
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (3.8045124s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-611000
functional_test.go:244: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-611000 image load --daemon gcr.io/google-containers/addon-resizer:functional-611000 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-windows-amd64.exe -p functional-611000 image load --daemon gcr.io/google-containers/addon-resizer:functional-611000 --alsologtostderr: (15.03684s)
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-611000 image ls
functional_test.go:447: (dbg) Done: out/minikube-windows-amd64.exe -p functional-611000 image ls: (7.8236817s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (26.97s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (9.89s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-611000 image save gcr.io/google-containers/addon-resizer:functional-611000 C:\jenkins\workspace\Hyper-V_Windows_integration\addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-windows-amd64.exe -p functional-611000 image save gcr.io/google-containers/addon-resizer:functional-611000 C:\jenkins\workspace\Hyper-V_Windows_integration\addon-resizer-save.tar --alsologtostderr: (9.8862989s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (9.89s)

TestFunctional/parallel/ImageCommands/ImageRemove (16.91s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-611000 image rm gcr.io/google-containers/addon-resizer:functional-611000 --alsologtostderr
functional_test.go:391: (dbg) Done: out/minikube-windows-amd64.exe -p functional-611000 image rm gcr.io/google-containers/addon-resizer:functional-611000 --alsologtostderr: (8.3699468s)
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-611000 image ls
functional_test.go:447: (dbg) Done: out/minikube-windows-amd64.exe -p functional-611000 image ls: (8.524375s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (16.91s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (9.1s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-611000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-611000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-611000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-611000 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 9924: OpenProcess: The parameter is incorrect.
helpers_test.go:508: unable to kill pid 11656: TerminateProcess: Access is denied.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (9.10s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-611000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (26.69s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-611000 apply -f testdata\testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [16dced92-b375-4918-a735-9028feb67ae9] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [16dced92-b375-4918-a735-9028feb67ae9] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 26.0191509s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (26.69s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (16.83s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-611000 image load C:\jenkins\workspace\Hyper-V_Windows_integration\addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-windows-amd64.exe -p functional-611000 image load C:\jenkins\workspace\Hyper-V_Windows_integration\addon-resizer-save.tar --alsologtostderr: (9.7551466s)
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-611000 image ls
functional_test.go:447: (dbg) Done: out/minikube-windows-amd64.exe -p functional-611000 image ls: (7.0697156s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (16.83s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (8.63s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-611000
functional_test.go:423: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-611000 image save --daemon gcr.io/google-containers/addon-resizer:functional-611000 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-windows-amd64.exe -p functional-611000 image save --daemon gcr.io/google-containers/addon-resizer:functional-611000 --alsologtostderr: (8.1963468s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-611000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (8.63s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-611000 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 2880: TerminateProcess: Access is denied.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

TestFunctional/parallel/ServiceCmd/DeployApp (9.41s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1435: (dbg) Run:  kubectl --context functional-611000 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-611000 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-d7447cc7f-vkptp" [7a35d9ec-0cf2-424e-b1f5-f2e08d998150] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-d7447cc7f-vkptp" [7a35d9ec-0cf2-424e-b1f5-f2e08d998150] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 9.0260169s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (9.41s)

TestFunctional/parallel/ProfileCmd/profile_not_create (10.73s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-windows-amd64.exe profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
functional_test.go:1271: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (10.2944702s)
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (10.73s)

TestFunctional/parallel/ServiceCmd/List (13.86s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-611000 service list
functional_test.go:1455: (dbg) Done: out/minikube-windows-amd64.exe -p functional-611000 service list: (13.8611955s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (13.86s)

TestFunctional/parallel/ProfileCmd/profile_list (11.06s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-windows-amd64.exe profile list
functional_test.go:1306: (dbg) Done: out/minikube-windows-amd64.exe profile list: (10.7386742s)
functional_test.go:1311: Took "10.7453577s" to run "out/minikube-windows-amd64.exe profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-windows-amd64.exe profile list -l
functional_test.go:1325: Took "317.216ms" to run "out/minikube-windows-amd64.exe profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (11.06s)

TestFunctional/parallel/ServiceCmd/JSONOutput (14.63s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-611000 service list -o json
functional_test.go:1485: (dbg) Done: out/minikube-windows-amd64.exe -p functional-611000 service list -o json: (14.6274216s)
functional_test.go:1490: Took "14.6274216s" to run "out/minikube-windows-amd64.exe -p functional-611000 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (14.63s)

TestFunctional/parallel/ProfileCmd/profile_json_output (12.19s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-windows-amd64.exe profile list -o json
functional_test.go:1357: (dbg) Done: out/minikube-windows-amd64.exe profile list -o json: (11.8584259s)
functional_test.go:1362: Took "11.8696235s" to run "out/minikube-windows-amd64.exe profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-windows-amd64.exe profile list -o json --light
functional_test.go:1375: Took "322.2399ms" to run "out/minikube-windows-amd64.exe profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (12.19s)

TestFunctional/delete_addon-resizer_images (0.48s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-611000
--- PASS: TestFunctional/delete_addon-resizer_images (0.48s)

TestFunctional/delete_my-image_image (0.18s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-611000
--- PASS: TestFunctional/delete_my-image_image (0.18s)

TestFunctional/delete_minikube_cached_images (0.18s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-611000
--- PASS: TestFunctional/delete_minikube_cached_images (0.18s)

TestMultiControlPlane/serial/StartCluster (680.34s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-windows-amd64.exe start -p ha-747000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=hyperv
E0318 11:45:42.068247   13424 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-611000\client.crt: The system cannot find the path specified.
E0318 11:45:42.097364   13424 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-611000\client.crt: The system cannot find the path specified.
E0318 11:45:42.127042   13424 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-611000\client.crt: The system cannot find the path specified.
E0318 11:45:42.154094   13424 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-611000\client.crt: The system cannot find the path specified.
E0318 11:45:42.198694   13424 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-611000\client.crt: The system cannot find the path specified.
E0318 11:45:42.290764   13424 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-611000\client.crt: The system cannot find the path specified.
E0318 11:45:42.460344   13424 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-611000\client.crt: The system cannot find the path specified.
E0318 11:45:42.786381   13424 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-611000\client.crt: The system cannot find the path specified.
E0318 11:45:43.428249   13424 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-611000\client.crt: The system cannot find the path specified.
E0318 11:45:44.722601   13424 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-611000\client.crt: The system cannot find the path specified.
E0318 11:45:47.284248   13424 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-611000\client.crt: The system cannot find the path specified.
E0318 11:45:52.404603   13424 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-611000\client.crt: The system cannot find the path specified.
E0318 11:46:02.657437   13424 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-611000\client.crt: The system cannot find the path specified.
E0318 11:46:13.029238   13424 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-209500\client.crt: The system cannot find the path specified.
E0318 11:46:23.147321   13424 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-611000\client.crt: The system cannot find the path specified.
E0318 11:47:04.111833   13424 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-611000\client.crt: The system cannot find the path specified.
E0318 11:48:26.037960   13424 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-611000\client.crt: The system cannot find the path specified.
E0318 11:49:16.219331   13424 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-209500\client.crt: The system cannot find the path specified.
E0318 11:50:42.068857   13424 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-611000\client.crt: The system cannot find the path specified.
E0318 11:51:09.894655   13424 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-611000\client.crt: The system cannot find the path specified.
E0318 11:51:13.037701   13424 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-209500\client.crt: The system cannot find the path specified.
ha_test.go:101: (dbg) Done: out/minikube-windows-amd64.exe start -p ha-747000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=hyperv: (10m45.588732s)
ha_test.go:107: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-747000 status -v=7 --alsologtostderr
ha_test.go:107: (dbg) Done: out/minikube-windows-amd64.exe -p ha-747000 status -v=7 --alsologtostderr: (34.754734s)
--- PASS: TestMultiControlPlane/serial/StartCluster (680.34s)

TestMultiControlPlane/serial/DeployApp (13.04s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-747000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-747000 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p ha-747000 -- rollout status deployment/busybox: (4.5273289s)
ha_test.go:140: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-747000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-747000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-747000 -- exec busybox-5b5d89c9d6-bfx2x -- nslookup kubernetes.io
ha_test.go:171: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p ha-747000 -- exec busybox-5b5d89c9d6-bfx2x -- nslookup kubernetes.io: (1.8011488s)
ha_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-747000 -- exec busybox-5b5d89c9d6-ln6sd -- nslookup kubernetes.io
ha_test.go:171: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p ha-747000 -- exec busybox-5b5d89c9d6-ln6sd -- nslookup kubernetes.io: (1.5618924s)
ha_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-747000 -- exec busybox-5b5d89c9d6-qvfgv -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-747000 -- exec busybox-5b5d89c9d6-bfx2x -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-747000 -- exec busybox-5b5d89c9d6-ln6sd -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-747000 -- exec busybox-5b5d89c9d6-qvfgv -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-747000 -- exec busybox-5b5d89c9d6-bfx2x -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-747000 -- exec busybox-5b5d89c9d6-ln6sd -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-747000 -- exec busybox-5b5d89c9d6-qvfgv -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (13.04s)

TestMultiControlPlane/serial/AddWorkerNode (239.11s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-windows-amd64.exe node add -p ha-747000 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Done: out/minikube-windows-amd64.exe node add -p ha-747000 -v=7 --alsologtostderr: (3m13.0749871s)
ha_test.go:234: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-747000 status -v=7 --alsologtostderr
E0318 12:00:42.065891   13424 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-611000\client.crt: The system cannot find the path specified.
ha_test.go:234: (dbg) Done: out/minikube-windows-amd64.exe -p ha-747000 status -v=7 --alsologtostderr: (46.0204104s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (239.11s)

TestMultiControlPlane/serial/NodeLabels (0.2s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-747000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.20s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (27.16s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
E0318 12:01:13.039959   13424 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-209500\client.crt: The system cannot find the path specified.
ha_test.go:281: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (27.1566142s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (27.16s)

TestMultiControlPlane/serial/CopyFile (627.68s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-747000 status --output json -v=7 --alsologtostderr
ha_test.go:326: (dbg) Done: out/minikube-windows-amd64.exe -p ha-747000 status --output json -v=7 --alsologtostderr: (45.8994975s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-747000 cp testdata\cp-test.txt ha-747000:/home/docker/cp-test.txt
E0318 12:02:05.260626   13424 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-611000\client.crt: The system cannot find the path specified.
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-747000 cp testdata\cp-test.txt ha-747000:/home/docker/cp-test.txt: (9.222908s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-747000 ssh -n ha-747000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-747000 ssh -n ha-747000 "sudo cat /home/docker/cp-test.txt": (9.206982s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-747000 cp ha-747000:/home/docker/cp-test.txt C:\Users\jenkins.minikube3\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile3888916617\001\cp-test_ha-747000.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-747000 cp ha-747000:/home/docker/cp-test.txt C:\Users\jenkins.minikube3\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile3888916617\001\cp-test_ha-747000.txt: (9.1446548s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-747000 ssh -n ha-747000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-747000 ssh -n ha-747000 "sudo cat /home/docker/cp-test.txt": (9.1593921s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-747000 cp ha-747000:/home/docker/cp-test.txt ha-747000-m02:/home/docker/cp-test_ha-747000_ha-747000-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-747000 cp ha-747000:/home/docker/cp-test.txt ha-747000-m02:/home/docker/cp-test_ha-747000_ha-747000-m02.txt: (15.9224614s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-747000 ssh -n ha-747000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-747000 ssh -n ha-747000 "sudo cat /home/docker/cp-test.txt": (9.1043844s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-747000 ssh -n ha-747000-m02 "sudo cat /home/docker/cp-test_ha-747000_ha-747000-m02.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-747000 ssh -n ha-747000-m02 "sudo cat /home/docker/cp-test_ha-747000_ha-747000-m02.txt": (9.1138421s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-747000 cp ha-747000:/home/docker/cp-test.txt ha-747000-m03:/home/docker/cp-test_ha-747000_ha-747000-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-747000 cp ha-747000:/home/docker/cp-test.txt ha-747000-m03:/home/docker/cp-test_ha-747000_ha-747000-m03.txt: (15.7775489s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-747000 ssh -n ha-747000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-747000 ssh -n ha-747000 "sudo cat /home/docker/cp-test.txt": (9.0304777s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-747000 ssh -n ha-747000-m03 "sudo cat /home/docker/cp-test_ha-747000_ha-747000-m03.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-747000 ssh -n ha-747000-m03 "sudo cat /home/docker/cp-test_ha-747000_ha-747000-m03.txt": (9.1640852s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-747000 cp ha-747000:/home/docker/cp-test.txt ha-747000-m04:/home/docker/cp-test_ha-747000_ha-747000-m04.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-747000 cp ha-747000:/home/docker/cp-test.txt ha-747000-m04:/home/docker/cp-test_ha-747000_ha-747000-m04.txt: (15.7977665s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-747000 ssh -n ha-747000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-747000 ssh -n ha-747000 "sudo cat /home/docker/cp-test.txt": (9.0417017s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-747000 ssh -n ha-747000-m04 "sudo cat /home/docker/cp-test_ha-747000_ha-747000-m04.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-747000 ssh -n ha-747000-m04 "sudo cat /home/docker/cp-test_ha-747000_ha-747000-m04.txt": (8.9879934s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-747000 cp testdata\cp-test.txt ha-747000-m02:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-747000 cp testdata\cp-test.txt ha-747000-m02:/home/docker/cp-test.txt: (9.2375099s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-747000 ssh -n ha-747000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-747000 ssh -n ha-747000-m02 "sudo cat /home/docker/cp-test.txt": (9.6374434s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-747000 cp ha-747000-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube3\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile3888916617\001\cp-test_ha-747000-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-747000 cp ha-747000-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube3\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile3888916617\001\cp-test_ha-747000-m02.txt: (10.0268759s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-747000 ssh -n ha-747000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-747000 ssh -n ha-747000-m02 "sudo cat /home/docker/cp-test.txt": (9.830907s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-747000 cp ha-747000-m02:/home/docker/cp-test.txt ha-747000:/home/docker/cp-test_ha-747000-m02_ha-747000.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-747000 cp ha-747000-m02:/home/docker/cp-test.txt ha-747000:/home/docker/cp-test_ha-747000-m02_ha-747000.txt: (16.9333604s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-747000 ssh -n ha-747000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-747000 ssh -n ha-747000-m02 "sudo cat /home/docker/cp-test.txt": (9.6628403s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-747000 ssh -n ha-747000 "sudo cat /home/docker/cp-test_ha-747000-m02_ha-747000.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-747000 ssh -n ha-747000 "sudo cat /home/docker/cp-test_ha-747000-m02_ha-747000.txt": (10.1773981s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-747000 cp ha-747000-m02:/home/docker/cp-test.txt ha-747000-m03:/home/docker/cp-test_ha-747000-m02_ha-747000-m03.txt
E0318 12:05:42.068381   13424 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-611000\client.crt: The system cannot find the path specified.
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-747000 cp ha-747000-m02:/home/docker/cp-test.txt ha-747000-m03:/home/docker/cp-test_ha-747000-m02_ha-747000-m03.txt: (17.2753004s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-747000 ssh -n ha-747000-m02 "sudo cat /home/docker/cp-test.txt"
E0318 12:05:56.230145   13424 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-209500\client.crt: The system cannot find the path specified.
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-747000 ssh -n ha-747000-m02 "sudo cat /home/docker/cp-test.txt": (9.7945541s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-747000 ssh -n ha-747000-m03 "sudo cat /home/docker/cp-test_ha-747000-m02_ha-747000-m03.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-747000 ssh -n ha-747000-m03 "sudo cat /home/docker/cp-test_ha-747000-m02_ha-747000-m03.txt": (9.7767194s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-747000 cp ha-747000-m02:/home/docker/cp-test.txt ha-747000-m04:/home/docker/cp-test_ha-747000-m02_ha-747000-m04.txt
E0318 12:06:13.047991   13424 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-209500\client.crt: The system cannot find the path specified.
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-747000 cp ha-747000-m02:/home/docker/cp-test.txt ha-747000-m04:/home/docker/cp-test_ha-747000-m02_ha-747000-m04.txt: (16.9773772s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-747000 ssh -n ha-747000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-747000 ssh -n ha-747000-m02 "sudo cat /home/docker/cp-test.txt": (9.6385378s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-747000 ssh -n ha-747000-m04 "sudo cat /home/docker/cp-test_ha-747000-m02_ha-747000-m04.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-747000 ssh -n ha-747000-m04 "sudo cat /home/docker/cp-test_ha-747000-m02_ha-747000-m04.txt": (9.7240632s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-747000 cp testdata\cp-test.txt ha-747000-m03:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-747000 cp testdata\cp-test.txt ha-747000-m03:/home/docker/cp-test.txt: (9.7208486s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-747000 ssh -n ha-747000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-747000 ssh -n ha-747000-m03 "sudo cat /home/docker/cp-test.txt": (9.714807s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-747000 cp ha-747000-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube3\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile3888916617\001\cp-test_ha-747000-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-747000 cp ha-747000-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube3\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile3888916617\001\cp-test_ha-747000-m03.txt: (9.8250644s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-747000 ssh -n ha-747000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-747000 ssh -n ha-747000-m03 "sudo cat /home/docker/cp-test.txt": (9.6442709s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-747000 cp ha-747000-m03:/home/docker/cp-test.txt ha-747000:/home/docker/cp-test_ha-747000-m03_ha-747000.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-747000 cp ha-747000-m03:/home/docker/cp-test.txt ha-747000:/home/docker/cp-test_ha-747000-m03_ha-747000.txt: (17.1796879s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-747000 ssh -n ha-747000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-747000 ssh -n ha-747000-m03 "sudo cat /home/docker/cp-test.txt": (9.5592562s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-747000 ssh -n ha-747000 "sudo cat /home/docker/cp-test_ha-747000-m03_ha-747000.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-747000 ssh -n ha-747000 "sudo cat /home/docker/cp-test_ha-747000-m03_ha-747000.txt": (9.6127836s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-747000 cp ha-747000-m03:/home/docker/cp-test.txt ha-747000-m02:/home/docker/cp-test_ha-747000-m03_ha-747000-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-747000 cp ha-747000-m03:/home/docker/cp-test.txt ha-747000-m02:/home/docker/cp-test_ha-747000-m03_ha-747000-m02.txt: (16.9168768s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-747000 ssh -n ha-747000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-747000 ssh -n ha-747000-m03 "sudo cat /home/docker/cp-test.txt": (9.734015s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-747000 ssh -n ha-747000-m02 "sudo cat /home/docker/cp-test_ha-747000-m03_ha-747000-m02.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-747000 ssh -n ha-747000-m02 "sudo cat /home/docker/cp-test_ha-747000-m03_ha-747000-m02.txt": (9.5894513s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-747000 cp ha-747000-m03:/home/docker/cp-test.txt ha-747000-m04:/home/docker/cp-test_ha-747000-m03_ha-747000-m04.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-747000 cp ha-747000-m03:/home/docker/cp-test.txt ha-747000-m04:/home/docker/cp-test_ha-747000-m03_ha-747000-m04.txt: (16.6771318s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-747000 ssh -n ha-747000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-747000 ssh -n ha-747000-m03 "sudo cat /home/docker/cp-test.txt": (9.8237361s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-747000 ssh -n ha-747000-m04 "sudo cat /home/docker/cp-test_ha-747000-m03_ha-747000-m04.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-747000 ssh -n ha-747000-m04 "sudo cat /home/docker/cp-test_ha-747000-m03_ha-747000-m04.txt": (9.7117299s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-747000 cp testdata\cp-test.txt ha-747000-m04:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-747000 cp testdata\cp-test.txt ha-747000-m04:/home/docker/cp-test.txt: (9.6989973s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-747000 ssh -n ha-747000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-747000 ssh -n ha-747000-m04 "sudo cat /home/docker/cp-test.txt": (9.651277s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-747000 cp ha-747000-m04:/home/docker/cp-test.txt C:\Users\jenkins.minikube3\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile3888916617\001\cp-test_ha-747000-m04.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-747000 cp ha-747000-m04:/home/docker/cp-test.txt C:\Users\jenkins.minikube3\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile3888916617\001\cp-test_ha-747000-m04.txt: (9.6193219s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-747000 ssh -n ha-747000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-747000 ssh -n ha-747000-m04 "sudo cat /home/docker/cp-test.txt": (9.5525299s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-747000 cp ha-747000-m04:/home/docker/cp-test.txt ha-747000:/home/docker/cp-test_ha-747000-m04_ha-747000.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-747000 cp ha-747000-m04:/home/docker/cp-test.txt ha-747000:/home/docker/cp-test_ha-747000-m04_ha-747000.txt: (16.6129368s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-747000 ssh -n ha-747000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-747000 ssh -n ha-747000-m04 "sudo cat /home/docker/cp-test.txt": (9.6116265s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-747000 ssh -n ha-747000 "sudo cat /home/docker/cp-test_ha-747000-m04_ha-747000.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-747000 ssh -n ha-747000 "sudo cat /home/docker/cp-test_ha-747000-m04_ha-747000.txt": (9.6764287s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-747000 cp ha-747000-m04:/home/docker/cp-test.txt ha-747000-m02:/home/docker/cp-test_ha-747000-m04_ha-747000-m02.txt
E0318 12:10:42.072258   13424 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-611000\client.crt: The system cannot find the path specified.
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-747000 cp ha-747000-m04:/home/docker/cp-test.txt ha-747000-m02:/home/docker/cp-test_ha-747000-m04_ha-747000-m02.txt: (16.7384057s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-747000 ssh -n ha-747000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-747000 ssh -n ha-747000-m04 "sudo cat /home/docker/cp-test.txt": (9.5343577s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-747000 ssh -n ha-747000-m02 "sudo cat /home/docker/cp-test_ha-747000-m04_ha-747000-m02.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-747000 ssh -n ha-747000-m02 "sudo cat /home/docker/cp-test_ha-747000-m04_ha-747000-m02.txt": (9.6371353s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-747000 cp ha-747000-m04:/home/docker/cp-test.txt ha-747000-m03:/home/docker/cp-test_ha-747000-m04_ha-747000-m03.txt
E0318 12:11:13.056022   13424 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-209500\client.crt: The system cannot find the path specified.
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-747000 cp ha-747000-m04:/home/docker/cp-test.txt ha-747000-m03:/home/docker/cp-test_ha-747000-m04_ha-747000-m03.txt: (16.9486773s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-747000 ssh -n ha-747000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-747000 ssh -n ha-747000-m04 "sudo cat /home/docker/cp-test.txt": (9.574782s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-747000 ssh -n ha-747000-m03 "sudo cat /home/docker/cp-test_ha-747000-m04_ha-747000-m03.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-747000 ssh -n ha-747000-m03 "sudo cat /home/docker/cp-test_ha-747000-m04_ha-747000-m03.txt": (9.7081257s)
--- PASS: TestMultiControlPlane/serial/CopyFile (627.68s)

TestMultiControlPlane/serial/StopSecondaryNode (70.55s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-747000 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Done: out/minikube-windows-amd64.exe -p ha-747000 node stop m02 -v=7 --alsologtostderr: (34.2254709s)
ha_test.go:369: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-747000 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-747000 status -v=7 --alsologtostderr: exit status 7 (36.3152491s)

-- stdout --
	ha-747000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-747000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-747000-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-747000-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	W0318 12:12:15.563644   13572 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0318 12:12:15.655842   13572 out.go:291] Setting OutFile to fd 1012 ...
	I0318 12:12:15.662002   13572 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 12:12:15.662002   13572 out.go:304] Setting ErrFile to fd 760...
	I0318 12:12:15.662060   13572 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 12:12:15.675582   13572 out.go:298] Setting JSON to false
	I0318 12:12:15.675582   13572 mustload.go:65] Loading cluster: ha-747000
	I0318 12:12:15.675582   13572 notify.go:220] Checking for updates...
	I0318 12:12:15.676342   13572 config.go:182] Loaded profile config "ha-747000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 12:12:15.676342   13572 status.go:255] checking status of ha-747000 ...
	I0318 12:12:15.678551   13572 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-747000 ).state
	I0318 12:12:17.772366   13572 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 12:12:17.772366   13572 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:12:17.772366   13572 status.go:330] ha-747000 host status = "Running" (err=<nil>)
	I0318 12:12:17.772366   13572 host.go:66] Checking if "ha-747000" exists ...
	I0318 12:12:17.773216   13572 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-747000 ).state
	I0318 12:12:19.885773   13572 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 12:12:19.885773   13572 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:12:19.885773   13572 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-747000 ).networkadapters[0]).ipaddresses[0]
	I0318 12:12:22.426910   13572 main.go:141] libmachine: [stdout =====>] : 172.30.135.65
	
	I0318 12:12:22.426910   13572 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:12:22.426910   13572 host.go:66] Checking if "ha-747000" exists ...
	I0318 12:12:22.483851   13572 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0318 12:12:22.483851   13572 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-747000 ).state
	I0318 12:12:24.565111   13572 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 12:12:24.565111   13572 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:12:24.565246   13572 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-747000 ).networkadapters[0]).ipaddresses[0]
	I0318 12:12:27.071048   13572 main.go:141] libmachine: [stdout =====>] : 172.30.135.65
	
	I0318 12:12:27.071795   13572 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:12:27.071846   13572 sshutil.go:53] new ssh client: &{IP:172.30.135.65 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-747000\id_rsa Username:docker}
	I0318 12:12:27.164521   13572 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.6806349s)
	I0318 12:12:27.177474   13572 ssh_runner.go:195] Run: systemctl --version
	I0318 12:12:27.198326   13572 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 12:12:27.227244   13572 kubeconfig.go:125] found "ha-747000" server: "https://172.30.143.254:8443"
	I0318 12:12:27.227244   13572 api_server.go:166] Checking apiserver status ...
	I0318 12:12:27.237855   13572 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 12:12:27.276955   13572 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2182/cgroup
	W0318 12:12:27.293200   13572 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2182/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0318 12:12:27.305785   13572 ssh_runner.go:195] Run: ls
	I0318 12:12:27.314057   13572 api_server.go:253] Checking apiserver healthz at https://172.30.143.254:8443/healthz ...
	I0318 12:12:27.468142   13572 api_server.go:279] https://172.30.143.254:8443/healthz returned 200:
	ok
	I0318 12:12:27.468142   13572 status.go:422] ha-747000 apiserver status = Running (err=<nil>)
	I0318 12:12:27.482137   13572 status.go:257] ha-747000 status: &{Name:ha-747000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0318 12:12:27.482137   13572 status.go:255] checking status of ha-747000-m02 ...
	I0318 12:12:27.483179   13572 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-747000-m02 ).state
	I0318 12:12:29.526969   13572 main.go:141] libmachine: [stdout =====>] : Off
	
	I0318 12:12:29.538389   13572 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:12:29.538389   13572 status.go:330] ha-747000-m02 host status = "Stopped" (err=<nil>)
	I0318 12:12:29.538531   13572 status.go:343] host is not running, skipping remaining checks
	I0318 12:12:29.538531   13572 status.go:257] ha-747000-m02 status: &{Name:ha-747000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0318 12:12:29.538531   13572 status.go:255] checking status of ha-747000-m03 ...
	I0318 12:12:29.539355   13572 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-747000-m03 ).state
	I0318 12:12:31.572949   13572 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 12:12:31.572949   13572 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:12:31.573194   13572 status.go:330] ha-747000-m03 host status = "Running" (err=<nil>)
	I0318 12:12:31.573194   13572 host.go:66] Checking if "ha-747000-m03" exists ...
	I0318 12:12:31.574084   13572 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-747000-m03 ).state
	I0318 12:12:33.615164   13572 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 12:12:33.615164   13572 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:12:33.615164   13572 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-747000-m03 ).networkadapters[0]).ipaddresses[0]
	I0318 12:12:36.057557   13572 main.go:141] libmachine: [stdout =====>] : 172.30.129.111
	
	I0318 12:12:36.062388   13572 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:12:36.062388   13572 host.go:66] Checking if "ha-747000-m03" exists ...
	I0318 12:12:36.072738   13572 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0318 12:12:36.072738   13572 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-747000-m03 ).state
	I0318 12:12:38.109562   13572 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 12:12:38.109627   13572 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:12:38.109801   13572 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-747000-m03 ).networkadapters[0]).ipaddresses[0]
	I0318 12:12:40.535425   13572 main.go:141] libmachine: [stdout =====>] : 172.30.129.111
	
	I0318 12:12:40.535502   13572 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:12:40.535502   13572 sshutil.go:53] new ssh client: &{IP:172.30.129.111 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-747000-m03\id_rsa Username:docker}
	I0318 12:12:40.636471   13572 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.5637002s)
	I0318 12:12:40.650235   13572 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 12:12:40.672909   13572 kubeconfig.go:125] found "ha-747000" server: "https://172.30.143.254:8443"
	I0318 12:12:40.673007   13572 api_server.go:166] Checking apiserver status ...
	I0318 12:12:40.685621   13572 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 12:12:40.722247   13572 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2547/cgroup
	W0318 12:12:40.738811   13572 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2547/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0318 12:12:40.751569   13572 ssh_runner.go:195] Run: ls
	I0318 12:12:40.759814   13572 api_server.go:253] Checking apiserver healthz at https://172.30.143.254:8443/healthz ...
	I0318 12:12:40.768338   13572 api_server.go:279] https://172.30.143.254:8443/healthz returned 200:
	ok
	I0318 12:12:40.770192   13572 status.go:422] ha-747000-m03 apiserver status = Running (err=<nil>)
	I0318 12:12:40.770192   13572 status.go:257] ha-747000-m03 status: &{Name:ha-747000-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0318 12:12:40.770338   13572 status.go:255] checking status of ha-747000-m04 ...
	I0318 12:12:40.770338   13572 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-747000-m04 ).state
	I0318 12:12:42.744093   13572 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 12:12:42.744093   13572 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:12:42.755382   13572 status.go:330] ha-747000-m04 host status = "Running" (err=<nil>)
	I0318 12:12:42.755382   13572 host.go:66] Checking if "ha-747000-m04" exists ...
	I0318 12:12:42.756165   13572 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-747000-m04 ).state
	I0318 12:12:44.759975   13572 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 12:12:44.769884   13572 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:12:44.769960   13572 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-747000-m04 ).networkadapters[0]).ipaddresses[0]
	I0318 12:12:47.177914   13572 main.go:141] libmachine: [stdout =====>] : 172.30.128.97
	
	I0318 12:12:47.188143   13572 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:12:47.188143   13572 host.go:66] Checking if "ha-747000-m04" exists ...
	I0318 12:12:47.205707   13572 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0318 12:12:47.205707   13572 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-747000-m04 ).state
	I0318 12:12:49.217020   13572 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 12:12:49.217020   13572 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:12:49.228328   13572 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-747000-m04 ).networkadapters[0]).ipaddresses[0]
	I0318 12:12:51.595887   13572 main.go:141] libmachine: [stdout =====>] : 172.30.128.97
	
	I0318 12:12:51.605897   13572 main.go:141] libmachine: [stderr =====>] : 
	I0318 12:12:51.605956   13572 sshutil.go:53] new ssh client: &{IP:172.30.128.97 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-747000-m04\id_rsa Username:docker}
	I0318 12:12:51.715374   13572 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.509634s)
	I0318 12:12:51.726258   13572 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 12:12:51.751084   13572 status.go:257] ha-747000-m04 status: &{Name:ha-747000-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (70.55s)
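The status probe in the log above checks disk pressure by running `df -h /var | awk 'NR==2{print $5}'` over SSH and keeping field 5 of the second line (the `Use%` column). A minimal sketch of that extraction, replayed in Python over a made-up `df` output (the filesystem and sizes are illustrative, not from this run):

```python
# Sketch: what `df -h /var | awk 'NR==2{print $5}'` extracts.
# NR==2 selects the second line; $5 is the fifth whitespace-separated
# field, i.e. the Use% column for the /var filesystem.
SAMPLE_DF_OUTPUT = """\
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda1        17G  2.9G   14G  18% /var
"""

def used_percent(df_output: str) -> str:
    second_line = df_output.splitlines()[1]   # awk NR==2
    return second_line.split()[4]             # awk $5

print(used_percent(SAMPLE_DF_OUTPUT))  # -> 18%
```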

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (20.15s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
ha_test.go:390: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (20.1424643s)
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (20.15s)

                                                
                                    
TestImageBuild/serial/Setup (191.61s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-windows-amd64.exe start -p image-612300 --driver=hyperv
E0318 12:18:45.281903   13424 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-611000\client.crt: The system cannot find the path specified.
E0318 12:20:42.074913   13424 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-611000\client.crt: The system cannot find the path specified.
E0318 12:21:13.049321   13424 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-209500\client.crt: The system cannot find the path specified.
image_test.go:69: (dbg) Done: out/minikube-windows-amd64.exe start -p image-612300 --driver=hyperv: (3m11.6111477s)
--- PASS: TestImageBuild/serial/Setup (191.61s)

                                                
                                    
TestImageBuild/serial/NormalBuild (9.18s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal -p image-612300
image_test.go:78: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal -p image-612300: (9.1770803s)
--- PASS: TestImageBuild/serial/NormalBuild (9.18s)

                                                
                                    
TestImageBuild/serial/BuildWithBuildArg (8.9s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-612300
image_test.go:99: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-612300: (8.8951599s)
--- PASS: TestImageBuild/serial/BuildWithBuildArg (8.90s)

                                                
                                    
TestImageBuild/serial/BuildWithDockerIgnore (7.82s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-612300
image_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-612300: (7.8163837s)
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (7.82s)

                                                
                                    
TestImageBuild/serial/BuildWithSpecifiedDockerfile (7.55s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-612300
image_test.go:88: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-612300: (7.5525297s)
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (7.55s)

                                                
                                    
TestJSONOutput/start/Command (234.21s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe start -p json-output-362700 --output=json --user=testUser --memory=2200 --wait=true --driver=hyperv
E0318 12:22:36.251043   13424 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-209500\client.crt: The system cannot find the path specified.
E0318 12:25:42.083302   13424 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-611000\client.crt: The system cannot find the path specified.
E0318 12:26:13.047495   13424 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-209500\client.crt: The system cannot find the path specified.
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe start -p json-output-362700 --output=json --user=testUser --memory=2200 --wait=true --driver=hyperv: (3m54.2124436s)
--- PASS: TestJSONOutput/start/Command (234.21s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (7.61s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe pause -p json-output-362700 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe pause -p json-output-362700 --output=json --user=testUser: (7.6080657s)
--- PASS: TestJSONOutput/pause/Command (7.61s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (7.42s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p json-output-362700 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe unpause -p json-output-362700 --output=json --user=testUser: (7.4172431s)
--- PASS: TestJSONOutput/unpause/Command (7.42s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (33.77s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe stop -p json-output-362700 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe stop -p json-output-362700 --output=json --user=testUser: (33.7735338s)
--- PASS: TestJSONOutput/stop/Command (33.77s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (1.45s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-windows-amd64.exe start -p json-output-error-610500 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p json-output-error-610500 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (280.8008ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"a4364523-b744-4356-b960-e3516214cd2b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-610500] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4046 Build 19045.4046","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"5cf76077-36d0-4ed1-a2c2-b7c56aa59171","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=C:\\Users\\jenkins.minikube3\\minikube-integration\\kubeconfig"}}
	{"specversion":"1.0","id":"ba87d814-a4cb-4494-91ef-91a0c19a8b7d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"26f2bbf3-fadc-46e7-85bd-720eaf423e91","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube"}}
	{"specversion":"1.0","id":"9d221de6-c47b-40f1-9119-34a36342837d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18429"}}
	{"specversion":"1.0","id":"e22a306f-092d-4de5-b174-64f310f88ff3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"37816962-f20f-4cee-ae91-cb676ab68a25","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on windows/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
** stderr ** 
	W0318 12:27:29.983595   12084 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "json-output-error-610500" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p json-output-error-610500
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p json-output-error-610500: (1.1703107s)
--- PASS: TestErrorJSONOutput (1.45s)
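With `--output=json`, minikube emits one CloudEvents envelope per line, as captured in the stdout block above; the test distinguishes event kinds by the `type` field (`io.k8s.sigs.minikube.step`, `...info`, `...error`). A sketch of filtering that stream for the error event, using an abridged copy of the error line from this run:

```python
import json

# Abridged from the TestErrorJSONOutput stdout above: a CloudEvents
# envelope of type io.k8s.sigs.minikube.error.
sample = ('{"specversion":"1.0","id":"37816962-f20f-4cee-ae91-cb676ab68a25",'
          '"source":"https://minikube.sigs.k8s.io/",'
          '"type":"io.k8s.sigs.minikube.error",'
          '"datacontenttype":"application/json",'
          '"data":{"exitcode":"56","name":"DRV_UNSUPPORTED_OS",'
          '"message":"The driver \'fail\' is not supported on windows/amd64"}}')

def find_error(lines):
    """Return the data payload of the first error event, if any."""
    for line in lines:
        event = json.loads(line)
        if event["type"] == "io.k8s.sigs.minikube.error":
            return event["data"]
    return None

err = find_error([sample])
print(err["name"], err["exitcode"])  # DRV_UNSUPPORTED_OS 56
```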

                                                
                                    
TestMainNoArgs (0.23s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-windows-amd64.exe
--- PASS: TestMainNoArgs (0.23s)

                                                
                                    
TestMinikubeProfile (509.09s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p first-408300 --driver=hyperv
minikube_profile_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p first-408300 --driver=hyperv: (3m9.3886897s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p second-408300 --driver=hyperv
E0318 12:30:42.080556   13424 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-611000\client.crt: The system cannot find the path specified.
E0318 12:31:13.053991   13424 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-209500\client.crt: The system cannot find the path specified.
minikube_profile_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p second-408300 --driver=hyperv: (3m14.8862669s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe profile first-408300
minikube_profile_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe profile list -ojson
minikube_profile_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe profile list -ojson: (21.2188176s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe profile second-408300
minikube_profile_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe profile list -ojson
minikube_profile_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe profile list -ojson: (20.9685599s)
helpers_test.go:175: Cleaning up "second-408300" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p second-408300
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p second-408300: (41.3804337s)
helpers_test.go:175: Cleaning up "first-408300" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p first-408300
E0318 12:35:25.304822   13424 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-611000\client.crt: The system cannot find the path specified.
E0318 12:35:42.090293   13424 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-611000\client.crt: The system cannot find the path specified.
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p first-408300: (40.3494385s)
--- PASS: TestMinikubeProfile (509.09s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (141.21s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-1-611000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=hyperv
E0318 12:36:13.066100   13424 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-209500\client.crt: The system cannot find the path specified.
mount_start_test.go:98: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-1-611000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=hyperv: (2m20.1973884s)
--- PASS: TestMountStart/serial/StartWithMountFirst (141.21s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (8.82s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-1-611000 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-1-611000 ssh -- ls /minikube-host: (8.8141072s)
--- PASS: TestMountStart/serial/VerifyMountFirst (8.82s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (142.92s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-2-611000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=hyperv
E0318 12:39:16.259429   13424 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-209500\client.crt: The system cannot find the path specified.
E0318 12:40:42.086425   13424 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-611000\client.crt: The system cannot find the path specified.
mount_start_test.go:98: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-2-611000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=hyperv: (2m21.909978s)
--- PASS: TestMountStart/serial/StartWithMountSecond (142.92s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (9.18s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-611000 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-2-611000 ssh -- ls /minikube-host: (9.1764362s)
--- PASS: TestMountStart/serial/VerifyMountSecond (9.18s)

                                                
                                    
TestMountStart/serial/DeleteFirst (26.3s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-windows-amd64.exe delete -p mount-start-1-611000 --alsologtostderr -v=5
E0318 12:41:13.053837   13424 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-209500\client.crt: The system cannot find the path specified.
pause_test.go:132: (dbg) Done: out/minikube-windows-amd64.exe delete -p mount-start-1-611000 --alsologtostderr -v=5: (26.2987637s)
--- PASS: TestMountStart/serial/DeleteFirst (26.30s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (8.89s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-611000 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-2-611000 ssh -- ls /minikube-host: (8.8917142s)
--- PASS: TestMountStart/serial/VerifyMountPostDelete (8.89s)

                                                
                                    
TestMountStart/serial/Stop (25.12s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-windows-amd64.exe stop -p mount-start-2-611000
mount_start_test.go:155: (dbg) Done: out/minikube-windows-amd64.exe stop -p mount-start-2-611000: (25.1185766s)
--- PASS: TestMountStart/serial/Stop (25.12s)

                                                
                                    
TestMountStart/serial/RestartStopped (111.66s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-2-611000
mount_start_test.go:166: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-2-611000: (1m50.6615491s)
--- PASS: TestMountStart/serial/RestartStopped (111.66s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (9.19s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-611000 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-2-611000 ssh -- ls /minikube-host: (9.1875608s)
--- PASS: TestMountStart/serial/VerifyMountPostStop (9.19s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (406.26s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-894400 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=hyperv
E0318 12:45:42.097575   13424 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-611000\client.crt: The system cannot find the path specified.
E0318 12:46:13.061270   13424 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-209500\client.crt: The system cannot find the path specified.
E0318 12:50:42.088413   13424 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-611000\client.crt: The system cannot find the path specified.
multinode_test.go:96: (dbg) Done: out/minikube-windows-amd64.exe start -p multinode-894400 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=hyperv: (6m23.2481112s)
multinode_test.go:102: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-894400 status --alsologtostderr
E0318 12:51:13.064073   13424 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-209500\client.crt: The system cannot find the path specified.
multinode_test.go:102: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-894400 status --alsologtostderr: (23.0073964s)
--- PASS: TestMultiNode/serial/FreshStart2Nodes (406.26s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (9.09s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-894400 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-894400 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-894400 -- rollout status deployment/busybox: (3.1241738s)
multinode_test.go:505: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-894400 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-894400 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-894400 -- exec busybox-5b5d89c9d6-8btgf -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-894400 -- exec busybox-5b5d89c9d6-8btgf -- nslookup kubernetes.io: (1.8929752s)
multinode_test.go:536: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-894400 -- exec busybox-5b5d89c9d6-c2997 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-894400 -- exec busybox-5b5d89c9d6-8btgf -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-894400 -- exec busybox-5b5d89c9d6-c2997 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-894400 -- exec busybox-5b5d89c9d6-8btgf -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-894400 -- exec busybox-5b5d89c9d6-c2997 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (9.09s)

                                                
                                    
TestMultiNode/serial/AddNode (217.97s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-windows-amd64.exe node add -p multinode-894400 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-windows-amd64.exe node add -p multinode-894400 -v 3 --alsologtostderr: (3m3.6730533s)
multinode_test.go:127: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-894400 status --alsologtostderr
E0318 12:55:42.103199   13424 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-611000\client.crt: The system cannot find the path specified.
E0318 12:55:56.280979   13424 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-209500\client.crt: The system cannot find the path specified.
multinode_test.go:127: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-894400 status --alsologtostderr: (34.2939765s)
--- PASS: TestMultiNode/serial/AddNode (217.97s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.18s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-894400 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.18s)

                                                
                                    
TestMultiNode/serial/ProfileList (11.8s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
multinode_test.go:143: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (11.7938448s)
--- PASS: TestMultiNode/serial/ProfileList (11.80s)

                                                
                                    
TestMultiNode/serial/CopyFile (346.78s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-894400 status --output json --alsologtostderr
E0318 12:56:13.065175   13424 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-209500\client.crt: The system cannot find the path specified.
multinode_test.go:184: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-894400 status --output json --alsologtostderr: (34.1669416s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-894400 cp testdata\cp-test.txt multinode-894400:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-894400 cp testdata\cp-test.txt multinode-894400:/home/docker/cp-test.txt: (9.1414177s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-894400 ssh -n multinode-894400 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-894400 ssh -n multinode-894400 "sudo cat /home/docker/cp-test.txt": (9.0338618s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-894400 cp multinode-894400:/home/docker/cp-test.txt C:\Users\jenkins.minikube3\AppData\Local\Temp\TestMultiNodeserialCopyFile2306069937\001\cp-test_multinode-894400.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-894400 cp multinode-894400:/home/docker/cp-test.txt C:\Users\jenkins.minikube3\AppData\Local\Temp\TestMultiNodeserialCopyFile2306069937\001\cp-test_multinode-894400.txt: (9.2221018s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-894400 ssh -n multinode-894400 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-894400 ssh -n multinode-894400 "sudo cat /home/docker/cp-test.txt": (9.1333873s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-894400 cp multinode-894400:/home/docker/cp-test.txt multinode-894400-m02:/home/docker/cp-test_multinode-894400_multinode-894400-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-894400 cp multinode-894400:/home/docker/cp-test.txt multinode-894400-m02:/home/docker/cp-test_multinode-894400_multinode-894400-m02.txt: (15.8953259s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-894400 ssh -n multinode-894400 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-894400 ssh -n multinode-894400 "sudo cat /home/docker/cp-test.txt": (9.0512858s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-894400 ssh -n multinode-894400-m02 "sudo cat /home/docker/cp-test_multinode-894400_multinode-894400-m02.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-894400 ssh -n multinode-894400-m02 "sudo cat /home/docker/cp-test_multinode-894400_multinode-894400-m02.txt": (9.0466712s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-894400 cp multinode-894400:/home/docker/cp-test.txt multinode-894400-m03:/home/docker/cp-test_multinode-894400_multinode-894400-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-894400 cp multinode-894400:/home/docker/cp-test.txt multinode-894400-m03:/home/docker/cp-test_multinode-894400_multinode-894400-m03.txt: (15.7278929s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-894400 ssh -n multinode-894400 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-894400 ssh -n multinode-894400 "sudo cat /home/docker/cp-test.txt": (9.0775218s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-894400 ssh -n multinode-894400-m03 "sudo cat /home/docker/cp-test_multinode-894400_multinode-894400-m03.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-894400 ssh -n multinode-894400-m03 "sudo cat /home/docker/cp-test_multinode-894400_multinode-894400-m03.txt": (9.0162815s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-894400 cp testdata\cp-test.txt multinode-894400-m02:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-894400 cp testdata\cp-test.txt multinode-894400-m02:/home/docker/cp-test.txt: (8.992196s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-894400 ssh -n multinode-894400-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-894400 ssh -n multinode-894400-m02 "sudo cat /home/docker/cp-test.txt": (8.9796075s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-894400 cp multinode-894400-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube3\AppData\Local\Temp\TestMultiNodeserialCopyFile2306069937\001\cp-test_multinode-894400-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-894400 cp multinode-894400-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube3\AppData\Local\Temp\TestMultiNodeserialCopyFile2306069937\001\cp-test_multinode-894400-m02.txt: (9.0878549s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-894400 ssh -n multinode-894400-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-894400 ssh -n multinode-894400-m02 "sudo cat /home/docker/cp-test.txt": (9.1687598s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-894400 cp multinode-894400-m02:/home/docker/cp-test.txt multinode-894400:/home/docker/cp-test_multinode-894400-m02_multinode-894400.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-894400 cp multinode-894400-m02:/home/docker/cp-test.txt multinode-894400:/home/docker/cp-test_multinode-894400-m02_multinode-894400.txt: (15.8757545s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-894400 ssh -n multinode-894400-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-894400 ssh -n multinode-894400-m02 "sudo cat /home/docker/cp-test.txt": (9.0890415s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-894400 ssh -n multinode-894400 "sudo cat /home/docker/cp-test_multinode-894400-m02_multinode-894400.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-894400 ssh -n multinode-894400 "sudo cat /home/docker/cp-test_multinode-894400-m02_multinode-894400.txt": (9.0080725s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-894400 cp multinode-894400-m02:/home/docker/cp-test.txt multinode-894400-m03:/home/docker/cp-test_multinode-894400-m02_multinode-894400-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-894400 cp multinode-894400-m02:/home/docker/cp-test.txt multinode-894400-m03:/home/docker/cp-test_multinode-894400-m02_multinode-894400-m03.txt: (15.8480209s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-894400 ssh -n multinode-894400-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-894400 ssh -n multinode-894400-m02 "sudo cat /home/docker/cp-test.txt": (9.1096818s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-894400 ssh -n multinode-894400-m03 "sudo cat /home/docker/cp-test_multinode-894400-m02_multinode-894400-m03.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-894400 ssh -n multinode-894400-m03 "sudo cat /home/docker/cp-test_multinode-894400-m02_multinode-894400-m03.txt": (9.0803545s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-894400 cp testdata\cp-test.txt multinode-894400-m03:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-894400 cp testdata\cp-test.txt multinode-894400-m03:/home/docker/cp-test.txt: (9.0622293s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-894400 ssh -n multinode-894400-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-894400 ssh -n multinode-894400-m03 "sudo cat /home/docker/cp-test.txt": (9.0614365s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-894400 cp multinode-894400-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube3\AppData\Local\Temp\TestMultiNodeserialCopyFile2306069937\001\cp-test_multinode-894400-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-894400 cp multinode-894400-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube3\AppData\Local\Temp\TestMultiNodeserialCopyFile2306069937\001\cp-test_multinode-894400-m03.txt: (9.0697748s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-894400 ssh -n multinode-894400-m03 "sudo cat /home/docker/cp-test.txt"
E0318 13:00:42.102095   13424 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-611000\client.crt: The system cannot find the path specified.
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-894400 ssh -n multinode-894400-m03 "sudo cat /home/docker/cp-test.txt": (9.1030376s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-894400 cp multinode-894400-m03:/home/docker/cp-test.txt multinode-894400:/home/docker/cp-test_multinode-894400-m03_multinode-894400.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-894400 cp multinode-894400-m03:/home/docker/cp-test.txt multinode-894400:/home/docker/cp-test_multinode-894400-m03_multinode-894400.txt: (15.9175946s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-894400 ssh -n multinode-894400-m03 "sudo cat /home/docker/cp-test.txt"
E0318 13:01:13.065233   13424 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-209500\client.crt: The system cannot find the path specified.
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-894400 ssh -n multinode-894400-m03 "sudo cat /home/docker/cp-test.txt": (8.9621909s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-894400 ssh -n multinode-894400 "sudo cat /home/docker/cp-test_multinode-894400-m03_multinode-894400.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-894400 ssh -n multinode-894400 "sudo cat /home/docker/cp-test_multinode-894400-m03_multinode-894400.txt": (9.1225923s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-894400 cp multinode-894400-m03:/home/docker/cp-test.txt multinode-894400-m02:/home/docker/cp-test_multinode-894400-m03_multinode-894400-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-894400 cp multinode-894400-m03:/home/docker/cp-test.txt multinode-894400-m02:/home/docker/cp-test_multinode-894400-m03_multinode-894400-m02.txt: (15.7349689s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-894400 ssh -n multinode-894400-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-894400 ssh -n multinode-894400-m03 "sudo cat /home/docker/cp-test.txt": (8.9574315s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-894400 ssh -n multinode-894400-m02 "sudo cat /home/docker/cp-test_multinode-894400-m03_multinode-894400-m02.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-894400 ssh -n multinode-894400-m02 "sudo cat /home/docker/cp-test_multinode-894400-m03_multinode-894400-m02.txt": (9.0092789s)
--- PASS: TestMultiNode/serial/CopyFile (346.78s)

                                                
                                    
TestMultiNode/serial/StopNode (73.68s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-894400 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-894400 node stop m03: (23.8540647s)
multinode_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-894400 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-894400 status: exit status 7 (24.8892853s)

                                                
                                                
-- stdout --
	multinode-894400
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-894400-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-894400-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0318 13:02:22.822027   11560 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
multinode_test.go:261: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-894400 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-894400 status --alsologtostderr: exit status 7 (24.9387632s)

                                                
                                                
-- stdout --
	multinode-894400
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-894400-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-894400-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0318 13:02:47.702206    5484 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0318 13:02:47.786241    5484 out.go:291] Setting OutFile to fd 728 ...
	I0318 13:02:47.787240    5484 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:02:47.787240    5484 out.go:304] Setting ErrFile to fd 720...
	I0318 13:02:47.787240    5484 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:02:47.802579    5484 out.go:298] Setting JSON to false
	I0318 13:02:47.802673    5484 mustload.go:65] Loading cluster: multinode-894400
	I0318 13:02:47.802673    5484 notify.go:220] Checking for updates...
	I0318 13:02:47.803468    5484 config.go:182] Loaded profile config "multinode-894400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 13:02:47.803534    5484 status.go:255] checking status of multinode-894400 ...
	I0318 13:02:47.804371    5484 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-894400 ).state
	I0318 13:02:49.857504    5484 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 13:02:49.857602    5484 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:02:49.857688    5484 status.go:330] multinode-894400 host status = "Running" (err=<nil>)
	I0318 13:02:49.857688    5484 host.go:66] Checking if "multinode-894400" exists ...
	I0318 13:02:49.858509    5484 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-894400 ).state
	I0318 13:02:51.906763    5484 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 13:02:51.907182    5484 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:02:51.907268    5484 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-894400 ).networkadapters[0]).ipaddresses[0]
	I0318 13:02:54.381204    5484 main.go:141] libmachine: [stdout =====>] : 172.30.129.141
	
	I0318 13:02:54.381204    5484 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:02:54.381204    5484 host.go:66] Checking if "multinode-894400" exists ...
	I0318 13:02:54.393395    5484 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0318 13:02:54.393395    5484 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-894400 ).state
	I0318 13:02:56.440817    5484 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 13:02:56.440817    5484 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:02:56.440817    5484 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-894400 ).networkadapters[0]).ipaddresses[0]
	I0318 13:02:58.873028    5484 main.go:141] libmachine: [stdout =====>] : 172.30.129.141
	
	I0318 13:02:58.873028    5484 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:02:58.873768    5484 sshutil.go:53] new ssh client: &{IP:172.30.129.141 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\multinode-894400\id_rsa Username:docker}
	I0318 13:02:58.970059    5484 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.5765213s)
	I0318 13:02:58.982644    5484 ssh_runner.go:195] Run: systemctl --version
	I0318 13:02:59.002629    5484 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 13:02:59.027307    5484 kubeconfig.go:125] found "multinode-894400" server: "https://172.30.129.141:8443"
	I0318 13:02:59.027390    5484 api_server.go:166] Checking apiserver status ...
	I0318 13:02:59.038017    5484 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:02:59.072464    5484 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2193/cgroup
	W0318 13:02:59.088206    5484 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2193/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0318 13:02:59.100479    5484 ssh_runner.go:195] Run: ls
	I0318 13:02:59.108240    5484 api_server.go:253] Checking apiserver healthz at https://172.30.129.141:8443/healthz ...
	I0318 13:02:59.116850    5484 api_server.go:279] https://172.30.129.141:8443/healthz returned 200:
	ok
	I0318 13:02:59.116850    5484 status.go:422] multinode-894400 apiserver status = Running (err=<nil>)
	I0318 13:02:59.116850    5484 status.go:257] multinode-894400 status: &{Name:multinode-894400 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0318 13:02:59.117160    5484 status.go:255] checking status of multinode-894400-m02 ...
	I0318 13:02:59.117953    5484 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-894400-m02 ).state
	I0318 13:03:01.173852    5484 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 13:03:01.173852    5484 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:03:01.173852    5484 status.go:330] multinode-894400-m02 host status = "Running" (err=<nil>)
	I0318 13:03:01.174465    5484 host.go:66] Checking if "multinode-894400-m02" exists ...
	I0318 13:03:01.175146    5484 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-894400-m02 ).state
	I0318 13:03:03.286699    5484 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 13:03:03.286905    5484 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:03:03.286983    5484 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-894400-m02 ).networkadapters[0]).ipaddresses[0]
	I0318 13:03:05.783672    5484 main.go:141] libmachine: [stdout =====>] : 172.30.140.66
	
	I0318 13:03:05.784084    5484 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:03:05.784084    5484 host.go:66] Checking if "multinode-894400-m02" exists ...
	I0318 13:03:05.797649    5484 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0318 13:03:05.798177    5484 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-894400-m02 ).state
	I0318 13:03:07.878145    5484 main.go:141] libmachine: [stdout =====>] : Running
	
	I0318 13:03:07.878337    5484 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:03:07.878413    5484 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-894400-m02 ).networkadapters[0]).ipaddresses[0]
	I0318 13:03:10.332376    5484 main.go:141] libmachine: [stdout =====>] : 172.30.140.66
	
	I0318 13:03:10.333288    5484 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:03:10.333457    5484 sshutil.go:53] new ssh client: &{IP:172.30.140.66 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\multinode-894400-m02\id_rsa Username:docker}
	I0318 13:03:10.428928    5484 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.6298596s)
	I0318 13:03:10.440543    5484 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 13:03:10.473049    5484 status.go:257] multinode-894400-m02 status: &{Name:multinode-894400-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0318 13:03:10.473117    5484 status.go:255] checking status of multinode-894400-m03 ...
	I0318 13:03:10.473867    5484 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-894400-m03 ).state
	I0318 13:03:12.495649    5484 main.go:141] libmachine: [stdout =====>] : Off
	
	I0318 13:03:12.496263    5484 main.go:141] libmachine: [stderr =====>] : 
	I0318 13:03:12.496263    5484 status.go:330] multinode-894400-m03 host status = "Stopped" (err=<nil>)
	I0318 13:03:12.496263    5484 status.go:343] host is not running, skipping remaining checks
	I0318 13:03:12.496263    5484 status.go:257] multinode-894400-m03 status: &{Name:multinode-894400-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (73.68s)

                                                
                                    
TestMultiNode/serial/StartAfterStop (177.83s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-894400 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-894400 node start m03 -v=7 --alsologtostderr: (2m23.6531299s)
multinode_test.go:290: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-894400 status -v=7 --alsologtostderr
E0318 13:05:42.107140   13424 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-611000\client.crt: The system cannot find the path specified.
multinode_test.go:290: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-894400 status -v=7 --alsologtostderr: (34.006691s)
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (177.83s)

                                                
                                    
TestPreload (450.35s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p test-preload-675500 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperv --kubernetes-version=v1.24.4
E0318 13:20:42.115121   13424 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-611000\client.crt: The system cannot find the path specified.
E0318 13:21:13.085285   13424 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-209500\client.crt: The system cannot find the path specified.
preload_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p test-preload-675500 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperv --kubernetes-version=v1.24.4: (3m31.8527827s)
preload_test.go:52: (dbg) Run:  out/minikube-windows-amd64.exe -p test-preload-675500 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-windows-amd64.exe -p test-preload-675500 image pull gcr.io/k8s-minikube/busybox: (7.8678463s)
preload_test.go:58: (dbg) Run:  out/minikube-windows-amd64.exe stop -p test-preload-675500
preload_test.go:58: (dbg) Done: out/minikube-windows-amd64.exe stop -p test-preload-675500: (37.0327603s)
preload_test.go:66: (dbg) Run:  out/minikube-windows-amd64.exe start -p test-preload-675500 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=hyperv
preload_test.go:66: (dbg) Done: out/minikube-windows-amd64.exe start -p test-preload-675500 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=hyperv: (2m27.1426094s)
preload_test.go:71: (dbg) Run:  out/minikube-windows-amd64.exe -p test-preload-675500 image list
preload_test.go:71: (dbg) Done: out/minikube-windows-amd64.exe -p test-preload-675500 image list: (6.9101985s)
helpers_test.go:175: Cleaning up "test-preload-675500" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p test-preload-675500
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p test-preload-675500: (39.508273s)
--- PASS: TestPreload (450.35s)

                                                
                                    
TestScheduledStopWindows (322.66s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-windows-amd64.exe start -p scheduled-stop-614900 --memory=2048 --driver=hyperv
E0318 13:25:25.351940   13424 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-611000\client.crt: The system cannot find the path specified.
E0318 13:25:42.106287   13424 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-611000\client.crt: The system cannot find the path specified.
E0318 13:26:13.080152   13424 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-209500\client.crt: The system cannot find the path specified.
scheduled_stop_test.go:128: (dbg) Done: out/minikube-windows-amd64.exe start -p scheduled-stop-614900 --memory=2048 --driver=hyperv: (3m10.1831508s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-windows-amd64.exe stop -p scheduled-stop-614900 --schedule 5m
scheduled_stop_test.go:137: (dbg) Done: out/minikube-windows-amd64.exe stop -p scheduled-stop-614900 --schedule 5m: (10.3278951s)
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.TimeToStop}} -p scheduled-stop-614900 -n scheduled-stop-614900
scheduled_stop_test.go:191: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.TimeToStop}} -p scheduled-stop-614900 -n scheduled-stop-614900: exit status 1 (10.0151747s)

                                                
                                                
** stderr ** 
	W0318 13:28:35.704883    2544 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
scheduled_stop_test.go:191: status error: exit status 1 (may be ok)
scheduled_stop_test.go:54: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p scheduled-stop-614900 -- sudo systemctl show minikube-scheduled-stop --no-page
scheduled_stop_test.go:54: (dbg) Done: out/minikube-windows-amd64.exe ssh -p scheduled-stop-614900 -- sudo systemctl show minikube-scheduled-stop --no-page: (8.9914621s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-windows-amd64.exe stop -p scheduled-stop-614900 --schedule 5s
scheduled_stop_test.go:137: (dbg) Done: out/minikube-windows-amd64.exe stop -p scheduled-stop-614900 --schedule 5s: (10.2141331s)
E0318 13:29:16.313421   13424 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-209500\client.crt: The system cannot find the path specified.
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-windows-amd64.exe status -p scheduled-stop-614900
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status -p scheduled-stop-614900: exit status 7 (2.195831s)

                                                
                                                
-- stdout --
	scheduled-stop-614900
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0318 13:30:04.944895    7848 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p scheduled-stop-614900 -n scheduled-stop-614900
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p scheduled-stop-614900 -n scheduled-stop-614900: exit status 7 (2.3192515s)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
** stderr ** 
	W0318 13:30:07.138632   10156 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-614900" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p scheduled-stop-614900
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p scheduled-stop-614900: (28.4012682s)
--- PASS: TestScheduledStopWindows (322.66s)
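The scheduled-stop sequence the test drives can be replayed by hand. A sketch using the profile name and durations from the run above, with plain `minikube` standing in for the tree-local binary (assumes a Hyper-V host with a running profile):

```shell
# Schedule a stop 5 minutes out, then check the countdown and the systemd unit behind it.
minikube stop -p scheduled-stop-614900 --schedule 5m
minikube status --format='{{.TimeToStop}}' -p scheduled-stop-614900
minikube ssh -p scheduled-stop-614900 -- sudo systemctl show minikube-scheduled-stop --no-page

# Re-schedule with a 5s window and let it fire.
minikube stop -p scheduled-stop-614900 --schedule 5s
sleep 10
minikube status -p scheduled-stop-614900   # exit status 7 once the host is Stopped
```

Exit status 7 from `minikube status` indicates a stopped host rather than an error, which is why the test treats it as "may be ok".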

                                                
                                    
TestRunningBinaryUpgrade (1046.52s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  C:\Users\jenkins.minikube3\AppData\Local\Temp\minikube-v1.26.0.243191047.exe start -p running-upgrade-692300 --memory=2200 --vm-driver=hyperv
E0318 13:30:42.110488   13424 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-611000\client.crt: The system cannot find the path specified.
E0318 13:31:13.076063   13424 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-209500\client.crt: The system cannot find the path specified.
version_upgrade_test.go:120: (dbg) Done: C:\Users\jenkins.minikube3\AppData\Local\Temp\minikube-v1.26.0.243191047.exe start -p running-upgrade-692300 --memory=2200 --vm-driver=hyperv: (8m14.2451165s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-windows-amd64.exe start -p running-upgrade-692300 --memory=2200 --alsologtostderr -v=1 --driver=hyperv
version_upgrade_test.go:130: (dbg) Done: out/minikube-windows-amd64.exe start -p running-upgrade-692300 --memory=2200 --alsologtostderr -v=1 --driver=hyperv: (7m56.3492059s)
helpers_test.go:175: Cleaning up "running-upgrade-692300" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p running-upgrade-692300
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p running-upgrade-692300: (1m14.95271s)
--- PASS: TestRunningBinaryUpgrade (1046.52s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.38s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-692300 --no-kubernetes --kubernetes-version=1.20 --driver=hyperv
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p NoKubernetes-692300 --no-kubernetes --kubernetes-version=1.20 --driver=hyperv: exit status 14 (376.6993ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-692300] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4046 Build 19045.4046
	  - KUBECONFIG=C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube3\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=18429
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0318 13:30:37.903532    2764 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.38s)
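The failure mode this test asserts is easy to hit interactively. A minimal sketch, with the profile and driver taken from the run above:

```shell
# Mixing --no-kubernetes with an explicit version fails fast (exit status 14, MK_USAGE):
minikube start -p NoKubernetes-692300 --no-kubernetes --kubernetes-version=1.20 --driver=hyperv

# If the version is pinned in global config rather than on the command line,
# unset it before retrying, as the error message suggests:
minikube config unset kubernetes-version
minikube start -p NoKubernetes-692300 --no-kubernetes --driver=hyperv
```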

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.79s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.79s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (837.53s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  C:\Users\jenkins.minikube3\AppData\Local\Temp\minikube-v1.26.0.394112580.exe start -p stopped-upgrade-258700 --memory=2200 --vm-driver=hyperv
E0318 13:36:13.086540   13424 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-209500\client.crt: The system cannot find the path specified.
version_upgrade_test.go:183: (dbg) Done: C:\Users\jenkins.minikube3\AppData\Local\Temp\minikube-v1.26.0.394112580.exe start -p stopped-upgrade-258700 --memory=2200 --vm-driver=hyperv: (6m42.5725045s)
version_upgrade_test.go:192: (dbg) Run:  C:\Users\jenkins.minikube3\AppData\Local\Temp\minikube-v1.26.0.394112580.exe -p stopped-upgrade-258700 stop
version_upgrade_test.go:192: (dbg) Done: C:\Users\jenkins.minikube3\AppData\Local\Temp\minikube-v1.26.0.394112580.exe -p stopped-upgrade-258700 stop: (34.5647471s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-windows-amd64.exe start -p stopped-upgrade-258700 --memory=2200 --alsologtostderr -v=1 --driver=hyperv
version_upgrade_test.go:198: (dbg) Done: out/minikube-windows-amd64.exe start -p stopped-upgrade-258700 --memory=2200 --alsologtostderr -v=1 --driver=hyperv: (6m40.3859934s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (837.53s)
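The upgrade path exercised here reduces to three commands: provision with the old release, stop, then restart the same profile with the binary under test. A sketch with illustrative binary names (the run above uses a temp-dir copy of minikube v1.26.0 as the old release):

```shell
# 1) Create the cluster with the old release.
./minikube-v1.26.0.exe start -p stopped-upgrade-258700 --memory=2200 --vm-driver=hyperv
# 2) Stop it with that same release.
./minikube-v1.26.0.exe -p stopped-upgrade-258700 stop
# 3) Restart the stopped profile with the new binary; it must adopt the existing VM.
./minikube-windows-amd64.exe start -p stopped-upgrade-258700 --memory=2200 --alsologtostderr -v=1 --driver=hyperv
```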

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (8.83s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-windows-amd64.exe logs -p stopped-upgrade-258700
version_upgrade_test.go:206: (dbg) Done: out/minikube-windows-amd64.exe logs -p stopped-upgrade-258700: (8.8290107s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (8.83s)

                                                
                                    

Test skip (32/210)

TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.4/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.4/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.4/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.4/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/binaries (0.00s)

                                                
                                    
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:498: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false windows amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
TestKVMDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/DashboardCmd (300.03s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-windows-amd64.exe dashboard --url --port 36195 -p functional-611000 --alsologtostderr -v=1]
functional_test.go:912: output didn't produce a URL
functional_test.go:906: (dbg) stopping [out/minikube-windows-amd64.exe dashboard --url --port 36195 -p functional-611000 --alsologtostderr -v=1] ...
helpers_test.go:502: unable to terminate pid 4788: Access is denied.
--- SKIP: TestFunctional/parallel/DashboardCmd (300.03s)

                                                
                                    
TestFunctional/parallel/DryRun (5.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-611000 --dry-run --memory 250MB --alsologtostderr --driver=hyperv
functional_test.go:970: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-611000 --dry-run --memory 250MB --alsologtostderr --driver=hyperv: exit status 1 (5.0348616s)

                                                
                                                
-- stdout --
	* [functional-611000] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4046 Build 19045.4046
	  - KUBECONFIG=C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube3\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=18429
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true

                                                
                                                
-- /stdout --
** stderr ** 
	W0318 11:37:56.019480    4976 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0318 11:37:56.131332    4976 out.go:291] Setting OutFile to fd 940 ...
	I0318 11:37:56.132286    4976 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 11:37:56.132286    4976 out.go:304] Setting ErrFile to fd 912...
	I0318 11:37:56.132286    4976 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 11:37:56.160467    4976 out.go:298] Setting JSON to false
	I0318 11:37:56.167125    4976 start.go:129] hostinfo: {"hostname":"minikube3","uptime":310453,"bootTime":1710451423,"procs":200,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4046 Build 19045.4046","kernelVersion":"10.0.19045.4046 Build 19045.4046","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"a0f355d5-8b6e-4346-9071-73232725d096"}
	W0318 11:37:56.167125    4976 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0318 11:37:56.171317    4976 out.go:177] * [functional-611000] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4046 Build 19045.4046
	I0318 11:37:56.174362    4976 notify.go:220] Checking for updates...
	I0318 11:37:56.177982    4976 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	I0318 11:37:56.180837    4976 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 11:37:56.188514    4976 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube3\minikube-integration\.minikube
	I0318 11:37:56.193109    4976 out.go:177]   - MINIKUBE_LOCATION=18429
	I0318 11:37:56.197037    4976 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 11:37:56.206048    4976 config.go:182] Loaded profile config "functional-611000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 11:37:56.207580    4976 driver.go:392] Setting default libvirt URI to qemu:///system

                                                
                                                
** /stderr **
functional_test.go:976: skipping this error on HyperV till this issue is solved https://github.com/kubernetes/minikube/issues/9785
--- SKIP: TestFunctional/parallel/DryRun (5.05s)

                                                
                                    
TestFunctional/parallel/InternationalLanguage (5.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-611000 --dry-run --memory 250MB --alsologtostderr --driver=hyperv
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-611000 --dry-run --memory 250MB --alsologtostderr --driver=hyperv: exit status 1 (5.0317022s)

                                                
                                                
-- stdout --
	* [functional-611000] minikube v1.32.0 sur Microsoft Windows 10 Enterprise N 10.0.19045.4046 Build 19045.4046
	  - KUBECONFIG=C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube3\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=18429
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true

                                                
                                                
-- /stdout --
** stderr ** 
	W0318 11:37:51.008247    4320 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0318 11:37:51.107257    4320 out.go:291] Setting OutFile to fd 1008 ...
	I0318 11:37:51.108081    4320 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 11:37:51.108081    4320 out.go:304] Setting ErrFile to fd 932...
	I0318 11:37:51.108167    4320 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 11:37:51.131115    4320 out.go:298] Setting JSON to false
	I0318 11:37:51.135544    4320 start.go:129] hostinfo: {"hostname":"minikube3","uptime":310448,"bootTime":1710451423,"procs":200,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4046 Build 19045.4046","kernelVersion":"10.0.19045.4046 Build 19045.4046","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"a0f355d5-8b6e-4346-9071-73232725d096"}
	W0318 11:37:51.135544    4320 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0318 11:37:51.141694    4320 out.go:177] * [functional-611000] minikube v1.32.0 sur Microsoft Windows 10 Enterprise N 10.0.19045.4046 Build 19045.4046
	I0318 11:37:51.147129    4320 notify.go:220] Checking for updates...
	I0318 11:37:51.150085    4320 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	I0318 11:37:51.152951    4320 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 11:37:51.157347    4320 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube3\minikube-integration\.minikube
	I0318 11:37:51.161415    4320 out.go:177]   - MINIKUBE_LOCATION=18429
	I0318 11:37:51.166168    4320 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 11:37:51.175252    4320 config.go:182] Loaded profile config "functional-611000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 11:37:51.176888    4320 driver.go:392] Setting default libvirt URI to qemu:///system

                                                
                                                
** /stderr **
functional_test.go:1021: skipping this error on HyperV till this issue is solved https://github.com/kubernetes/minikube/issues/9785
--- SKIP: TestFunctional/parallel/InternationalLanguage (5.04s)

                                                
                                    
TestFunctional/parallel/MountCmd (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd
=== PAUSE TestFunctional/parallel/MountCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MountCmd
functional_test_mount_test.go:57: skipping: mount broken on hyperv: https://github.com/kubernetes/minikube/issues/5029
--- SKIP: TestFunctional/parallel/MountCmd (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:230: The test WaitService/IngressIP is broken on hyperv https://github.com/kubernetes/minikube/issues/8381
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:258: skipping: access direct test is broken on windows: https://github.com/kubernetes/minikube/issues/8304
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

                                                
                                                
=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

                                                
                                    
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
TestScheduledStopUnix (0s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:76: test only runs on unix
--- SKIP: TestScheduledStopUnix (0.00s)

                                                
                                    
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:39: skipping due to https://github.com/kubernetes/minikube/issues/14232
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    